diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/OqmWRIsvA4O/Initial_manuscript_md/Initial_manuscript.md b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/OqmWRIsvA4O/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..edce921dfc05c6146eaf139d83cafd62e6ce76f7
--- /dev/null
+++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/OqmWRIsvA4O/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,155 @@
+# Tactile Sensing and its Role in Learning and Deploying Robotic Grasping Controllers
+
+Alexander Koenig ${}^{1,2}$ , Zixi Liu ${}^{2}$ , Lucas Janson ${}^{3}$ and Robert Howe ${}^{2,4}$
+
+Abstract— A long-standing question in robot hand design is how accurate tactile sensing must be. This paper uses simulated tactile signals and the reinforcement learning (RL) framework to study the sensing needs in grasping systems. Our first experiment investigates the need for rich tactile sensing in the rewards of RL-based grasp refinement algorithms for multi-fingered robotic hands. We systematically integrate different levels of tactile data into the rewards using analytic grasp stability metrics. We find that combining information on contact positions, normals, and forces in the reward yields the highest average success rates of ${95.4}\%$ for cuboids, ${93.1}\%$ for cylinders, and 62.3% for spheres across wrist position errors between 0 and 7 centimeters and rotational errors between 0 and 14 degrees. This contact-based reward outperforms a non-tactile binary-reward baseline by ${42.9}\%$ . Our follow-up experiment shows that, when training with tactile-enabled rewards, the tactile information in the control policy's state vector can be reduced drastically, with only a slight performance decrease of at most ${6.6}\%$ when the state contains no tactile sensing at all. Since policies do not require access to the reward signal at test time, our work implies that models trained on tactile-enabled hands are deployable to robotic hands with a smaller sensor suite, potentially reducing cost dramatically.
+
+## I. INTRODUCTION
+
+Tactile sensing provides information about local object geometry, surface properties, contact forces, and grasp stability [1]. Hence, tactile sensors can be a valuable tool in contact-rich scenarios such as robotic grasp refinement [2], where a grasping system recovers from calibration errors and where computer vision approaches often face limitations due to the occlusion of contact events. However, tactile sensors can be expensive and fragile hardware components. For cost-effective robotic hand design, it is therefore essential to understand when robot hands need precise tactile sensing and how accurate it must be to achieve good grasping performance.
+
+A few research papers have investigated the effect of tactile sensor resolution on grasp success. Wan and Howe [3] found that reduced spatial resolution of tactile sensors negatively impacts grasp success, since inaccuracies in contact position and normal sensing can influence grasp stability predictions. Other works analyzed the effect of contact sensor resolution on grasp performance in the context of reinforcement learning. In simulated experiments, Merzić et al. [4] found that contact feedback in a policy's state vector improves the performance of RL-based grasping controllers, and [5], [6] presented similar results for in-hand manipulation. However, [5], [6] also concluded that models trained with binary contact signals perform equally well as models that receive accurate normal force information. Furthermore, [5], [6] found that tactile resolution (92 vs. 16 sensors) has no noticeable effect on the performance and sample efficiency of reinforcement-learned manipulation controllers.
+
+
+
+Fig. 1: The hypothesized workflow for training and deploying RL-controlled grasping systems. First, train a policy $\pi \left( {\mathbf{a} \mid \mathbf{s}}\right)$ on a hand ${H}_{f}$ with a full tactile sensor suite (e.g., contact position, normal and force sensors) where the grasp quality metrics are available as a reward ${r}_{f}$ to learn a task, but only provide a subset of the available contact data in the state vector ${\mathbf{s}}_{r}$ . Afterwards, deploy the policy to many structurally similar hands ${H}_{r}$ with a reduced sensor set to save cost.
+
+In this paper, we use accurate tactile signals from simulation and the reinforcement learning framework to explore the tactile sensing needs of robotic grasping systems. RL algorithms aim to produce a policy $\pi \left( {\mathbf{a} \mid \mathbf{s}}\right)$ that outputs actions $\mathbf{a}$ given state information $\mathbf{s}$ such that the cumulative reward signal $r$ is maximized. The reward function is a critical part of every RL algorithm [7]. While the previous work in [4], [5], [6] only studied the tactile resolution in the policy's state, our first contribution investigates the impact of tactile information in the reward signal. We propose a unified framework to systematically incorporate different levels of tactile information from robotic hands into a reward signal via analytic grasp stability metrics. We conduct grasp refinement experiments on two types of quality metrics discussed in Section II: $\epsilon$ [8], calculated from contact positions and normals, and a contact-force-based reward $\delta$ . In Section III, we estimate the relevance of contact position, normal, and force sensing for the reward signal by comparing the individual and combined performance of $\epsilon$ and $\delta$ .
+
+---
+
+This material is based upon work supported by the US National Science Foundation under Grant No. IIS-1924984 and by the German Academic Exchange Service. An extended paper including the material in this abstract has been submitted for publication.
+
+${}^{1}$ Department of Informatics, Technical University of Munich
+
+${}^{2}$ School of Engineering and Applied Sciences, Harvard University
+
+${}^{3}$ Department of Statistics, Harvard University
+
+${}^{4}$ RightHand Robotics, Inc., 237 Washington St, Somerville, MA 02143 USA. Robert Howe is the corresponding author: howe@seas.harvard.edu.
+
+---
+
+Calculating grasp stability metrics requires costly tactile sensing capabilities on physical grippers. However, the reward signal is only required during the training of policies but not while testing, which suggests that sensing needs in both stages could be different. We hypothesize in Fig. 1 that policies trained with grasp stability metrics on a robotic hand ${H}_{f}$ with a full tactile sensor suite are deployable to structurally similar but more affordable hands ${H}_{r}$ with reduced tactile sensing at a small performance decrease. Hence, our second experiment in Section IV gradually decreases tactile resolution in the state vector to find realistic training and deployment workflows for grasping algorithms.
+
+## II. GRASP STABILITY METRICS
+
+## A. Largest-minimum resisted forces and torques
+
+Mirtich and Canny [8] define two quality metrics ${\epsilon }_{f}$ and ${\epsilon }_{\tau }$ that measure a grasp’s ability to resist unit forces and torques, respectively. As discussed in [9], the friction cone constrains the contact force ${\mathbf{f}}_{i}$ at each contact $i$ . It is discretized using $m$ edges ${\mathbf{f}}_{i, j}$ . The set of forces ${\mathcal{W}}_{f}$ that the contacts can apply to the object is ${\mathcal{W}}_{f} = \operatorname{ConvexHull}\left( {\mathop{\bigcup }\limits_{{i = 1}}^{{n}_{c}}\left\{ {{\mathbf{f}}_{i,1},\ldots ,{\mathbf{f}}_{i, m}}\right\} }\right)$ , where ${n}_{c}$ is the number of contacts. Finally, the quality metric ${\epsilon }_{f} = \mathop{\min }\limits_{{\mathbf{f} \in \partial {\mathcal{W}}_{f}}}\parallel \mathbf{f}\parallel$ is the shortest distance from the origin to the nearest hyper-plane of ${\mathcal{W}}_{f}$ . Hence, the metric defines a lower bound on the resisted force in all directions.
+
+This concept is easily extended to the torque domain. The reaction torque ${\mathbf{\tau }}_{i, j}$ resulting from a friction cone edge ${\mathbf{f}}_{i, j}$ is ${\mathbf{\tau }}_{i, j} = {\mathbf{r}}_{i} \times {\mathbf{f}}_{i, j}$ , where ${\mathbf{r}}_{i}$ is a vector pointing from the object’s center of mass to the contact point ${\mathbf{p}}_{i}$ . Further, ${\mathcal{W}}_{\tau } = \operatorname{ConvexHull}\left( {\mathop{\bigcup }\limits_{{i = 1}}^{{n}_{c}}\left\{ {{\mathbf{\tau }}_{i,1},\ldots ,{\mathbf{\tau }}_{i, m}}\right\} }\right)$ is the set of resisted torques. The metric ${\epsilon }_{\tau } = \mathop{\min }\limits_{{\mathbf{\tau } \in \partial {\mathcal{W}}_{\tau }}}\parallel \mathbf{\tau }\parallel$ evaluates the grasp's quality by identifying the magnitude of the largest-minimum resisted torque.
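As a concrete illustration, the two metrics above can be sketched in a few lines of Python (our sketch, not the paper's implementation): discretize each friction cone into $m$ unit-magnitude edge forces, build the convex hulls of the resulting forces and torques with SciPy, and read off the shortest origin-to-facet distance. The values $\mu = 0.5$ and $m = 8$ are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

def cone_edges(n, mu=0.5, m=8):
    """Discretize the friction cone at a contact with inward unit normal n
    into m unit-magnitude edge forces (one common convention)."""
    n = n / np.linalg.norm(n)
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(n, a); t1 /= np.linalg.norm(t1)   # tangent basis (t1, t2)
    t2 = np.cross(n, t1)
    phis = 2.0 * np.pi * np.arange(m) / m
    edges = np.array([n + mu * (np.cos(p) * t1 + np.sin(p) * t2) for p in phis])
    return edges / np.linalg.norm(edges, axis=1, keepdims=True)

def largest_min_wrench(points):
    """Shortest distance from the origin to the boundary of ConvexHull(points)."""
    hull = ConvexHull(points)
    # each row of hull.equations is [a, b, c, d] with a*x + b*y + c*z + d <= 0
    # for interior points; if the origin is interior, its facet distance is -d
    d = -hull.equations[:, -1]
    return float(d.min()) if np.all(d > 0) else 0.0  # 0: origin outside the hull

def epsilon_f_tau(positions, normals, mu=0.5, m=8):
    """(eps_f, eps_tau) for contact positions r_i and inward normals n_i."""
    forces = np.vstack([cone_edges(n, mu, m) for n in normals])
    torques = np.vstack([np.cross(r, cone_edges(n, mu, m))
                         for r, n in zip(positions, normals)])
    return largest_min_wrench(forces), largest_min_wrench(torques)
```

For a three-finger grasp of a sphere (contacts spaced 120° apart with inward normals), both metrics come out strictly positive, reflecting force closure.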
+
+## B. Minimum distance to the friction cone
+
+The quality metrics ${\epsilon }_{f}$ and ${\epsilon }_{\tau }$ analyze the forces that each contact can theoretically exert on the object. However, these metrics do not consider the actual contact forces that the contacts apply to the object. To this end, we define two force-based quality metrics ${\delta }_{\text{cur }}$ and ${\delta }_{\text{task }}$ .
+
+
+
+Fig. 2: Grasp with current contact forces ${\mathbf{f}}_{i,{cur}}$ and tangential force margins ${\overline{\mathbf{f}}}_{i,{cur}}$ to the friction cones.
+
+Similar to Buss et al. [10], we measure grasp stability in terms of how far the contact forces are from the friction limits. Fig. 2 shows a grasp with the current contact forces ${\mathbf{f}}_{i,{cur}}$ and the tangential force margins ${\overline{\mathbf{f}}}_{i,{cur}}$ . The vectors ${\overline{\mathbf{f}}}_{i,{cur}}$ are forces in the tangential direction that point from ${\mathbf{f}}_{i,{cur}}$ to the closest point on the friction cone, thereby identifying the direction in which the contact can take the least tangential force before slipping. A grasp with large tangential force margins ${\overline{\mathbf{f}}}_{i,{cur}}$ is desirable since the contacts are less prone to sliding when an object wrench is applied. Hence, the metric ${\delta }_{\text{cur }}$ measures the average magnitude of the safety margins $\begin{Vmatrix}{\overline{\mathbf{f}}}_{i,{cur}}\end{Vmatrix}$ across all contacts $i$ .
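A minimal sketch of ${\delta }_{\text{cur }}$ follows, assuming a friction coefficient $\mu = 0.5$ and contact normals given alongside the measured forces (the function names are ours):

```python
import numpy as np

def tangential_margin(f, n, mu=0.5):
    """||f-bar_{i,cur}||: the extra tangential force a contact can take
    before slipping, i.e., the distance of f to the friction cone boundary
    measured along the tangential direction."""
    n = n / np.linalg.norm(n)
    f_n = float(f @ n)                    # normal component of the contact force
    f_t = np.linalg.norm(f - f_n * n)     # tangential magnitude
    return max(mu * f_n - f_t, 0.0)       # 0 once the force leaves the cone

def delta_cur(forces, normals, mu=0.5):
    """Average safety margin across all contacts."""
    return float(np.mean([tangential_margin(f, n, mu)
                          for f, n in zip(forces, normals)]))
```

For example, a force of magnitude 2 along the normal has margin $\mu \cdot 2 = 1$, while the same force with a tangential component of 1 sits exactly on the cone and has margin 0.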
+
+The set of wrenches that the grasp must resist during task execution (e.g., object weight or wrenches from expected collisions) can often be estimated. Our task-oriented metric ${\delta }_{\text{task }}$ evaluates whether the current contact forces of a grasp are suitable to balance the anticipated task wrenches. We calculate the additional contact force ${\mathbf{f}}_{i,{add}}$ that each contact $i$ must react with to compensate a task wrench $\mathbf{w}$ with ${\mathbf{G}}^{ + }\mathbf{w} = {\left( \begin{array}{llll} {\mathbf{f}}_{1,{add}}^{T} & {\mathbf{f}}_{2,{add}}^{T} & \ldots & {\mathbf{f}}_{{n}_{c},{add}}^{T} \end{array}\right) }^{T}$ , where ${\mathbf{G}}^{ + }$ is the pseudoinverse of the grasp matrix as defined in [11]. The task contact force is ${\mathbf{f}}_{i,\text{ task }} = {\mathbf{f}}_{i,\text{ cur }} + {\mathbf{f}}_{i,\text{ add }}$ for each contact. Finally, ${\delta }_{\text{task }}$ computes the average magnitude of the tangential force margins $\begin{Vmatrix}{\overline{\mathbf{f}}}_{i,\text{ task }}\end{Vmatrix}$ of the task contact forces ${\mathbf{f}}_{i,\text{ task }}$ to the friction cone.
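The computation of ${\delta }_{\text{task }}$ can be sketched as follows, assuming frictional point contacts with all quantities expressed in a common object frame; the grasp matrix follows the standard construction in [11], and $\mu = 0.5$ is an illustrative value:

```python
import numpy as np

def skew(r):
    """Cross-product matrix so that skew(r) @ f == np.cross(r, f)."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def grasp_matrix(positions):
    """6 x 3n_c grasp matrix G mapping stacked contact forces to an object
    wrench (force on top, torque about the center of mass below)."""
    return np.hstack([np.vstack([np.eye(3), skew(r)]) for r in positions])

def cone_margin(f, n, mu=0.5):
    """Tangential margin of force f to the friction cone with inward normal n."""
    n = n / np.linalg.norm(n)
    f_n = float(f @ n)
    f_t = np.linalg.norm(f - f_n * n)
    return max(mu * f_n - f_t, 0.0)

def delta_task(f_cur, positions, normals, w, mu=0.5):
    """Average tangential margin of f_task = f_cur + f_add, f_add = G^+ w."""
    f_add = (np.linalg.pinv(grasp_matrix(positions)) @ w).reshape(-1, 3)
    return float(np.mean([cone_margin(fc + fa, n, mu)
                          for fc, fa, n in zip(f_cur, f_add, normals)]))
```

For a symmetric two-finger squeeze, a zero task wrench leaves the full margin, while a downward task force uses up part of each contact's tangential budget.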
+
+## III. TACTILE SENSING AND THE REWARD FUNCTION
+
+## A. Train and Test Dataset
+
+Each training sample consists of a tuple $(O, E)$ , where $O$ is the object and $E$ is the wrist pose error, sampled uniformly before every episode. There are three object types (cuboid, cylinder, and sphere) with a mass $\in \left\lbrack {{0.1},{0.4}}\right\rbrack \mathrm{{kg}}$ and randomly sampled sizes. Fig. 3 visualizes the minimum and maximum object dimensions. The wrist pose error $E$ consists of a translational and a rotational error. We uniformly sample each component of the translational error $\left( {{e}_{x},{e}_{y},{e}_{z}}\right)$ from $\left\lbrack {-5,5}\right\rbrack \mathrm{{cm}}$ and each component of the rotational error $\left( {{e}_{\xi },{e}_{\eta },{e}_{\zeta }}\right)$ from $\left\lbrack {-{10},{10}}\right\rbrack$ deg.
+
+
+
+Fig. 3: Minimum and maximum object sizes. We place the spheres on a concave mount to prevent rolling.
+
+We define 8 different wrist error cases for the test dataset. Let $d\left( {a, b, c}\right) = \sqrt{{a}^{2} + {b}^{2} + {c}^{2}}$ be the L2 norm of the variables $(a, b, c)$ . Table I shows the wrist error cases, where case A corresponds to no error and case $\mathrm{H}$ to the maximum wrist error. The test dataset consists of 30 random objects $O$ (10 cuboids, 10 cylinders, and 10 spheres). Per object $O$ , we randomly generate the eight wrist error cases $\{ A, B,\ldots , H\}$ from Table I. Hence, we run ${30} \times 8 = {240}$ grasping experiments to test one model.
+
+TABLE I: Wrist error cases
+
+| Wrist Error Case | A | B | C | D | E | F | G | H |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| $d\left( {{e}_{x},{e}_{y},{e}_{z}}\right)$ in cm | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
+| $d\left( {{e}_{\xi },{e}_{\eta },{e}_{\zeta }}\right)$ in deg | 0 | 2 | 4 | 6 | 8 | 10 | 12 | 14 |
+
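A hypothetical sampler for these test-time wrist errors scales random directions so their L2 norms match Table I; the paper does not specify how the test errors are generated, so this is only one plausible construction:

```python
import numpy as np

# Table I: case letter -> (translational L2 norm in cm, rotational L2 norm in deg)
CASES = {c: (i, 2 * i) for i, c in enumerate("ABCDEFGH")}

def sample_wrist_error(case, rng=None):
    """Draw random error directions and scale them so that d(e_x, e_y, e_z)
    and d(e_xi, e_eta, e_zeta) match the requested case (our assumption)."""
    rng = np.random.default_rng() if rng is None else rng
    cm, deg = CASES[case]

    def scaled(mag):
        v = rng.normal(size=3)            # isotropic random direction
        return mag * v / np.linalg.norm(v)

    return scaled(cm), scaled(deg)        # (translation in cm, rotation in deg)
```

Case A yields zero vectors, and case H yields errors with norms of exactly 7 cm and 14 deg.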
+
+
+Fig. 4: Overview of one algorithm episode. (A) Initialization of hand and object. (B) We split the grasp refinement algorithm into four stages and compare four reward frameworks: (1) $\epsilon$ and $\delta$ , (2) only $\delta$ , (3) only $\epsilon$ , and (4) the non-tactile binary reward baseline $\beta$ . The weighting factors of ${\alpha }_{1} = 5$ and ${\alpha }_{2} = {0.5}$ were empirically determined.
+
+## B. State and Action Space
+
+The state vector $\mathbf{s}$ consists of 7 joint positions (1 finger separation, 3 proximal bending, and 3 distal bending degrees of freedom) and 7 contact cues (3 on the proximal links, 3 on the distal links, and 1 on the palm) that include contact position, contact normal, and contact force, each with $3\left( {x, y, z}\right)$ components. The dimension of the state vector is $\mathbf{s} \in {\mathbb{R}}^{7 + 7 \times \left( {3 \times 3}\right) = {70}}$ . Note that we do not assume any information about the object (e.g., object pose, geometry, or mass) in the state vector. The contact normals and positions are provided in the wrist frame, while the contact forces are represented in the contact frame. The action vector $\mathbf{a}$ consists of 3 finger position increments, 3 wrist position increments, and 3 wrist rotation increments; its dimension is $\mathbf{a} \in {\mathbb{R}}^{3 + 3 + 3 = 9}$ . The policy ${\pi }_{\mathbf{\theta }}$ is parametrized by a neural network with weights $\mathbf{\theta }$ : a multilayer perceptron with four layers (70, 256, 256, 9). We use the stable-baselines3 [12] implementation of the soft actor-critic (SAC) [13] algorithm and train for 25,000 steps.
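The dimension bookkeeping and the (70, 256, 256, 9) architecture can be checked with a small sketch; the NumPy forward pass below only mirrors the layer sizes — the actual training uses stable-baselines3's SAC, and the ReLU hidden activations and tanh output squash are illustrative choices:

```python
import numpy as np

STATE_DIM = 7 + 7 * (3 * 3)  # 7 joints + 7 links x {position, normal, force} x (x, y, z) = 70
ACTION_DIM = 3 + 3 + 3       # finger, wrist position, and wrist rotation increments = 9

def init_mlp(sizes=(STATE_DIM, 256, 256, ACTION_DIM), seed=0):
    """Random weights and zero biases for the four-layer perceptron."""
    rng = np.random.default_rng(seed)
    return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def policy_forward(params, s):
    """Deterministic forward pass; SAC's actual policy is a squashed Gaussian,
    so this is only a shape check, not the trained controller."""
    x = s
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers
    return np.tanh(x)               # bounded position/rotation increments
```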
+
+## C. Experimental Setup
+
+We simulate the three-fingered ReFlex TakkTile hand (RightHand Robotics, Somerville, MA USA) using a custom Gazebo [14] simulation environment and the DART [15] physics engine. We model the under-actuated distal flexure [16] as a rigid link with two revolute joints (one between the proximal and one between the distal finger link). Further, we approximate the finger geometries as cuboids to reduce computational load. Our source code is available at github.com/axkoenig/grasp_refinement.
+
+
+
+Fig. 5: Test results for reward frameworks.
+
+Fig. 4 shows an overview of one training episode. In stage (A), we initialize the world. Here, we randomly generate a new object and wrist error tuple $(O, E)$ (or select one from the test dataset). We assume a computer vision system and a grasp planner that produces a sideways-facing grasp at a fixed $5\mathrm{\;{cm}}$ offset from the object’s center of mass. We add the wrist pose error $E$ to this grasp pose to simulate calibration errors and close the fingers of the robotic hand in the erroneous wrist pose until the fingers make contact with the object. Then, the grasp refinement episode (B) starts. We divide each episode into three stages, as displayed in Fig. 4. Firstly, the policy ${\pi }_{\mathbf{\theta }}$ refines the grasp. Afterward, the agent lifts the object by ${15}\mathrm{\;{cm}}$ via hard-coded increments to the wrist’s $z$ -position and holds the object in place to test the grasp’s stability. The policy ${\pi }_{\theta }$ can update the wrist and finger positions while lifting and holding. The control frequency of the policy in all stages is $3\mathrm{{Hz}}$ , while the update frequency of the low-level proportional-derivative (PD) controllers in the wrist and the fingers is ${100}\mathrm{\;{Hz}}$ .
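The interplay of the two control rates can be illustrated with a toy 1-DoF example, where a fixed position target stands in for the policy's 3 Hz updates and a unit-inertia joint stands in for the hand; the PD gains are illustrative, not the paper's:

```python
POLICY_HZ, PD_HZ = 3, 100          # policy at 3 Hz, low-level PD loops at 100 Hz
DT = 1.0 / PD_HZ

def pd_torque(q, dq, q_des, kp=100.0, kd=20.0):
    """Proportional-derivative control law; gains chosen for critical damping
    of a unit-inertia joint (kp = w^2, kd = 2w with w = 10 rad/s)."""
    return kp * (q_des - q) - kd * dq

# toy rollout: the policy would update q_des once per outer iteration, and the
# PD loop tracks that target with ~33 inner updates in between
q, dq, q_des = 0.0, 0.0, 1.0
for _ in range(POLICY_HZ):                  # one second of interaction at 3 Hz
    # (a real policy would set q_des here from the current state s)
    for _ in range(PD_HZ // POLICY_HZ):     # 100 // 3 = 33 PD updates
        dq += pd_torque(q, dq, q_des) * DT  # unit inertia: torque = acceleration
        q += dq * DT                        # semi-implicit Euler integration
```

After roughly one second, the critically damped joint has settled close to its target.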
+
+As shown in the table of Fig. 4, we use the analytic grasp stability metrics from Section II as reward functions. We compare the following reward configurations: (1) both $\epsilon$ and $\delta$ , (2) only $\epsilon$ , (3) only $\delta$ , and (4) the baseline $\beta$ . Fig. 4 shows that $\delta$ refers to ${\delta }_{\text{task }}$ in the refine stage, to measure expected grasp stability before lifting, and to ${\delta }_{\text{cur }}$ in the lift and hold stages, to measure current stability. Further, $\epsilon$ is a weighted combination of ${\epsilon }_{f}$ and ${\epsilon }_{\tau }$ . While the $\epsilon$ and $\delta$ , $\epsilon$ -only, and $\delta$ -only frameworks provide stability feedback after every algorithm step, the baseline $\beta$ gives a sparse reward after the holding stage, indicating whether the object is still in the hand (1) or not (0).
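The per-stage reward composition described above can be sketched as follows; the exact placement of the weighting factors ${\alpha }_{1}$ and ${\alpha }_{2}$ (here assumed to weight ${\epsilon }_{f}$ and ${\epsilon }_{\tau }$ inside $\epsilon$ ) is our reading of Fig. 4, and the stage and framework names are ours:

```python
ALPHA_1, ALPHA_2 = 5.0, 0.5  # weighting factors from Fig. 4

def reward(stage, metrics, framework):
    """Per-step reward under one of the four frameworks. Assumed composition:
    eps = a1 * eps_f + a2 * eps_tau, and delta switches from delta_task while
    refining to delta_cur while lifting/holding; "end" marks the post-hold step."""
    eps = ALPHA_1 * metrics["eps_f"] + ALPHA_2 * metrics["eps_tau"]
    delta = metrics["delta_task"] if stage == "refine" else metrics["delta_cur"]
    if framework == "eps_and_delta":
        return eps + delta
    if framework == "eps":
        return eps
    if framework == "delta":
        return delta
    if framework == "beta":  # sparse binary baseline, paid out only after holding
        return float(metrics["object_in_hand"]) if stage == "end" else 0.0
    raise ValueError(framework)
```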
+
+## D. Results and Discussion
+
+For all experiments in this paper, we average over 40 models trained with different seeds for each framework. The error bars in all plots represent $\pm 2$ standard errors. Fig. 5 summarizes the performance on the test dataset. Our main observation is that combining the geometric, force-agnostic grasp stability metric $\epsilon$ with the force-based metric $\delta$ yields the highest average success rate of ${83.6}\%$ across all objects (95.4% for cuboids, 93.1% for cylinders, and 62.3% for spheres) over all wrist errors. The $\epsilon$ and $\delta$ framework outperforms the binary reward framework $\beta$ by ${42.9}\%$ . The p-values for the comparisons ${\mu }_{\epsilon \text{ and }\delta } > {\mu }_{\delta }$ , ${\mu }_{\epsilon \text{ and }\delta } > {\mu }_{\epsilon }$ , and ${\mu }_{\epsilon \text{ and }\delta } > {\mu }_{\beta }$ (where ${\mu }_{x}$ is the mean performance of framework $x$ ) are all $\ll {0.001}$ , so these results are statistically significant. We also notice that the combination of $\epsilon$ and $\delta$ is particularly helpful for spheres: the average performance of all frameworks on spheres is greatly reduced, and the algorithms trained with $\beta$ struggle the most to grasp spheres.
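The reported statistics can be reproduced in form (not in numbers) with a short sketch; the per-seed success rates below are placeholders loosely inspired by Fig. 5, and Welch's t-test is one common choice for comparing two frameworks' means — the paper does not state which test it used:

```python
import numpy as np
from scipy import stats

def mean_and_2se(x):
    """Mean with the +/- 2 standard error half-width used for the error bars."""
    x = np.asarray(x, dtype=float)
    return x.mean(), 2.0 * x.std(ddof=1) / np.sqrt(len(x))

# placeholder per-seed success rates for two frameworks (40 seeds each)
rng = np.random.default_rng(7)
eps_delta = rng.normal(loc=0.836, scale=0.03, size=40)
beta = rng.normal(loc=0.585, scale=0.05, size=40)

# Welch's two-sample t-test: unequal variances, independent seeds
t_stat, p_value = stats.ttest_ind(eps_delta, beta, equal_var=False)
```

With a mean gap this large relative to the seed-to-seed spread, the p-value is far below 0.001, matching the qualitative claim in the text.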
+
+This study investigates the tactile sensing needs in the reward of RL grasping controllers by incorporating highly accurate contact information via analytic grasp stability metrics. The results demonstrate that information about contact positions and normals encoded in $\epsilon$ combines well with the force-based information in the $\delta$ reward. This result motivates building physical robotic hands capable of sensing these types of information. The low success rates for the spheres may be because they can roll and are therefore harder to grasp (cuboids and cylinders move comparatively less when touched by fingers or the palm). The $\beta$ framework performs worst after the defined number of training steps, which is unsurprising because shaped rewards are known to be more sample efficient than sparse rewards [17].
+
+## IV. TACTILE SENSING AND THE STATE VECTOR
+
+## A. Experimental Setup
+
+In a second experiment, we investigate the effect of contact sensing resolution in the state vector on grasp refinement. We compare four contact sensing frameworks. The full contact sensing framework receives the same state vector $\mathbf{s} \in {\mathbb{R}}^{70}$ as in Section III-B. In the normal framework, we only provide the algorithm with the contact normal forces and omit the tangential forces $\left( {\mathbf{s} \in {\mathbb{R}}^{56}}\right)$ . In the binary framework, we only provide a binary signal indicating whether a link is in contact (1) or not (0) $\left( {\mathbf{s} \in {\mathbb{R}}^{56}}\right)$ . Finally, in the none framework, we solely provide the joint positions $\left( {\mathbf{s} \in {\mathbb{R}}^{7}}\right)$ . We adjust the size of the neural network's input layer from Section III-B to match the state vector of each framework. The reward function in these experiments is $\epsilon$ and $\delta$ from Fig. 4. Hence, all contact sensing frameworks receive contact information indirectly via the reward.
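One way to realize the four frameworks is to slice a single full state vector; the layout and the 56-D encodings below are our guesses, chosen only to be consistent with the stated dimensions, not the paper's exact construction:

```python
import numpy as np

def reduce_state(full_state, mode):
    """Hypothetical slicing of the 70-D full state into the four frameworks.
    Assumed layout: 7 joint positions, then 7 per-link blocks of
    [position(3), normal(3), force(3)]; forces are in the contact frame,
    so we treat the force z-component as the normal force (an assumption)."""
    joints, blocks = full_state[:7], full_state[7:].reshape(7, 9)
    pos, nrm, frc = blocks[:, :3], blocks[:, 3:6], blocks[:, 6:9]
    if mode == "full":                                       # 7 + 7*9 = 70
        return full_state
    if mode == "normal":                                     # drop tangential forces
        f_n = frc[:, 2:3]                                    # contact-frame z = normal
        return np.concatenate([joints, np.hstack([pos, nrm, f_n]).ravel()])  # 56
    if mode == "binary":                                     # 1 = link in contact
        flag = (np.linalg.norm(frc, axis=1) > 0).astype(float)[:, None]
        return np.concatenate([joints, np.hstack([pos, nrm, flag]).ravel()])  # 56
    if mode == "none":                                       # joints only
        return joints                                        # 7
    raise ValueError(mode)
```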
+
+## B. Results and Discussion
+
+In Fig. 6, we observe that the frameworks which receive contact feedback (full, normal, binary) outperform the none framework by 6.3%, 6.6%, and 3.7%, respectively. Providing normal force information yields a performance increase of 2.9% compared to the binary framework. However, training with the full contact force vectors only increases the performance by 2.6% compared to the binary framework. As expected, performance decreases for larger wrist errors. The results ${\mu }_{\text{normal }} > {\mu }_{\text{binary }}$ and ${\mu }_{\text{normal }} > {\mu }_{\text{none }}$ are statistically significant (p-values $\ll {0.001}$ ), while the result ${\mu }_{\text{normal }} > {\mu }_{\text{full }}$ is not (p-value 0.2232).
+
+This experiment studies how contact sensing resolution in the policy's state vector relates to grasp success when training with fully contact-informed rewards, thereby probing the viability of our hypothesized training and deployment workflow in Fig. 1. The improvements of the normal force framework over the binary and none frameworks are small. The results suggest that an affordable binary contact sensor suite, or even no contact sensing at all, may be suitable if a small decrease in performance is tolerable. This supports our hypothesis that RL grasping algorithms are deployable to hands with reduced contact sensor resolution at little performance decrease when rich tactile feedback is incorporated at train time. The algorithms trained with the full force vector perform approximately on par with the ones that receive only the normal force information. This could be due to three reasons. (1) The full force framework has the most network parameters and may require longer training times. (2) The model may fail to represent the concept of the friction cone internally; an alternative representation of the tangential forces could be a solution (e.g., providing a margin to the friction cone instead of a tangential force vector). (3) Simulated contact forces are prone to instability [18], especially when simulating robotic grasping [19].
+
+
+
+Fig. 6: Test results for contact sensing frameworks.
+
+## V. CONCLUSION
+
+This paper investigated the importance of tactile signals in the reward and the policy's state vector to identify the tactile sensing needs in RL-based grasping algorithms. We found that rewards incorporating contact positions, normals, and forces are the most powerful optimization objectives for RL grasp refinement controllers. While this tactile information is essential in the reward function, we uncovered that reducing contact sensor resolution in the policy's state vector decreases algorithm performance only by a small amount. This result has implications for the design of physical grippers and their training and deployment workflows.
+
+In future work, we aim to build physical robotic hands with advanced sensing capabilities to calculate grasp metrics. Secondly, we want to test the proposed training and deployment workflow, providing only limited contact information in the state vector and testing the algorithm on other robotic hands.
+
+[1] M. R. Cutkosky and W. Provancher, "Force and tactile sensing," in Springer Handbook of Robotics. Springer, 2016, pp. 717-736.
+
+[2] A. M. Dollar, L. P. Jentoft, J. H. Gao, and R. D. Howe, "Contact sensing and grasping performance of compliant hands," Autonomous Robots, vol. 28, no. 1, pp. 65-75, 2010.
+
+[3] Q. Wan and R. D. Howe, "Modeling the effects of contact sensor resolution on grasp success," IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 1933-1940, 2018.
+
+[4] H. Merzić, M. Bogdanović, D. Kappler, L. Righetti, and J. Bohg, "Leveraging contact forces for learning to grasp," in 2019 International Conference on Robotics and Automation (ICRA), 2019, pp. 3615-3621.
+
+[5] A. Melnik, L. Lach, M. Plappert, T. Korthals, R. Haschke, and H. Ritter, "Tactile sensing and deep reinforcement learning for in-hand manipulation tasks," in IROS Workshop on Autonomous Object Manipulation, 2019.
+
+[6] ——, “Using tactile sensing to improve the sample efficiency and performance of deep deterministic policy gradients for simulated in-hand manipulation tasks," Frontiers in Robotics and AI, vol. 8, p. 57, 2021.
+
+[7] D. Silver, S. Singh, D. Precup, and R. S. Sutton, "Reward is enough," Artificial Intelligence, vol. 299, p. 103535, 2021.
+
+[8] B. Mirtich and J. Canny, "Easily computable optimum grasps in 2-d and 3-d," in Proceedings of the 1994 IEEE International Conference on Robotics and Automation. IEEE, 1994, pp. 739-747.
+
+[9] I. Kao, K. M. Lynch, and J. W. Burdick, "Contact modeling and manipulation," in Springer Handbook of Robotics. Springer, 2016, pp. 931-954.
+
+[10] M. Buss, H. Hashimoto, and J. Moore, "Dextrous hand grasping force optimization," IEEE Transactions on Robotics and Automation, vol. 12, no. 3, pp. 406-418, 1996.
+
+[11] D. Prattichizzo and J. C. Trinkle, "Grasping," in Springer Handbook of Robotics. Springer, 2016, pp. 955-988.
+
+[12] A. Raffin, A. Hill, M. Ernestus, A. Gleave, A. Kanervisto, and N. Dormann, "Stable baselines3," https://github.com/DLR-RM/stable-baselines3, 2019.
+
+[13] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor," in Proceedings of the 35th International Conference on Machine Learning, 2018.
+
+[14] N. Koenig and A. Howard, "Design and use paradigms for gazebo, an open-source multi-robot simulator," in IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, Sep 2004, pp. 2149-2154.
+
+[15] J. Lee, M. X. Grey, S. Ha, T. Kunz, S. Jain, Y. Ye, S. S. Srinivasa, M. Stilman, and C. K. Liu, "Dart: Dynamic animation and robotics toolkit," Journal of Open Source Software, vol. 3, no. 22, p. 500, 2018.
+
+[16] L. U. Odhner, L. P. Jentoft, M. R. Claffee, N. Corson, Y. Tenzer, R. R. Ma, M. Buehler, R. Kohout, R. D. Howe, and A. M. Dollar, "A compliant, underactuated hand for robust manipulation," The International Journal of Robotics Research, vol. 33, no. 5, pp. 736-752, 2014.
+
+[17] A. Y. Ng, D. Harada, and S. Russell, "Policy invariance under reward transformations: Theory and application to reward shaping," in Proceedings of the Sixteenth International Conference on Machine Learning. Morgan Kaufmann, 1999, pp. 278-287.
+
+[18] J. M. Hsu and S. C. Peters, "Extending open dynamics engine for the darpa virtual robotics challenge," in Proceedings of the 4th International Conference on Simulation, Modeling, and Programming for Autonomous Robots - Volume 8810, ser. SIMPAR 2014. Berlin, Heidelberg: Springer-Verlag, 2014, p. 37-48.
+
+[19] J. R. Taylor, E. M. Drumwright, and J. Hsu, "Analysis of grasping failures in multi-rigid body simulations," in 2016 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR), 2016, pp. 295-301.
\ No newline at end of file
diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/OqmWRIsvA4O/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/OqmWRIsvA4O/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..15dd02e9f49db5397687174cff4e8f2c6baff89c
--- /dev/null
+++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/OqmWRIsvA4O/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,123 @@
+§ TACTILE SENSING AND ITS ROLE IN LEARNING AND DEPLOYING ROBOTIC GRASPING CONTROLLERS
+
+Alexander Koenig ${}^{1,2}$ , Zixi Liu ${}^{2}$ , Lucas Janson ${}^{3}$ and Robert Howe ${}^{2,4}$
+
+Abstract- A long-standing question in robot hand design is how accurate tactile sensing must be. This paper uses simulated tactile signals and the reinforcement learning (RL) framework to study the sensing needs in grasping systems. Our first experiment investigates the need for rich tactile sensing in the rewards of RL-based grasp refinement algorithms for multi-fingered robotic hands. We systematically integrate different levels of tactile data into the rewards using analytic grasp stability metrics. We find that combining information on contact positions, normals, and forces in the reward yields the highest average success rates of ${95.4}\%$ for cuboids, ${93.1}\%$ for cylinders, and 62.3% for spheres across wrist position errors between 0 and 7 centimeters and rotational errors between 0 and 14 degrees. This contact-based reward outperforms a non-tactile binary-reward baseline by ${42.9}\%$ . Our follow-up experiment shows that when training with tactile-enabled rewards, the use of tactile information in the control policy's state vector is drastically reducible at only a slight performance decrease of at most ${6.6}\%$ for no tactile sensing in the state. Since policies do not require access to the reward signal at test time, our work implies that models trained on tactile-enabled hands are deployable to robotic hands with a smaller sensor suite, potentially reducing cost dramatically.
+
+§ I. INTRODUCTION
+
+Tactile sensing provides information about local object geometry, surface properties, contact forces, and grasp stability [1]. Hence, tactile sensors can be a valuable tool in contact-rich scenarios such as robotic grasp refinement [2] where a grasping system recovers from calibration errors. Computer vision approaches for grasp refinement often face limitations due to the occlusion of contact events. Tactile sensors can be expensive and fragile hardware components. Hence, for cost-effective robotic hand design, it is essential to understand when robot hands need precise sensing and how accurate it should be to achieve good grasping performance.
+
+A few research papers investigated the effect of tactile sensor resolution on grasp success. Wan et al. [3] found that reduced spatial resolution of tactile sensors negatively impacts grasp success since inaccuracies in contact position and normal sensing can influence grasp stability predictions. Other works analyzed the effect of contact sensor resolution on grasp performance in the context of reinforcement learning. In simulated experiments, Merzić et al. [4] found that contact feedback in a policy's state vector improves the performance of RL-based grasping controllers, and [5], [6] presented similar results for in-hand manipulation. However, [5], [6] also concluded that models trained with binary contact signals perform equally well as models that receive accurate normal force information. Furthermore, [5], [6] found that tactile resolution (92 vs. 16 sensors) has no noticeable effect on performance and sample efficiency of reinforcement learned manipulation controllers.
+
+ < g r a p h i c s >
+
+Fig. 1: The hypothesized workflow for training and deploying RL-controlled grasping systems. First, train a policy $\pi \left( {\mathbf{a} \mid \mathbf{s}}\right)$ on a hand ${H}_{f}$ with a full tactile sensor suite (e.g., contact position, normal and force sensors) where the grasp quality metrics are available as a reward ${r}_{f}$ to learn a task, but only provide a subset of the available contact data in the state vector ${\mathbf{s}}_{r}$ . Afterwards, deploy the policy to many structurally similar hands ${H}_{r}$ with a reduced sensor set to save cost.
+
+In this paper, we use accurate tactile signals from simulation and the reinforcement learning framework to explore the tactile sensing needs of robotic grasping systems. RL algorithms aim to produce a policy $\pi \left( {\mathbf{a} \mid \mathbf{s}}\right)$ that outputs actions $\mathbf{a}$ given state information $\mathbf{s}$ such that the cumulative reward signal $r$ is maximized. The reward function is a critical part of every RL algorithm [7]. While the previous work in [4], [5], [6] only studied tactile resolution in the policy's state, our first contribution investigates the impact of tactile information in the reward signal. We propose a unified framework to systematically incorporate different levels of tactile information from robotic hands into a reward signal via analytic grasp stability metrics. We conduct grasp refinement experiments on the two types of quality metrics discussed in Section II: $\epsilon$ [8], calculated from contact positions and normals, and a contact-force-based reward $\delta$. In Section III, we estimate the relevance of contact position, normal, and force sensing for the reward signal by comparing the individual and combined performance of $\epsilon$ and $\delta$.
+
+This material is based upon work supported by the US National Science Foundation under Grant No. IIS-1924984 and by the German Academic Exchange Service. An extended paper including the material in this abstract has been submitted for publication.
+
+${}^{1}$ Department of Informatics, Technical University of Munich
+
+${}^{2}$ School of Engineering and Applied Sciences, Harvard University
+
+${}^{3}$ Department of Statistics, Harvard University
+
+${}^{4}$ RightHand Robotics, Inc., 237 Washington St, Somerville, MA 02143 USA. Robert Howe is the corresponding author: howe@seas.harvard.edu.
+
+Calculating grasp stability metrics requires costly tactile sensing capabilities on physical grippers. However, the reward signal is only required during the training of policies but not while testing, which suggests that sensing needs in both stages could be different. We hypothesize in Fig. 1 that policies trained with grasp stability metrics on a robotic hand ${H}_{f}$ with a full tactile sensor suite are deployable to structurally similar but more affordable hands ${H}_{r}$ with reduced tactile sensing at a small performance decrease. Hence, our second experiment in Section IV gradually decreases tactile resolution in the state vector to find realistic training and deployment workflows for grasping algorithms.
+
+## II. GRASP STABILITY METRICS
+
+## A. LARGEST-MINIMUM RESISTED FORCES AND TORQUES
+
+Mirtich and Canny [8] define two quality metrics $\epsilon_f$ and $\epsilon_\tau$ that measure a grasp's ability to resist unit forces and torques, respectively. As discussed in [9], the friction cone constrains the contact force $\mathbf{f}_i$ at each contact $i$; the cone is discretized using $m$ edges $\mathbf{f}_{i,j}$. The set of forces $\mathcal{W}_f$ that the contacts can apply to the object is $\mathcal{W}_f = \operatorname{ConvexHull}\left(\bigcup_{i=1}^{n_c} \{\mathbf{f}_{i,1}, \ldots, \mathbf{f}_{i,m}\}\right)$, where $n_c$ is the number of contacts. Finally, the quality metric $\epsilon_f = \min_{\mathbf{f} \in \partial \mathcal{W}_f} \|\mathbf{f}\|$ is the shortest distance from the origin to the nearest hyperplane of $\mathcal{W}_f$. Hence, the metric defines a lower bound on the resisted force in all directions.
+
+This concept extends naturally to the torque domain. The reaction torque $\boldsymbol{\tau}_{i,j}$ resulting from a friction cone edge $\mathbf{f}_{i,j}$ is $\boldsymbol{\tau}_{i,j} = \mathbf{r}_i \times \mathbf{f}_{i,j}$, where $\mathbf{r}_i$ is a vector pointing from the object's center of mass to the contact point $\mathbf{p}_i$. Further, $\mathcal{W}_\tau = \operatorname{ConvexHull}\left(\bigcup_{i=1}^{n_c} \{\boldsymbol{\tau}_{i,1}, \ldots, \boldsymbol{\tau}_{i,m}\}\right)$ is the set of resisted torques. The metric $\epsilon_\tau = \min_{\boldsymbol{\tau} \in \partial \mathcal{W}_\tau} \|\boldsymbol{\tau}\|$ evaluates the grasp's quality as the magnitude of the largest-minimum resisted torque.
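+These metrics can be approximated numerically with a convex hull library. The sketch below is our illustration, not the authors' implementation; the friction coefficient, edge count, and contact layout are assumptions. It computes $\epsilon_f$ from a set of inward contact normals:
+
+```python
+import numpy as np
+from scipy.spatial import ConvexHull
+
+def friction_cone_edges(normal, mu=0.5, m=8):
+    """Discretize the friction cone at a contact with inward normal
+    `normal` into m edge vectors f_{i,j} (unit normal + mu * tangential)."""
+    n = np.asarray(normal, dtype=float)
+    n /= np.linalg.norm(n)
+    helper = np.array([1.0, 0.0, 0.0])
+    if abs(n @ helper) > 0.9:           # pick a helper axis not parallel to n
+        helper = np.array([0.0, 1.0, 0.0])
+    t1 = np.cross(n, helper); t1 /= np.linalg.norm(t1)
+    t2 = np.cross(n, t1)
+    angles = 2.0 * np.pi * np.arange(m) / m
+    return [n + mu * (np.cos(a) * t1 + np.sin(a) * t2) for a in angles]
+
+def epsilon_force(contact_normals, mu=0.5, m=8):
+    """eps_f: distance from the origin to the nearest facet of
+    ConvexHull(union of all cone edges); 0 if force closure fails."""
+    pts = np.array([e for n in contact_normals
+                    for e in friction_cone_edges(n, mu, m)])
+    hull = ConvexHull(pts)
+    # hull.equations rows [a, b] satisfy a.x + b <= 0 inside with |a| = 1,
+    # so -b is the origin-to-facet distance when the origin is enclosed
+    dists = -hull.equations[:, -1]
+    return float(dists.min()) if (dists > 0).all() else 0.0
+```
+
+With six contacts pressing inward along the coordinate axes, the hull encloses the origin and $\epsilon_f > 0$; with contacts on one side of the object only, the origin falls outside the hull and the metric is zero.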
+
+## B. MINIMUM DISTANCE TO THE FRICTION CONE
+
+The quality metrics ${\epsilon }_{f}$ and ${\epsilon }_{\tau }$ analyze the forces that each contact can theoretically exert on the object. However, these metrics do not consider the actual contact forces that the contacts apply to the object. To this end, we define two force-based quality metrics ${\delta }_{\text{ cur }}$ and ${\delta }_{\text{ task }}$ .
+
+
+Fig. 2: Grasp with current contact forces ${\mathbf{f}}_{i,{cur}}$ and tangential force margins ${\overline{\mathbf{f}}}_{i,{cur}}$ to the friction cones.
+
+Similar to Buss et al. [10], we measure grasp stability in terms of how far the contact forces are from the friction limits. Fig. 2 shows a grasp with the current contact forces ${\mathbf{f}}_{i,{cur}}$ and the tangential force margins ${\overline{\mathbf{f}}}_{i,{cur}}$ . The vectors ${\overline{\mathbf{f}}}_{i,{cur}}$ are tangential forces that point from ${\mathbf{f}}_{i,{cur}}$ to the closest point on the friction cone, thereby identifying the direction in which the contact can take the least additional tangential force before slipping. A grasp with large tangential force margins ${\overline{\mathbf{f}}}_{i,{cur}}$ is desirable since the contacts are less prone to sliding when an object wrench is applied. Hence, the metric ${\delta }_{\text{ cur }}$ measures the average magnitude of the safety margins $\begin{Vmatrix}{\overline{\mathbf{f}}}_{i,{cur}}\end{Vmatrix}$ across all contacts $i$ .
+
+The set of wrenches that the grasp must resist during task execution (e.g., the object weight or wrenches from expected collisions) can often be estimated. Our task-oriented metric ${\delta }_{\text{ task }}$ evaluates whether the current contact forces of a grasp are suitable to balance the anticipated task wrenches. We calculate the additional contact force ${\mathbf{f}}_{i,{add}}$ that each contact $i$ must react with to compensate a task wrench $\mathbf{w}$ with ${\mathbf{G}}^{ + }\mathbf{w} = {\left( \begin{array}{llll} {\mathbf{f}}_{1,{add}}^{T} & {\mathbf{f}}_{2,{add}}^{T} & \ldots & {\mathbf{f}}_{{n}_{c},{add}}^{T} \end{array}\right) }^{T}$ , where ${\mathbf{G}}^{ + }$ is the pseudoinverse of the grasp matrix as defined in [11]. The task contact force is ${\mathbf{f}}_{i,\text{ task }} = {\mathbf{f}}_{i,\text{ cur }} + {\mathbf{f}}_{i,\text{ add }}$ for each contact. Finally, ${\delta }_{\text{ task }}$ computes the average magnitude of the tangential force margins $\begin{Vmatrix}{\overline{\mathbf{f}}}_{i,\text{ task }}\end{Vmatrix}$ of the task contact forces ${\mathbf{f}}_{i,\text{ task }}$ to the friction cone.
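+As an illustration, the tangential margin can be scored by measuring, along the tangential direction, how far a contact force sits inside its friction cone. This is a minimal sketch under our own simplifications (the friction coefficient and the scalar-margin approximation are assumptions, not the paper's exact geometry):
+
+```python
+import numpy as np
+
+def tangential_margin(f, normal, mu=0.5):
+    """Magnitude of the tangential force margin of contact force `f`
+    w.r.t. the friction cone with inward normal `normal`: how much
+    extra tangential force the contact can take before slipping."""
+    n = np.asarray(normal, dtype=float)
+    n /= np.linalg.norm(n)
+    f = np.asarray(f, dtype=float)
+    f_n = f @ n                              # normal component
+    f_t = np.linalg.norm(f - f_n * n)        # tangential component magnitude
+    return max(mu * f_n - f_t, 0.0)          # 0 => on or outside the cone
+
+def delta_cur(contact_forces, contact_normals, mu=0.5):
+    """delta_cur: average tangential margin across all contacts."""
+    return float(np.mean([tangential_margin(f, n, mu)
+                          for f, n in zip(contact_forces, contact_normals)]))
+```
+
+For example, a pure normal force of 1 N with $\mu = 0.5$ has a margin of 0.5, while a force on the cone boundary has zero margin.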
+
+## III. TACTILE SENSING AND THE REWARD FUNCTION
+
+## A. TRAIN AND TEST DATASET
+
+Each training sample consists of a tuple $(O, E)$, where $O$ is the object and $E$ is the wrist pose error, sampled uniformly before every episode. There are three object types (cuboid, cylinder, and sphere) with a mass $\in \left\lbrack {{0.1},{0.4}}\right\rbrack \mathrm{{kg}}$ and randomly sampled sizes. Fig. 3 visualizes the minimum and maximum object dimensions. The wrist pose error $E$ consists of a translational and a rotational error. We uniformly sample each component of the translational error $\left( {{e}_{x},{e}_{y},{e}_{z}}\right)$ from $\left\lbrack {-5,5}\right\rbrack \mathrm{{cm}}$ and each component of the rotational error $\left( {{e}_{\xi },{e}_{\eta },{e}_{\zeta }}\right)$ from $\left\lbrack {-{10},{10}}\right\rbrack$ deg.
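+The error sampling described above can be sketched in a few lines (the RNG seed and function name are our own):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+def sample_wrist_error():
+    """Sample a wrist pose error E: each translational component uniform
+    in [-5, 5] cm, each rotational component uniform in [-10, 10] deg."""
+    e_xyz = rng.uniform(-5.0, 5.0, size=3)     # (e_x, e_y, e_z) in cm
+    e_rot = rng.uniform(-10.0, 10.0, size=3)   # (e_xi, e_eta, e_zeta) in deg
+    return e_xyz, e_rot
+```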
+
+
+Fig. 3: Minimum and maximum object sizes. We place the spheres on a concave mount to prevent rolling.
+
+We define 8 wrist error cases for the test dataset. Let $d\left( {a,b,c}\right) = \sqrt{{a}^{2} + {b}^{2} + {c}^{2}}$ be the L2 norm of the variables $(a, b, c)$. Table I shows the wrist error cases, where case A corresponds to no error and case H to the maximum wrist error. The test dataset consists of 30 random objects $O$ (10 cuboids, 10 cylinders, and 10 spheres). Per object $O$, we randomly generate the eight wrist error cases $\{ A,B,\ldots ,H\}$ from Table I. Hence, we run ${30} \times 8 = {240}$ grasping experiments to test one model.
+
+TABLE I: Wrist error cases
+
+| Wrist Error Case | A | B | C | D | E | F | G | H |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| $d\left( {{e}_{x},{e}_{y},{e}_{z}}\right)$ in cm | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
+| $d\left( {{e}_{\xi },{e}_{\eta },{e}_{\zeta }}\right)$ in deg | 0 | 2 | 4 | 6 | 8 | 10 | 12 | 14 |
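+The 240-episode test grid implied by Table I can be enumerated in a short sketch (the object names here are placeholders):
+
+```python
+# Wrist error cases from Table I: case index k gives k cm of translational
+# and 2k deg of rotational error magnitude
+cases = {c: (k, 2 * k) for k, c in enumerate("ABCDEFGH")}
+
+# 10 random objects of each of the three types (names are placeholders)
+objects = [f"{shape}_{i}" for shape in ("cuboid", "cylinder", "sphere")
+           for i in range(10)]
+
+test_set = [(obj, case) for obj in objects for case in cases]
+```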
+
+
+Fig. 4: Overview of one algorithm episode. (A) Initialization of hand and object. (B) We split the grasp refinement algorithm into four stages and compare four reward frameworks: (1) $\epsilon$ and $\delta$, (2) only $\delta$, (3) only $\epsilon$, and (4) the non-tactile binary reward baseline $\beta$. The weighting factors ${\alpha }_{1} = 5$ and ${\alpha }_{2} = {0.5}$ were determined empirically.
+
+## B. STATE AND ACTION SPACE
+
+The state vector $\mathbf{s}$ consists of 7 joint positions (1 finger separation, 3 proximal bending, and 3 distal bending degrees of freedom) and 7 contact cues (3 on the proximal links, 3 on the distal links, and 1 on the palm). Each contact cue comprises a contact position, contact normal, and contact force, which have 3 $(x, y, z)$ components each. The dimension of the state vector is thus $\mathbf{s} \in {\mathbb{R}}^{7 + 7 \times \left( {3 \times 3}\right) = {70}}$. Note that we do not assume any information about the object (e.g., object pose, geometry, or mass) in the state vector. The contact normals and positions are provided in the wrist frame, while the contact forces are represented in the contact frame. The action vector $\mathbf{a}$ consists of 3 finger position increments, 3 wrist position increments, and 3 wrist rotation increments, so $\mathbf{a} \in {\mathbb{R}}^{3 + 3 + 3 = 9}$. The policy ${\pi }_{\mathbf{\theta }}$ is parametrized by a neural network with weights $\mathbf{\theta }$, a multilayer perceptron with four layers (70, 256, 256, 9). We use the Stable-Baselines3 [12] implementation of the soft actor-critic (SAC) [13] algorithm and train for 25000 steps.
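+The dimensionality bookkeeping can be made concrete with a short sketch; the ordering of the stacked cues is our assumption (the paper does not specify the exact layout):
+
+```python
+import numpy as np
+
+N_JOINTS = 7  # 1 finger separation + 3 proximal + 3 distal bending DoFs
+N_LINKS = 7   # 3 proximal links, 3 distal links, and the palm
+
+def build_state(joint_pos, contact_pos, contact_normal, contact_force):
+    """Stack joint positions and per-link contact cues into s in R^70
+    (cue ordering is illustrative)."""
+    assert joint_pos.shape == (N_JOINTS,)
+    for cue in (contact_pos, contact_normal, contact_force):
+        assert cue.shape == (N_LINKS, 3)
+    return np.concatenate([joint_pos, contact_pos.ravel(),
+                           contact_normal.ravel(), contact_force.ravel()])
+```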
+
+## C. EXPERIMENTAL SETUP
+
+We simulate the three-fingered ReFlex TakkTile hand (RightHand Robotics, Somerville, MA USA) using a custom Gazebo [14] simulation environment and the DART [15] physics engine. We model the under-actuated distal flexure [16] as a rigid link with two revolute joints (one between the proximal and one between the distal finger link). Further, we approximate the finger geometries as cuboids to reduce computational load. Our source code is available at github.com/axkoenig/grasp_refinement.
+
+
+Fig. 5: Test results for reward frameworks.
+
+Fig. 4 shows an overview of one training episode. In stage (A), we initialize the world by randomly generating a new object and wrist error tuple $(O, E)$ (or selecting one from the test dataset). We assume a computer vision system and a grasp planner that produces a sideways-facing grasp at a fixed $5\mathrm{\;{cm}}$ offset from the object's center of mass. We add the wrist pose error $E$ to this grasp pose to simulate calibration errors and close the fingers of the robotic hand in the erroneous wrist pose until the fingers make contact with the object. The grasp refinement episode (B) then starts. We divide each episode into three stages, as displayed in Fig. 4. First, the policy ${\pi }_{\mathbf{\theta }}$ refines the grasp. Afterward, the agent lifts the object by ${15}\mathrm{\;{cm}}$ via hard-coded increments to the wrist's $z$-position and holds the object in place to test the grasp's stability. The policy ${\pi }_{\mathbf{\theta }}$ can update the wrist and finger positions while lifting and holding. The control frequency of the policy in all stages is $3\mathrm{{Hz}}$, while the update frequency of the low-level proportional-derivative (PD) controllers in the wrist and fingers is ${100}\mathrm{\;{Hz}}$.
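+The 3 Hz policy rate over 100 Hz PD control implies roughly 33 low-level updates per policy action; a toy nested control loop (all helper names hypothetical) looks like:
+
+```python
+POLICY_HZ, PD_HZ = 3, 100
+PD_STEPS_PER_ACTION = PD_HZ // POLICY_HZ  # ~33 PD updates per policy step
+
+def run_stage(n_actions, policy, pd_step, state):
+    """Toy control loop: the policy emits targets at 3 Hz while low-level
+    PD controllers (here a stand-in `pd_step`) track them at 100 Hz."""
+    for _ in range(n_actions):
+        targets = policy(state)
+        for _ in range(PD_STEPS_PER_ACTION):
+            state = pd_step(state, targets)
+    return state
+```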
+
+As shown in the table of Fig. 4, we use the analytic grasp stability metrics from Section II as reward functions. We compare the following reward configurations: (1) both $\epsilon$ and $\delta$, (2) only $\epsilon$, (3) only $\delta$, and (4) the baseline $\beta$. Fig. 4 shows that $\delta$ refers to ${\delta }_{\text{ task }}$ in the refine stage to measure expected grasp stability before lifting and to ${\delta }_{\text{ cur }}$ in the lift and hold stages to measure current stability. Further, $\epsilon$ is a weighted combination of ${\epsilon }_{f}$ and ${\epsilon }_{\tau }$. While frameworks (1)-(3) provide stability feedback after every algorithm step, the baseline $\beta$ gives a sparse reward after the holding stage, indicating whether the object is still in the hand (1) or not (0).
+
+## D. RESULTS AND DISCUSSION
+
+For all experiments in this paper, we average over 40 models trained with different seeds for each framework. The error bars in all plots represent $\pm 2$ standard errors. Fig. 5 summarizes the performance on the test dataset. Our main observation is that combining the geometric grasp stability metric $\epsilon$ with the force-based metric $\delta$ yields the highest average success rate of ${83.6}\%$ across all objects (95.4% for cuboids, 93.1% for cylinders, and 62.3% for spheres) over all wrist errors. The $\epsilon$ and $\delta$ framework outperforms the binary reward framework $\beta$ by ${42.9}\%$. The p-values for our results ${\mu }_{\epsilon \text{ and } \delta} > {\mu }_{\delta}$, ${\mu }_{\epsilon \text{ and } \delta} > {\mu }_{\epsilon}$, and ${\mu }_{\epsilon \text{ and } \delta} > {\mu }_{\beta}$ (where ${\mu }_{x}$ is the mean performance of framework $x$) are all $\ll {0.001}$; the results are hence statistically significant. We also notice that the combination of $\epsilon$ and $\delta$ is particularly helpful for spheres. The average performance of all frameworks on spheres is greatly reduced, and the algorithms trained with $\beta$ especially struggle to grasp spheres.
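+The $\pm 2$ standard error bars over seeds can be reproduced from per-seed success rates; a minimal sketch (the sample numbers below are invented):
+
+```python
+import numpy as np
+
+def mean_and_2se(per_seed_success):
+    """Mean success rate over seeds and a +/- 2 standard error bar,
+    matching the error bars used in the plots."""
+    x = np.asarray(per_seed_success, dtype=float)
+    se = x.std(ddof=1) / np.sqrt(len(x))  # standard error of the mean
+    return x.mean(), 2.0 * se
+```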
+
+This study investigates the tactile sensing needs in the reward of RL grasping controllers by incorporating highly accurate contact information via analytic grasp stability metrics. The results demonstrate that information about contact positions and normals encoded in $\epsilon$ combines well with the force-based information in the $\delta$ reward. This result motivates building physical robotic hands capable of sensing these types of information. The low success rates for the spheres may be because they can roll and are therefore harder to grasp (cuboids and cylinders move comparatively less when touched by fingers or the palm). The $\beta$ framework performs worst after the defined number of training steps, which is unsurprising because shaped rewards are known to be more sample efficient than sparse rewards [17].
+
+## IV. TACTILE SENSING AND THE STATE VECTOR
+
+## A. EXPERIMENTAL SETUP
+
+In a second experiment, we investigate the effect of contact sensing resolution in the state vector on grasp refinement. We compare four contact sensing frameworks. The full contact sensing framework receives the same state vector $\mathbf{s} \in {\mathbb{R}}^{70}$ as in Section III-B. In the normal framework, we only provide the algorithm with the contact normal forces and omit the tangential forces $\left( {\mathbf{s} \in {\mathbb{R}}^{56}}\right)$. In the binary framework, we only give a binary signal indicating whether a link is in contact (1) or not (0) $\left( {\mathbf{s} \in {\mathbb{R}}^{56}}\right)$. Finally, we provide only the joint positions in the none framework $\left( {\mathbf{s} \in {\mathbb{R}}^{7}}\right)$. We adjust the size of the neural network's input layer from Section III-B to match the state vector size of each framework. The reward function in these experiments is $\epsilon$ and $\delta$ from Fig. 4. Hence, all contact sensing frameworks receive contact information indirectly via the reward.
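+The resulting state sizes can be checked with a quick sketch; the exact per-link cue layout of the reduced vectors (e.g., a single normal-force magnitude per link in the normal framework) is our assumption:
+
+```python
+def state_dim(framework):
+    """State dimensionality per contact-sensing framework (the per-link
+    cue layout of the reduced vectors is an assumption)."""
+    n_joints, n_links = 7, 7
+    per_link = {
+        "full": 3 * 3,        # position + normal + force, 3 components each
+        "normal": 3 + 3 + 1,  # drop tangential components, keep normal force
+        "binary": 3 + 3 + 1,  # position + normal + one contact bit
+        "none": 0,            # joint positions only
+    }[framework]
+    return n_joints + n_links * per_link
+```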
+
+## B. RESULTS AND DISCUSSION
+
+In Fig. 6, we observe that the frameworks that receive contact feedback (full, normal, binary) outperform the none framework by ${6.3}\%$, ${6.6}\%$, and ${3.7}\%$, respectively. Providing normal force information yields a performance increase of ${2.9}\%$ compared to the binary framework. However, training with the full contact force vectors only increases performance by ${2.6}\%$ compared to the binary framework. As expected, performance decreases for larger wrist errors. The results ${\mu }_{\text{ normal }} > {\mu }_{\text{ binary }}$ and ${\mu }_{\text{ normal }} > {\mu }_{\text{ none }}$ are statistically significant (p-values $\ll {0.001}$), while the result ${\mu }_{\text{ normal }} > {\mu }_{\text{ full }}$ is not (p-value 0.2232).
+
+This experiment studies how contact sensing resolution in the policy's state vector relates to grasp success when training with fully contact-informed rewards. In doing so, we assess the viability of our hypothesized training and deployment workflow in Fig. 1. The improvements of the normal force framework over the binary and none frameworks are small. The results suggest that an affordable binary contact sensor suite, or even no contact sensing at all, may be suitable if a small decrease in performance is tolerable. This supports our hypothesis that RL grasping algorithms are deployable to hands with reduced contact sensor resolution at little performance decrease when rich tactile feedback is incorporated at train time. The algorithms trained with the full force vector perform approximately on par with those that receive only the normal force information. This could be due to three reasons. (1) The full force framework has the most network parameters and requires even longer training times. (2) The model fails to represent the concept of the friction cone internally; an alternative representation of the tangential forces (e.g., providing a margin to the friction cone instead of a tangential force vector) could be a solution. (3) Simulated contact forces are prone to instability [18], especially when simulating robotic grasping [19].
+
+
+Fig. 6: Test results for contact sensing frameworks.
+
+## V. CONCLUSION
+
+This paper investigated the importance of tactile signals in the reward and the policy's state vector to identify the tactile sensing needs in RL-based grasping algorithms. We found that rewards incorporating contact positions, normals, and forces are the most powerful optimization objectives for RL grasp refinement controllers. While this tactile information is essential in the reward function, we uncovered that reducing contact sensor resolution in the policy's state vector decreases algorithm performance only by a small amount. This result has implications for the design of physical grippers and their training and deployment workflows.
+
+In future work, we first aim to build physical robotic hands with sensing capabilities advanced enough to calculate grasp metrics. Second, we want to test the proposed training and deployment workflow by providing only limited contact information in the state vector and deploying the algorithm on other robotic hands.
\ No newline at end of file
diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/R-W8K2RyVp7/Initial_manuscript_md/Initial_manuscript.md b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/R-W8K2RyVp7/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..c2ee981e57dc3509295fbe2b949ab6eb24c42d84
--- /dev/null
+++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/R-W8K2RyVp7/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,513 @@
+# RRL: Resnet as representation for Reinforcement Learning
+
+Rutav Shah ${}^{1}$ and Vikash Kumar ${}^{2,3}$
+
+Abstract-Generalist robots capable of performing dexterous, contact-rich manipulation tasks will enhance productivity and provide care in un-instrumented settings like homes. Such tasks warrant operating in the real world using only the robot's proprioceptive sensors, such as onboard cameras and joint encoders, which is challenging for policy learning owing to high dimensionality and partial observability. We propose RRL: Resnet as representation for Reinforcement Learning - a straightforward yet effective approach that can learn complex behaviors directly from proprioceptive inputs. RRL fuses features extracted from a pre-trained Resnet into the standard reinforcement learning pipeline and delivers results comparable to learning directly from the state. In a simulated dexterous manipulation benchmark, where state-of-the-art methods fail to make significant progress, RRL delivers contact-rich behaviors. The appeal of RRL lies in its simplicity in bringing together progress from the fields of Representation Learning, Imitation Learning, and Reinforcement Learning. Its effectiveness in learning behaviors directly from visual inputs, with performance and sample efficiency matching learning directly from the state even in complex high-dimensional domains, is far from obvious.
+
+## I. INTRODUCTION
+
+Recently, reinforcement learning (RL) has seen tremendous momentum and progress [9, 19, 37, 21] in learning complex behaviors from states [18, 24, 17]. Most success stories, however, are limited to simulations or instrumented laboratory conditions, as the real world doesn't provide direct access to its internal state. Beyond state-based learning, methods using visual observation spaces have also found reasonable success [26, 42]. However, the majority of these methods have been tested on low-dimensional, 2D tasks [31] that lack depth information. Contact-rich manipulation tasks, on the other hand, are high dimensional and necessitate intricate details in order to be completed successfully. To deliver on the promise presented by data-driven techniques, we need efficient methods that can learn complex behaviors unobtrusively, without the need for environment instrumentation.
+
+Learning without environment instrumentation, especially in unstructured settings like homes, can be quite challenging [59, 34, 46]. Challenges include: (a) decision making with incomplete information owing to partial observability, as agents must rely only on proprioceptive on-board sensors (vision, touch, joint position encoders, etc.) to perceive and act; (b) the influx of sensory information, which makes the input space quite high dimensional; (c) information contamination due to sensory noise and task-irrelevant conditions like lighting, shadows, etc.; and (d) most importantly, the scene being flushed with information irrelevant to the task (background, clutter, etc.). An agent learning under these constraints is forced to take a large number of samples simply to untangle these task-irrelevant details before it makes any progress on the true task objective. A common approach to handle these high dimensionality and multi-modality issues is to learn representations that distil information into low-dimensional features and use them as inputs to the policy. While such ideas have found reasonable success [43, 40], designing such representations in a supervised manner requires a deep understanding of the problem and domain expertise. An alternative approach is to leverage unsupervised representation learning to autonomously acquire representations based on either a reconstruction [13, 59, 56] or contrastive [51, 52] objective. These methods are quite brittle, as the representations are acquired from narrow task-specific distributions [61] and hence do not generalize well across different tasks (Table II). Additionally, they acquire task-specific representations, often needing additional samples from the environment (leading to poor sample efficiency) or domain-specific data augmentations for training representations.
+
+
+
+Fig. 1. RRL: Resnet as representation for Reinforcement Learning takes a small step in bridging the gap between Representation learning and Reinforcement learning. RRL pre-trains an encoder on a wide variety of real-world classes (the ImageNet dataset) using a simple supervised classification objective. Since the encoder is exposed to a much wider distribution of images while pretraining, it remains effective under whatever distribution the policy might induce during the training of the agent. This allows us to freeze the encoder after pretraining without any additional effort.
+
+The key idea behind our method stems from an intuitive observation about the desiderata of a good representation, i.e.: (a) it should be low dimensional for a compact representation; (b) it should capture salient features encapsulating the diversity and variability present in a real-world task, for better generalization performance; (c) it should be robust to irrelevant information like noise, lighting, and viewpoints, so that it is resilient to changes in the surroundings; and (d) it should provide an effective representation over the entire distribution that a policy can induce, for effective learning. These requirements are quite harsh, needing extreme domain expertise to design manually and an abundance of samples to acquire automatically. Can we acquire this representation without any additional effort? Our work takes a very small step in this direction.
+
+---
+
+${}^{1}$ Department of Computer Science and Engineering, Indian Institute of Technology, Kharagpur, India rutavms@gmail.com
+
+${}^{2}$ Department of Computer Science, University of Washington, Seattle, USA vikash@cs.washington.edu
+
+${}^{3}$ Facebook AI Research, USA
+
+---
+
+The key insight behind our method (Figure 1) is embarrassingly simple - representations do not necessarily have to be trained on the exact task distribution; a representation trained on a sufficiently wide distribution of real-world scenarios, will remain effective on any distribution a policy optimizing a task in the real world might induce. While training over such wide distribution is demanding, this is precisely what the success of large image classification models [8, 10, 54, 12] in Computer Vision delivers - representations learned over a large family of real-world scenarios.
+
+Our Contributions: We list the major contributions
+
+1) We present a surprisingly simple method (RRL) at the intersection of representation learning, imitation learning (IL), and reinforcement learning (RL) that uses features from pre-trained image classification models (Resnet34) as representations in a standard RL pipeline. Our method is quite general and can be incorporated with minimal changes into most state-based RL/IL algorithms.
+
+2) Task-specific representations learned by supervised as well as unsupervised methods are usually brittle and suffer from distribution mismatch. We demonstrate that features learned by image classification models are general across different tasks (Figure 2), robust to visual distractors, and, when used in conjunction with standard IL and RL pipelines, can efficiently acquire policies directly from proprioceptive inputs.
+
+3) While competing methods have restricted their results primarily to planar tasks devoid of depth perspectives, we demonstrate on a rich collection of simulated high-dimensional dexterous manipulation tasks, where state-of-the-art methods struggle, that RRL can learn rich behaviors directly from visual inputs with performance and sample efficiency approaching state-based methods.
+
+4) Additionally, we underline the performance gap between the SOTA approaches and RRL on simple low-dimensional tasks as well as on high-dimensional, more realistic tasks. Furthermore, we experimentally establish that the environments commonly used for studying image-based continuous control methods are not truly representative of real-world scenarios.
+
+## II. Related Work
+
+RRL rests on recent developments from the fields of Representation Learning, Imitation Learning and Reinforcement Learning. In this section, we outline related works leveraging representation learning for visual reinforcement and imitation learning.
+
+## A. Learning without explicit representation
+
+A common approach is to learn behaviors in an end-to-end fashion - from pixels to actions - without an explicit distinction between feature representations and policy representations. Success stories in this category range from the seminal work [5] mastering Atari 2600 computer games using only raw pixels as input, to [14], which learns trajectory-centric local policies using Guided Policy Search [4] for diverse continuous control manipulation tasks in the real world, learned directly from camera inputs. More recently, [35] has demonstrated success in acquiring multi-finger dexterous manipulation [33] and agile locomotion behaviors using off-policy actor-critic methods [24]. While learning directly from pixels has found reasonable success, it requires training large networks with high input dimensionality. Agents require a prohibitively large number of samples to untangle task-relevant information in order to acquire behaviors, limiting their application to simulations or constrained lab settings. RRL maintains an explicit representation network to extract low-dimensional features. Decoupling representation learning from policy learning delivers results with large gains in efficiency. Next, we outline related works that use explicit representations.
+
+
+
+Fig. 2. Visualization of layer 4 of the Resnet model for the top-1 class using Grad-CAM [45] [top] and Guided Backpropagation [11] [bottom]. This indicates that Resnet is indeed looking for the right features in our task images (right) despite the large distributional shift.
+
+## B. Learning with supervised representations
+
+Another approach is to first acquire representations using expert supervision and then use features extracted from the representation as inputs in standard policy learning pipelines. A predominant idea is to learn representative keypoints encapsulating task details from the input images and to use the extracted keypoints as a replacement for the state information [38]. Using these techniques, [43, 39] demonstrated tool manipulation behaviors in rich scenes flushed with task-irrelevant details. [41] demonstrated simultaneous manipulation of multiple objects in the Baoding balls task on a high-dimensional dexterous manipulation hand. Along with the inbuilt proprioceptive sensing at each joint, they use an RGB stereo image pair that is fed into a separate pre-trained tracker to produce 3D position estimates [57] for the two Baoding balls. These methods, while powerful, learn task-specific features and require expert supervision, making it harder to (a) translate to variations in tasks/environments, and (b) scale with increasing task diversity. RRL, on the other hand, uses a single task-agnostic representation with better generalization capability, making it easy to scale.
+
+## C. Learning with unsupervised representations
+
+With the ambition of being scalable, this group of methods intends to acquire representations via unsupervised techniques. [30] uses contrastive learning to time-align visual features across different embodiments to demonstrate behavior transfer from a human to a Fetch robot. [20], [62, 59] use variational inference [7, 20] to learn compressed latent representations and use them as input to a standard RL pipeline to demonstrate rich manipulation behaviors. [47] additionally learns dynamics models directly in the latent space and uses model-based RL to acquire behaviors on simulated tasks. On similar tasks, [36] uses multi-step variational inference to learn world dynamics as well as reward models for off-policy RL. [51] use image augmentation with variational inference to construct features to be used in a standard RL pipeline and demonstrate performance on par with learning directly from the state. [49, 48] demonstrate comparable results by assimilating updates over features acquired only via image augmentation. Similar to supervised methods, unsupervised methods often learn brittle task-specific representations that break when subjected to small variations in the surroundings, and they often suffer from non-stationarity arising from the mismatch between the distribution the representations are learned on and the distribution the policy induces. To induce stability, RRL uses pre-trained stationary representations trained on a distribution with wider support than what the policy can induce. Additionally, representations learned over a wide distribution of real-world samples are robust to noise and irrelevant information like lighting, illumination, etc.
+
+## D. Learning with representations and demonstrations
+
+Learning from demonstrations has a rich history. We focus our discussion on DAPG [17], a state-based method that follows the natural gradient [2] of a joint loss with imitation as well as reinforcement objectives. DAPG has been demonstrated to outperform competing methods [15, 16] on the high-dimensional ADROIT dexterous manipulation task suite we test on. RRL extends DAPG to solve the task suite directly from proprioceptive signals, with performance and sample efficiency comparable to state-based DAPG. Unlike DAPG, which is on-policy, FERM [58] is a closely related off-policy actor-critic method combining learning from demonstrations with RL. FERM builds on RAD [49] and inherits its challenges, such as learning task-specific representations. We demonstrate experimentally that RRL is more stable, more robust to various distractors, and convincingly outperforms FERM, since RRL uses a fixed feature extractor pre-trained over a wide variety of real-world images and avoids learning task-specific representations.
+
+## III. BACKGROUND
+
+RRL solves a standard Markov decision process (Section III-A) by combining three fundamental building blocks: (a) a policy gradient algorithm (Section III-B), (b) demonstration bootstrapping (Section III-C), and (c) representation learning (Section III-D). We briefly outline these fundamentals before detailing our method in Section IV.
+
+## A. Preliminaries: MDP
+
+We model the control problem as a Markov decision process (MDP), defined by the tuple $\mathcal{M} = \left( \mathcal{S}, \mathcal{A}, \mathcal{R}, \mathcal{T}, \rho_0, \gamma \right)$. $\mathcal{S} \subseteq \mathbb{R}^n$ and $\mathcal{A} \subseteq \mathbb{R}^m$ are the state and action spaces. $\mathcal{R} : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ is the reward function; in the ideal case, this function is simply an indicator of task completion (the sparse-reward setting). $\mathcal{T} : \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{S}$ is the transition dynamics, which can be stochastic. In model-free RL, we do not assume any knowledge of the transition function and require only sampling access to it. $\rho_0$ is the probability distribution over initial states, and $\gamma \in [0, 1)$ is the discount factor. We wish to solve for a stochastic policy of the form $\pi : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ that maximizes the expected sum of discounted rewards:
+
+$$
+\eta \left( \pi \right) = {\mathbb{E}}_{\pi ,\mathcal{M}}\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}{r}_{t}}\right\rbrack \tag{1}
+$$
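As a concrete illustration, the return in Eq. 1 can be estimated from sampled trajectories. The sketch below (a hypothetical helper, not part of the paper's code) accumulates discounted rewards for a single rollout:

```python
def discounted_return(rewards, gamma=0.99):
    """Monte-Carlo estimate of the discounted sum inside Eq. 1 for one trajectory."""
    g = 0.0
    for r in reversed(rewards):  # backward accumulation: g_t = r_t + gamma * g_{t+1}
        g = r + gamma * g
    return g
```

Averaging this quantity over trajectories sampled from $\pi$ and $\mathcal{M}$ yields a sample estimate of $\eta(\pi)$.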
+
+## B. Policy Gradient
+
+The goal of the RL agent is to maximize the expected discounted return $\eta \left( \pi \right)$ (Equation 1) under the distribution induced by the current policy $\pi$ . Policy gradient algorithms optimize the policy ${\pi }_{\theta }\left( {a \mid s}\right)$, with parameters $\theta$, directly by estimating $\nabla \eta \left( \pi \right)$ . We first introduce some standard notation: the value function ${V}^{\pi }\left( s\right)$, the Q function ${Q}^{\pi }\left( {s, a}\right)$, and the advantage function ${A}^{\pi }\left( {s, a}\right)$ . The advantage function can be viewed as a lower-variance version of the Q-value, obtained by subtracting the state value as a baseline.
+
+$$
+{V}^{\pi }\left( s\right) = {\mathbb{E}}_{\pi ,\mathcal{M}}\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}{r}_{t} \mid {s}_{0} = s}\right\rbrack
+$$
+
+$$
+{Q}^{\pi }\left( {s, a}\right) = {\mathbb{E}}_{\mathcal{M}}\left\lbrack {\mathcal{R}\left( {s, a}\right) }\right\rbrack + {\mathbb{E}}_{{s}^{\prime } \sim \mathcal{T}\left( {s, a}\right) }\left\lbrack {{V}^{\pi }\left( {s}^{\prime }\right) }\right\rbrack
+$$
+
+$$
+{A}^{\pi }\left( {s, a}\right) = {Q}^{\pi }\left( {s, a}\right) - {V}^{\pi }\left( s\right) \tag{2}
+$$
+
+The gradient can be estimated via the likelihood-ratio approach and the Markov property of the problem [1], using a sampling-based strategy:
+
+$$
+\nabla \eta \left( \pi \right) = g = \frac{1}{NT}\mathop{\sum }\limits_{{i = 0}}^{N}\mathop{\sum }\limits_{{t = 0}}^{T}{\nabla }_{\theta }\log {\pi }_{\theta }\left( {{a}_{t}^{i} \mid {s}_{t}^{i}}\right) {\widehat{A}}^{\pi }\left( {{s}_{t}^{i},{a}_{t}^{i}, t}\right) \tag{3}
+$$
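A minimal sketch of this sampling-based estimator (Eq. 3), assuming the per-step score gradients $\nabla_\theta \log \pi_\theta(a_t^i \mid s_t^i)$ and advantage estimates have already been computed:

```python
import numpy as np

def policy_gradient_estimate(score_grads, advantages):
    """Sample-based policy gradient of Eq. 3.

    score_grads: (N, T, d) array of grad_theta log pi(a_t^i | s_t^i)
    advantages:  (N, T) array of advantage estimates
    """
    N, T, _ = score_grads.shape
    # Weight each score gradient by its advantage, then average over all N*T samples.
    return (score_grads * advantages[..., None]).sum(axis=(0, 1)) / (N * T)
```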
+
+Among the wide collection of policy gradient algorithms, we build upon the Natural Policy Gradient (NPG) [2] to solve our MDP formulation, owing to its stability and effectiveness on complex problems. We refer the reader to [32] for a detailed background on different policy gradient approaches. In the next section, we describe how human demonstrations can be used alongside NPG to aid policy optimization.
+
+## C. Demo Augmented Policy Gradient
+
+Policy gradients with appropriately shaped rewards can solve arbitrarily complex tasks. However, real-world environments seldom provide shaped rewards; they must be manually specified by domain experts. Learning with sparse signals, such as task-completion indicator functions, relaxes the need for domain expertise in reward shaping, but it results in extremely high sample complexity due to exploration challenges. DAPG [17] mitigates this issue by combining policy gradients with a few demonstrations in two ways. We represent the demonstration dataset as ${\rho }_{D} = \left\{ \left( {{s}_{t}^{\left( i\right) },{a}_{t}^{\left( i\right) },{s}_{t + 1}^{\left( i\right) },{r}_{t}^{\left( i\right) }}\right) \right\}$, where $t$ indexes time and $i$ indexes trajectories.
+
+(1) Warm up the policy with a few demonstrations (25 in our setting) using a simple mean squared error (MSE) loss, i.e., initialize the policy via behavior cloning [Eq. 4]. This provides an informed policy initialization that mitigates the early exploration problem: the policy attends to task-relevant state-action pairs from the start, reducing sample complexity.
+
+$$
+{L}_{BC}\left( \theta \right) = \frac{1}{2}\mathop{\sum }\limits_{{i, t \in \text{ minibatch }}}{\left( {\pi }_{\theta }\left( {s}_{t}^{\left( i\right) }\right) - {a}_{t}^{\left( i\right) H}\right) }^{2} \tag{4}
+$$
+
+where $\theta$ are the agent parameters and ${a}_{t}^{\left( i\right) H}$ represents the action taken by the human expert.
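The warm-start step can be sketched as plain gradient descent on the MSE loss of Eq. 4; here a linear policy and demo arrays stand in for the real network and dataset (illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def behavior_clone(states, actions, lr=0.1, steps=200):
    """Fit a linear policy pi_theta(s) = s @ W to expert actions by minimizing Eq. 4."""
    n, m = states.shape[1], actions.shape[1]
    W = np.zeros((n, m))
    for _ in range(steps):
        err = states @ W - actions               # prediction error vs. expert actions a^H
        W -= lr * states.T @ err / len(states)   # gradient step on the (1/2) MSE loss
    return W
```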
+
+(2) DAPG builds upon the on-policy NPG algorithm [2], which uses a normalized gradient ascent procedure where the normalization is under the Fisher metric:
+
+$$
+{\theta }_{k + 1} = {\theta }_{k} + \sqrt{\frac{\delta }{{g}^{T}{\widehat{F}}_{{\theta }_{k}}^{-1}g}}{\widehat{F}}_{{\theta }_{k}}^{-1}g \tag{5}
+$$
+
+where ${\widehat{F}}_{{\theta }_{k}}$ is the Fisher information matrix at the current iterate ${\theta }_{k}$ ,
+
+$$
+{\widehat{F}}_{{\theta }_{k}} = \frac{1}{T}\mathop{\sum }\limits_{{t = 0}}^{T}{\nabla }_{\theta }\log {\pi }_{\theta }\left( {{a}_{t} \mid {s}_{t}}\right) {\nabla }_{\theta }\log {\pi }_{\theta }{\left( {a}_{t} \mid {s}_{t}\right) }^{T} \tag{6}
+$$
+
+and $g$ is the sample-based estimate of the policy gradient [Eq. 3]. To make the best use of available demonstrations, DAPG proposes an augmented gradient ${g}_{\text{aug}}$ combining the task and imitation objectives. The imitation term decays asymptotically over time, allowing the agent to learn behaviors that surpass the expert:
+
+$$
+{g}_{\text{aug}} = \mathop{\sum }\limits_{{\left( {s, a}\right) \in {\rho }_{\pi }}}{\nabla }_{\theta }\ln {\pi }_{\theta }\left( {a \mid s}\right) {A}^{\pi }\left( {s, a}\right) + \mathop{\sum }\limits_{{\left( {s, a}\right) \in {\rho }_{D}}}{\nabla }_{\theta }\ln {\pi }_{\theta }\left( {a \mid s}\right) w\left( {s, a}\right) \tag{7}
+$$
+
+where ${\rho }_{\pi }$ is the dataset obtained by executing the current policy, ${\rho }_{D}$ is the demonstration data, and $w\left( {s, a}\right)$ is a heuristic weighting function defined as:
+
+$$
+w\left( {s, a}\right) = {\lambda }_{0}{\lambda }_{1}^{k}\mathop{\max }\limits_{{\left( {{s}^{\prime },{a}^{\prime }}\right) \in {\rho }_{\pi }}}{A}^{\pi }\left( {{s}^{\prime },{a}^{\prime }}\right) \;\forall \;\left( {s, a}\right) \in {\rho }_{D} \tag{8}
+$$
+
+DAPG has proven successful in learning policies for dexterous manipulation tasks with reasonable sample complexity.
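The normalized ascent step of Eqs. 5-6 can be sketched as follows; the small damping term is our own numerical-stability assumption (not stated in the paper) to keep the sample Fisher matrix invertible:

```python
import numpy as np

def npg_step(theta, g, score_grads, delta=0.05, damping=1e-4):
    """One natural-gradient ascent step (Eqs. 5-6) on parameters theta."""
    T = len(score_grads)
    # Sample Fisher matrix (Eq. 6): average outer product of score gradients, plus damping.
    F = sum(np.outer(s, s) for s in score_grads) / T + damping * np.eye(len(theta))
    nat_g = np.linalg.solve(F, g)                  # F^{-1} g
    step = np.sqrt(delta / max(g @ nat_g, 1e-12))  # normalized step size of Eq. 5
    return theta + step * nat_g
```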
+
+## D. Representation Learning
+
+DAPG has thus far been demonstrated to be effective only with access to low-level state information, which is not readily available in the real world. DAPG is based on NPG, which works well but struggles with high input dimensionality and hence cannot be used directly on images acquired from onboard cameras. Representation learning [6] learns representations of input data, typically by transforming it or extracting features from it, that make the task easier to perform (in our case, the representation can be used in place of the exact state of the environment). Let $I \in {\mathbb{R}}^{n}$ represent the high-dimensional input image; then
+
+$$
+h = {f}_{\rho }\left( I\right) \tag{9}
+$$
+
+where $f$ is the feature extractor, $\rho$ is the distribution over which $f$ is valid, and $h \in {\mathbb{R}}^{d}$ with $d \ll n$ is the compact, low-dimensional representation of $I$ . In the next section, we outline our method, which scales DAPG to learn directly from visual information.
+
+## IV. RRL: RESNET AS REPRESENTATION FOR RL
+
+In an ideal RL setting, the agent interacts with the environment based on the current state, and in return the environment outputs the next state and the reward obtained. This works well in a simulated environment, but in a real-world scenario we do not have access to this low-level state information. Instead we get information from cameras $\left( {I}_{t}\right)$ and other onboard sensors such as joint encoders $\left( {\delta }_{t}\right)$ . To overcome the challenges associated with learning from high-dimensional inputs, we use representations that project the information onto a lower-dimensional manifold. These representations can be (a) learned in tandem with the RL objective; however, this leads to a non-stationarity issue where the distribution induced by the current policy ${\pi }_{i}$ may lie outside the expressive power of $f$, ${\pi }_{i} \not\subset {\rho }_{i}$, at any step $i$ during training. Alternatively, they can be (b) decoupled from RL by pre-training $f$ . For this to work effectively, the feature extractor must be trained on a sufficiently wide distribution that covers any distribution the policy might induce during training, ${\pi }_{i} \subset \rho \; \forall i$ . Obtaining such task-specific training data beforehand becomes increasingly difficult as the complexity and diversity of the tasks increase. To this end, we propose using a fixed feature extractor (Section V-B) pre-trained on a wide variety of real-world scenarios, such as the ImageNet dataset [highlighted in purple in Figure 1]. We experimentally demonstrate that the diversity (Section V-C) of such a feature extractor allows us to use it across all the tasks we consider. The use of pre-trained representations lends stability to RRL: since our representations are frozen, they do not face the non-stationarity issues encountered when learning the policy and representation in tandem.
+
+The features $\left( {h}_{t}\right)$ obtained from this feature extractor are appended with the readings from the internal joint encoders of the ADROIT hand $\left( {\delta }_{t}\right)$ . We empirically show that $\left\lbrack {{h}_{t},{\delta }_{t}}\right\rbrack$ can be used as a substitute for the exact state $\left( {s}_{t}\right)$ as the policy input. In principle any RL algorithm can be deployed to learn the policy; in RRL we build upon Natural Policy Gradient [3], owing to its effectiveness in solving complex high-dimensional tasks [17]. We present our full algorithm in Algorithm 1.
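The policy input described above is just a concatenation of the frozen visual features and the joint-encoder readings. The sketch below uses assumed dimensions (512-dim features, 24 encoder readings, 30-dim actions) purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

h = rng.standard_normal(512)       # frozen ResNet features h_t
delta = rng.standard_normal(24)    # joint-encoder readings delta_t (dimension assumed)
x = np.concatenate([h, delta])     # policy input [h_t, delta_t]

# A toy deterministic policy head standing in for pi_theta.
W = rng.standard_normal((x.size, 30)) * 0.01
a = np.tanh(x @ W)                 # bounded action vector (dimension assumed)
```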
+
+Algorithm 1 RRL
+
+---
+
+Input: 25 human demonstrations ${\rho }_{D}$
+
+Initialize the policy using behavior cloning [Eq. 4].
+
+repeat
+
+  for $i = 1$ to $n$ do
+
+    for $t = 1$ to horizon do
+
+      Take action ${a}_{t} = {\pi }_{\theta }\left( \left\lbrack {\operatorname{Encoder}\left( {I}_{t}\right) ,{\delta }_{t}}\right\rbrack \right)$ and receive ${I}_{t + 1},{\delta }_{t + 1},{r}_{t + 1}$ from the environment.
+
+    end for
+
+  end for
+
+  Compute ${\nabla }_{\theta }\log {\pi }_{\theta }\left( {{a}_{t} \mid {s}_{t}}\right)$ for each $\left( {s, a}\right) \in {\rho }_{\pi },{\rho }_{D}$ .
+
+  Compute ${A}^{\pi }\left( {s, a}\right)$ for each $\left( {s, a}\right) \in {\rho }_{\pi }$ and $w\left( {s, a}\right)$ for each $\left( {s, a}\right) \in {\rho }_{D}$ according to Equations 2 and 8.
+
+  Compute the policy gradient according to Eq. 7.
+
+  Compute the Fisher matrix [Eq. 6].
+
+  Take the gradient ascent step according to Eq. 5.
+
+  Update the value function parameters to approximate Equation 2: ${V}_{k}^{\pi }\left( {s}_{t}^{\left( n\right) }\right) \approx \mathop{\sum }\limits_{{{t}^{\prime } = t}}^{T}{\gamma }^{{t}^{\prime } - t}{r}_{{t}^{\prime }}^{\left( n\right) }$
+
+until satisfactory performance
+
+---
+
+## V. EXPERIMENTAL EVALUATIONS
+
+Our experimental evaluation aims to address the following questions: (1) Do pre-trained representations acquired from a large real-world image dataset allow RRL to learn complex tasks directly from proprioceptive signals (camera inputs and joint encoders)? (2) How do RRL's performance and efficiency compare against other state-of-the-art methods? (3) How do various representational choices influence the generality and versatility of the resulting behaviors? (4) What are the effects of various design decisions on RRL? (5) Are commonly used benchmarks for studying image-based continuous control methods effective?
+
+## A. Tasks
+
+The applicability of prior proprioception-based RL methods $\left\lbrack {{49},{48},{47}}\right\rbrack$ has been limited to simple low-dimensional tasks such as Cartpole, Cheetah, Reacher, Finger Spin, Walker, and Ball-in-cup. Moving beyond these simple domains, we investigate RRL on the ADROIT manipulation suite [17], which consists of contact-rich, high-dimensional dexterous manipulation tasks (Figure 3) that have been found challenging even for state-based $\left( {s}_{t}\right)$ methods. Furthermore, unlike prior task sets, which are fundamentally planar and devoid of depth perspective, the ADROIT manipulation suite consists of visually rich, physically realistic tasks that demand representations which untangle complex depth information.
+
+## B. Implementation Details
+
+We use a standard ResNet-34 model as RRL's feature extractor. The model is pre-trained on the ImageNet classification task, which spans 1000 classes and 1.28 million training images. The last layer of the model is removed to expose a 512-dimensional feature space, and all parameters are frozen throughout the training of the RL agent. During inference, the ${256} \times {256}$ observations obtained from the environment are center-cropped to ${224} \times {224}$ and fed into the model. We also evaluate our model with different ResNet sizes (Figure 7). All hyperparameters used for training are summarized in the Appendix (Table II). We report average performance over three random seeds for all experiments.
+
+
+
+Fig. 3. ADROIT manipulation suite consisting of complex dexterous manipulation tasks: object relocation, in-hand manipulation (pen repositioning), tool use (hammering a nail), and interacting with human-centric environments (opening a door).
+
+## C. Results
+
+In Figure 4, we contrast the performance of RRL against state-of-the-art baselines. We begin by observing that NPG [3] struggles to solve the suite even with full state information, which establishes the difficulty of our task suite. DAPG(State) [17] uses privileged state information and a few demonstrations to solve the tasks and serves as the best-case oracle. RRL demonstrates good performance on all tasks, relocate being the hardest, and often approaches the performance of our strongest oracle, DAPG(State).
+
+A competing baseline, FERM [58], is quite unstable on these tasks${}^{1}$. It starts strong on the hammer and door tasks but saturates in performance, makes slow progress on pen, and completely fails on relocate. In Figure 5 [Left] we compare the computational footprint of FERM (along with other methods discussed in later sections) with that of RRL. Our method not only outperforms FERM but is also approximately five times more compute-efficient.
+
+---
+
+${}^{1}$ Reporting the best performance among over 30 configurations per task, tried in consultation with the FERM authors.
+
+---
+
+
+
+Fig. 4. Performance on the ADROIT dexterous manipulation suite [17]: the state-of-the-art policy gradient method NPG(State) [29] struggles to solve the suite even with privileged low-level state information, establishing the suite's difficulty. Among demonstration-accelerated methods, RRL (Ours) demonstrates stable performance and approaches that of DAPG(State) [17] (upper bound), a demonstration-accelerated method using privileged state information. A competing baseline, FERM [58], makes good initial but unstable progress on a few tasks and often saturates in performance before exhausting our computational budget (40 hours/task/seed).
+
+
+
+Fig. 5. LEFT: Comparison of the computational cost of RRL with ResNet-34, i.e., RRL (Ours); FERM, the strongest baseline; RRL with ResNet-18; RRL with ResNet-50; RRL (VAE); RRL with ShuffleNet; RRL with MobileNet; and RRL with a Very Deep VAE baseline. CENTER, RIGHT: Influence of various environment distractions (lighting conditions, object color) on RRL (Ours) and FERM. RRL (Ours) consistently performs better than FERM across all variations we considered.
+
+## D. Effects of Visual Distractors
+
+In Figure 5 [Center, Right] we probe the robustness of the final policies by injecting visual distractors into the environment during inference. We note that the resilience of the ResNet features lends robustness to RRL's policies. On the other hand, the task-specific features learned by FERM are brittle, leading to larger degradation in performance. In addition to the improved sample and time complexity resulting from the use of pre-trained features, the resilience and versatility of ResNet features lead to policies that are also robust to visual distractors and clutter in the scene. More details about the experimental setting are provided in Section VII-H in the Appendix.
+
+## E. Effect of Representation
+
+Is ResNet lucky? To investigate whether the architectural choice of ResNet is fortuitous, in Figure 6 we test different models pre-trained on the ImageNet dataset as RRL's feature extractors: MobileNetV2 [44], ShuffleNet [27], and a state-of-the-art hierarchical VAE [60] [refer to Section VII-E in the Appendix for more details]. Little degradation in performance is observed relative to the ResNet model. This highlights that it is not the particular architectural choices, but rather the dataset on which the models are pre-trained, that delivers generic features effective for RL agents.
+
+Task-specific vs. task-agnostic representations: In Figure 7, we compare the performance of (a) learning task-specific representations (VAE) and (b) a generic representation trained on a very wide distribution (ResNet). RRL using ResNet-34 significantly outperforms a variant, RRL(VAE) (see Section VII-G in the Appendix for details), that learns features via commonly used variational inference techniques on a task-specific dataset [22, 23, 25, 28]. This indicates that a pre-trained ResNet provides task-agnostic and superior features compared to methods that explicitly learn brittle (Section V-H), task-specific features using additional samples from the environment. Note that the latent dimensions of the ResNet-34 and the VAE are kept the same (512) for a fair comparison; however, the model sizes differ, as one operates on a very wide distribution while the other operates on a much narrower task-specific dataset. Additionally, we summarize the compute cost of both RRL (Ours) and RRL(VAE) in Figure 5 [Left]. Even though RRL(VAE) is the cheapest, its performance is quite low (Figure 7). RRL (Ours) strikes a balance between compute and efficiency.
+
+
+
+Fig. 6. Effect of different feature extractors pre-trained on the ImageNet dataset, highlighting that not just ResNet but any feature extractor pre-trained on a sufficiently wide distribution of data remains effective.
+
+
+
+Fig. 7. Influence of representation: RRL (Ours), using ResNet-34 features, outperforms the commonly used representation learning method VAE (RRL(VAE)). Among the ResNet variants, ResNet-34 strikes the balance between representational capacity and computational overhead. NPG(ResNet-34) shows the performance with ResNet-34 features but without demonstration bootstrapping, indicating that representational choices alone are not enough to solve the task suite.
+
+## F. Effects of proprioception choices and sensor noise
+
+
+
+Fig. 8. Influence of proprioceptive signals on RRL (Vision+Sensors, Ours): RRL(Noise) demonstrates that RRL remains effective in the presence of noisy (2%) proprioception. RRL(Vision) demonstrates that RRL remains performant with only visual inputs as well.
+
+While it is hard to envision a robot without proprioceptive joint sensing, the harsh conditions of the real world can lead to noisy sensing or even sensor failures. In Figure 8, we subjected RRL to (a) joint-encoder signals corrupted with $2\%$ noise, RRL(Noise), and (b) only visual inputs as proprioceptive signals, RRL(Vision). In both cases, our method remained performant, with slight to no degradation in performance.
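One simple reading of the 2% corruption (our assumption; the paper does not specify the noise model) is zero-mean Gaussian noise whose standard deviation is 2% of each reading's magnitude:

```python
import numpy as np

def noisy_encoders(delta, noise_frac=0.02, rng=None):
    """Corrupt joint-encoder readings with Gaussian noise scaled to noise_frac of each reading."""
    if rng is None:
        rng = np.random.default_rng()
    return delta + rng.normal(0.0, noise_frac * np.abs(delta))
```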
+
+## G. Ablations and Analysis of Design Decisions
+
+In our next set of experiments, we evaluate the effect of various design decisions on our method. In Figure 7, we study the effect of different ResNet features as our representation. ResNet-34, though computationally more demanding than ResNet-18 (Figure 5), delivers better performance owing to its improved representational capacity and feature expressivity. A further boost in capacity (ResNet-50) degrades performance, likely due to the incorporation of less useful features and the increase in samples required to train the resulting larger policy network.
+
+
+
+Fig. 9. LEFT: Influence of reward signals: RRL (Ours), using sparse rewards, remains on par with a variation ${\mathrm{{RRL}}}_{\text{dense}}$ using well-shaped dense rewards. RIGHT: Effect of policy size on the performance of RRL. Performance is quite stable across a wide range of policy sizes.
+
+Reward design, especially for complex high-dimensional tasks, requires domain expertise. RRL replaces the need for well-shaped rewards by using a few demonstrations (to curb the exploration challenges in high-dimensional spaces) and sparse rewards (indicating task completion). This significantly lowers the domain expertise our method requires. In Figure 9 [Left], we observe that RRL (using sparse rewards) delivers performance competitive with a variant of our method that uses well-shaped dense rewards, while also being resilient to variation in policy network capacity (Figure 9 [Right]).
+
+## H. Rethinking benchmarking for visual RL
+
+DMControl [31] is a widely used benchmark for proprioception-based RL methods: RAD [49], SAC+AE [56], CURL [51], DrQ [48]. While these methods perform well (Table I) on the simple DMControl tasks, their progress struggles to scale when met with tasks representative of real-world complexity, such as the realistic ADROIT manipulation benchmark (Figure 4).
+
+For example, we demonstrate in Figure 4 that a representative SOTA method, FERM (which uses expert demonstrations along with RAD), struggles to perform well on the ADROIT manipulation benchmark. On the contrary, RRL, using ResNet features pre-trained on a real-world image dataset, delivers results comparable to state-based methods on the ADROIT manipulation benchmark while struggling on DMControl (RRL+SAC: RRL using SAC and ResNet-34 features [1]). This highlights the large domain gap between the DMControl suite and the real world.
+
+We further note that the pre-trained features learned by SOTA methods are not widely applicable. We use a pre-trained RAD encoder (pre-trained on Cartpole) as a fixed feature extractor (Fixed RAD Encoder in Table I) and retrain the policy using these features for all environments. Performance degrades on all tasks except Cartpole. This highlights that the representations learned by RAD (even with various image augmentations) are task-specific and fail to generalize to other tasks with similar visuals. Furthermore, learning such task-specific representations is easier in simpler scenes, but their complexity grows drastically as the complexity of tasks and scenes increases. To ensure that important problems are not overlooked, we emphasize the need for the community to move towards benchmarks representative of realistic real-world tasks.
+
+| 500K Step Scores | RRL+SAC | RAD | Fixed RAD Encoder | CURL | SAC+AE | State SAC |
+| --- | --- | --- | --- | --- | --- | --- |
+| Finger, Spin | 422 ± 102 | 947 ± 101 | 789 ± 190 | 926 ± 45 | 884 ± 128 | 923 ± 211 |
+| Cartpole, Swing | 357 ± 85 | 863 ± 9 | 875 ± 1 | 845 ± 45 | 735 ± 63 | 848 ± 15 |
+| Reacher, Easy | 382 ± 299 | 955 ± 71 | 53 ± 44 | 929 ± 44 | 627 ± 58 | 923 ± 24 |
+| Cheetah, Run | 154 ± 23 | 728 ± 71 | 203 ± 31 | 518 ± 28 | 550 ± 34 | 795 ± 30 |
+| Walker, Walk | 148 ± 12 | 918 ± 16 | 182 ± 40 | 902 ± 43 | 847 ± 48 | 948 ± 54 |
+| Cup, Catch | 447 ± 132 | 974 ± 12 | 719 ± 70 | 959 ± 27 | 794 ± 58 | 974 ± 33 |
+| 100K Step Scores | | | | | | |
+| Finger, Spin | 135 ± 67 | 856 ± 73 | 655 ± 104 | 767 ± 56 | 740 ± 64 | 811 ± 46 |
+| Cartpole, Swing | 192 ± 19 | 828 ± 27 | 840 ± 34 | 582 ± 146 | 311 ± 11 | 835 ± 22 |
+| Reacher, Easy | 322 ± 285 | 826 ± 219 | 162 ± 40 | 538 ± 233 | 274 ± 14 | 746 ± 25 |
+| Cheetah, Run | 72 ± 63 | 447 ± 88 | 188 ± 20 | 299 ± 48 | 267 ± 24 | 616 ± 18 |
+| Walker, Walk | 63 ± 7 | 504 ± 191 | 106 ± 11 | 403 ± 24 | 394 ± 22 | 891 ± 82 |
+| Cup, Catch | 261 ± 57 | 840 ± 179 | 533 ± 148 | 769 ± 43 | 391 ± 82 | 746 ± 91 |
+
+TABLE I
+
+Results on the DMControl benchmark. RAD outperforms all baselines, whereas RRL performs worse at the 100K and 500K environment-step marks, suggesting that it is quicker to learn task-specific representations on simple tasks. The Fixed RAD Encoder highlights that the representations learned by RAD are narrow and task-specific.
+
+## VI. STRENGTHS, LIMITATIONS & OPPORTUNITIES
+
+This paper presents an intuitive idea bringing together advancements from the fields of representation learning, imitation learning, and reinforcement learning. We present a very simple method, RRL, that leverages ResNet features as representations to learn complex behaviors directly from proprioceptive signals. The resulting algorithm approaches the performance of state-based methods on the complex ADROIT dexterous manipulation suite.
+
+Strengths: The strength of our insight lies in its simplicity and its applicability to almost any reinforcement or imitation learning algorithm that intends to learn directly from high-dimensional proprioceptive signals. We present RRL, an instantiation of this insight on top of the imitation + (on-policy) reinforcement learning method DAPG, to showcase its strength. It offers yet another demonstration that the features learned by ResNet are quite general and broadly applicable. ResNet features, trained over a large corpus of real-world images, are more robust and resilient than features learned by methods that learn representations and policies in tandem using only samples from the task distribution. The use of such general but frozen representations in conjunction with RL pipelines additionally avoids the non-stationarity issues faced by competing methods that simultaneously optimize reinforcement and representation objectives, leading to more stable algorithms. Additionally, not having to train one's own feature extractor yields significant sample and compute gains (Figure 5).
+
+Limitations: While this work demonstrates the promise of using pre-trained features, it does not investigate the data-mismatch problem that might exist. The real-world datasets used to train ResNet features come from human-centric environments. While we desire robots that operate in similar settings, there are still differences in their morphology and modes of operation. Additionally, ResNet (and similar models) acquire features from data primarily comprising static scenes. In contrast, embodied agents need rich features of dynamic and interactive movements.
+
+Opportunities: RRL uses a single pre-trained representation to solve all of these complex and very different tasks. Unlike the domains of vision and language, there is a nontrivial cost associated with data in robotics. The possibility of a standard shared representational space opens up avenues for leveraging data from various sources, building hardware-accelerated devices using feature compression, and low-latency, low-bandwidth information transmission.
+
+## REFERENCES
+
+[1] Ronald J. Williams. "Simple statistical gradient-following algorithms for connectionist reinforcement learning". In: Machine Learning. 1992, pp. 229-256.
+
+[2] S. Kakade. "A Natural Policy Gradient". In: NIPS. 2001.
+
+[3] Sham M Kakade. "A natural policy gradient". In: Advances in neural information processing systems 14 (2001).
+
+[4] Sergey Levine and Vladlen Koltun. "Guided Policy Search". In: Proceedings of the 30th International Conference on Machine Learning. Ed. by Sanjoy Dasgupta and David McAllester. Vol. 28. Proceedings of Machine Learning Research 3. Atlanta, Georgia, USA: PMLR, 17-19 Jun 2013, pp. 1-9. URL: http://proceedings.mlr.press/v28/levine13.html.
+
+[5] Volodymyr Mnih et al. Playing Atari with Deep Reinforcement Learning. 2013. arXiv: 1312.5602 [cs.LG].
+
+[6] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation Learning: A Review and New Perspectives. 2014. arXiv: 1206.5538 [cs.LG].
+
+[7] Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. 2014. arXiv: 1312.6114 [stat.ML].
+
+[8] Kaiming He et al. Deep Residual Learning for Image Recognition. 2015. arXiv: 1512.03385 [cs.CV].
+
+[9] Volodymyr Mnih et al. "Human-level control through deep reinforcement learning". In: Nature 518.7540 (Feb. 2015), pp. 529-533. ISSN: 00280836. URL: http://dx.doi.org/10.1038/nature14236.
+
+[10] Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. 2015. arXiv: 1409.1556 [cs.CV].
+
+[11] Jost Tobias Springenberg et al. Striving for Simplicity: The All Convolutional Net. 2015. arXiv: 1412.6806 [cs.LG].
+
+[12] Christian Szegedy et al. Rethinking the Inception Architecture for Computer Vision. 2015. arXiv: 1512.00567 [cs.CV].
+
+[13] Irina Higgins et al. "beta-vae: Learning basic visual concepts with a constrained variational framework". In: (2016).
+
+[14] Sergey Levine et al. End-to-End Training of Deep Visuomotor Policies. 2016. arXiv: 1504.00702 [cs.LG].
+
+[15] Abhishek Gupta et al. Learning Dexterous Manipulation for a Soft Robotic Hand from Human Demonstration. 2017. arXiv: 1603.06348 [cs.LG].
+
+[16] Todd Hester et al. Deep Q-learning from Demonstrations. 2017. arXiv: 1704.03732 [cs.AI].
+
+[17] Aravind Rajeswaran et al. "Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations". In: CoRR abs/1709.10087 (2017). arXiv: 1709.10087. URL: http://arxiv.org/abs/1709.10087.
+
+[18] John Schulman et al. Trust Region Policy Optimization. 2017. arXiv: 1502.05477 [cs.LG].
+
+[19] David Silver et al. "Mastering the game of Go without human knowledge". In: Nature 550 (Oct. 2017), pp. 354-359. URL: http://dx.doi.org/10.1038/nature24270.
+
+[20] Christopher P. Burgess et al. Understanding disentangling in $\beta$ -VAE. 2018. arXiv: 1804.03599 [stat.ML].
+
+[21] Lasse Espeholt et al. IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures. 2018. arXiv: 1802.01561 [cs.LG].
+
+[22] David Ha and Jürgen Schmidhuber. Recurrent World Models Facilitate Policy Evolution. 2018. arXiv: 1809.01999 [cs.LG].
+
+[23] David Ha and Jürgen Schmidhuber. "World models". In: arXiv preprint arXiv:1803.10122 (2018).
+
+[24] Tuomas Haarnoja et al. Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. 2018. arXiv: 1801.01290 [cs.LG].
+
+[25] Irina Higgins et al. DARLA: Improving Zero-Shot Transfer in Reinforcement Learning. 2018. arXiv: 1707.08475 [stat.ML].
+
+[26] Dmitry Kalashnikov et al. QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation. 2018. arXiv: 1806.10293 [cs.LG].
+
+[27] Ningning Ma et al. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design. 2018. arXiv: 1807.11164 [cs.CV].
+
+[28] Ashvin Nair et al. Visual Reinforcement Learning with Imagined Goals. 2018. arXiv: 1807.04742 [cs.LG].
+
+[29] Aravind Rajeswaran et al. Towards Generalization and Simplicity in Continuous Control. 2018. arXiv: 1703.02660 [cs.LG].
+
+[30] Pierre Sermanet et al. Time-Contrastive Networks: Self-Supervised Learning from Video. 2018. arXiv: 1704.06888 [cs.CV].
+
+[31] Yuval Tassa et al. DeepMind Control Suite. 2018. arXiv: 1801.00690 [cs.AI].
+
+[32] Lilian Weng. "Policy Gradient Algorithms". In: lilianweng.github.io/lil-log (2018). URL: https://lilianweng.github.io/lil-log/2018/04/08/policy-gradient-algorithms.html.
+
+[33] Henry Zhu et al. Dexterous Manipulation with Deep Reinforcement Learning: Efficient, General, and Low-Cost. 2018. arXiv: 1810.06045 [cs.AI].
+
+[34] Gabriel Dulac-Arnold, Daniel Mankowitz, and Todd Hester. "Challenges of real-world reinforcement learning". In: arXiv preprint arXiv:1904.12901 (2019).
+
+[35] Tuomas Haarnoja et al. Soft Actor-Critic Algorithms and Applications. 2019. arXiv: 1812.05905 [cs.LG].
+
+[36] Danijar Hafner et al. Learning Latent Dynamics for Planning from Pixels. 2019. arXiv: 1811.04551 [cs.LG].
+
+[37] Max Jaderberg et al. "Human-level performance in 3D multiplayer games with population-based reinforcement learning". In: Science 364.6443 (May 2019), pp. 859-865. ISSN: 1095-9203. DOI: 10.1126/science.aau6249. URL: http://dx.doi.org/10.1126/science.aau6249.
+
+[38] Tejas Kulkarni et al. Unsupervised Learning of Object Keypoints for Perception and Control. 2019. arXiv: 1906.11883 [cs.CV].
+
+[39] Lucas Manuelli et al. kPAM: KeyPoint Affordances for Category-Level Robotic Manipulation. 2019. arXiv: 1903.06684 [cs.RO].
+
+[40] Lucas Manuelli et al. "kpam: Keypoint affordances for category-level robotic manipulation". In: arXiv preprint arXiv:1903.06684 (2019).
+
+[41] Anusha Nagabandi et al. Deep Dynamics Models for Learning Dexterous Manipulation. 2019. arXiv: 1909.11652 [cs.RO].
+
+[42] OpenAI et al. Solving Rubik's Cube with a Robot Hand. 2019. arXiv: 1910.07113 [cs.LG].
+
+[43] Zengyi Qin et al. KETO: Learning Keypoint Representations for Tool Manipulation. 2019. arXiv: 1910.11977 [cs.RO].
+
+[44] Mark Sandler et al. MobileNetV2: Inverted Residuals and Linear Bottlenecks. 2019. arXiv: 1801.04381 [cs.CV].
+
+[45] Ramprasaath R. Selvaraju et al. "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization". In: International Journal of Computer Vision 128.2 (Oct. 2019), pp. 336-359. ISSN: 1573-1405. DOI: 10.1007/s11263-019-01228-7. URL: http://dx.doi.org/10.1007/s11263-019-01228-7.
+
+[46] Michael Ahn et al. "ROBEL: RObotics BEnchmarks for Learning with low-cost robots". In: Conference on Robot Learning. PMLR. 2020, pp. 1300-1313.
+
+[47] Danijar Hafner et al. Dream to Control: Learning Behaviors by Latent Imagination. 2020. arXiv: 1912.01603 [cs.LG].
+
+[48] Ilya Kostrikov, Denis Yarats, and Rob Fergus. Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels. 2020. arXiv: 2004.13649 [cs.LG].
+
+[49] Michael Laskin et al. Reinforcement Learning with Augmented Data. 2020. arXiv: 2004.14990 [cs.LG].
+
+[50] Aravind Rajeswaran, Igor Mordatch, and Vikash Kumar. A Game Theoretic Framework for Model Based Reinforcement Learning. 2020. arXiv: 2004.07804 [cs.LG].
+
+[51] Aravind Srinivas, Michael Laskin, and Pieter Abbeel. CURL: Contrastive Unsupervised Representations for Reinforcement Learning. 2020. arXiv: 2004.04136 [cs.LG].
+
+[52] Adam Stooke et al. Decoupling Representation Learning from Reinforcement Learning. 2020. arXiv: 2009.08319 [cs.LG].
+
+[53] A.K. Subramanian. PyTorch-VAE. https://github.com/AntixK/PyTorch-VAE. 2020.
+
+[54] Mingxing Tan and Quoc V. Le. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. 2020. arXiv: 1905.11946 [cs.LG].
+
+[55] Denis Yarats and Ilya Kostrikov. Soft Actor-Critic (SAC) implementation in PyTorch. https://github.com/denisyarats/pytorch_sac. 2020.
+
+[56] Denis Yarats et al. Improving Sample Efficiency in Model-Free Reinforcement Learning from Images. 2020. arXiv: 1910.01741 [cs.LG].
+
+[57] Yang You et al. KeypointNet: A Large-scale 3D Keypoint Dataset Aggregated from Numerous Human Annotations. 2020. arXiv: 2002.12687 [cs.CV].
+
+[58] Albert Zhan et al. A Framework for Efficient Robotic Manipulation. 2020. arXiv: 2012.07975 [cs.RO].
+
+[59] Henry Zhu et al. The Ingredients of Real-World Robotic Reinforcement Learning. 2020. arXiv: 2004.12570 [cs.LG].
+
+[60] Rewon Child. Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images. 2021. arXiv: 2011.10650 [cs.LG].
+
+[61] Austin Stone et al. The Distracting Control Suite - A Challenging Benchmark for Reinforcement Learning from Pixels. 2021. arXiv: 2101.02722 [cs.RO].
+
+[62] Chelsea Finn et al. "Learning Visual Feature Spaces for Robotic Manipulation with Deep Spatial Autoencoders". In: ( ).
+
+## VII. APPENDIX
+
+## A. Project's webpage
+
+Full details of the project (including video results, codebase, etc.) are available at https://sites.google.com/view/abstractions4rl.
+
+## B. Overview of all methods used in baselines and ablations
+
+The environment settings and the feature extractors used in all of the baselines and ablations considered are summarized in Table VII-B.
+
+| Method | Vision (RGB) | Joint Encoders | Environment State | Latent Features | Demos | Rewards |
+| --- | --- | --- | --- | --- | --- | --- |
+| RRL(Ours) | ✓ | ✓ | | Resnet34 | ✓ | Sparse |
+| RRL(Resnet18) | ✓ | ✓ | | Resnet18 | ✓ | Sparse |
+| RRL(Resnet50) | ✓ | ✓ | | Resnet50 | ✓ | Sparse |
+| RRL(VAE) | ✓ | ✓ | | VAE | ✓ | Sparse |
+| RRL(Vision) | ✓ | | | Resnet34 | ✓ | Sparse |
+| FERM | ✓ | ✓ | | | ✓ | Sparse |
+| NPG(State) | | ✓ | ✓ | | | Sparse |
+| NPG(Vision) | ✓ | | | Resnet34 | | Sparse |
+| DAPG(State) | | ✓ | ✓ | | ✓ | Sparse |
+| RRL(Sparse) | ✓ | ✓ | | Resnet34 | ✓ | Sparse |
+| RRL(Dense) | ✓ | ✓ | | Resnet34 | ✓ | Dense |
+| RRL(Noise) | ✓ | ✓ | | Resnet34 | ✓ | Sparse |
+| RRL(Vision + Sensors) | ✓ | ✓ | | Resnet34 | ✓ | Sparse |
+| RRL(ShuffleNet) | ✓ | ✓ | | ShuffleNet-v2 | ✓ | Sparse |
+| RRL(MobileNet) | ✓ | ✓ | | MobileNet-v2 | ✓ | Sparse |
+| RRL(vdvae) | ✓ | ✓ | | Very Deep VAE | ✓ | Sparse |
+
+## C. RRL(Ours)
+
+| Parameters | Setting |
+| --- | --- |
+| BC batch size | 32 |
+| BC epochs | 5 |
+| BC learning rate | 0.001 |
+| Policy size | (256, 256) |
+| vf_batch_size | 64 |
+| vf_epochs | 2 |
+| rl_step_size | 0.05 |
+| rl_gamma | 0.995 |
+| rl_gae | 0.97 |
+| lam_0 | 0.01 |
+| lam_1 | 0.95 |
+
+TABLE II
+
+HYPERPARAMETER DETAILS FOR ALL THE RRL VARIATIONS.
+
+The same parameters are used across all tasks (Pen, Door, Hammer, Relocate, PegInsertion, Reacher) unless explicitly mentioned. The sparse reward setting proposed by Rajeswaran et al. is used in all hand manipulation environments, along with 25 expert demonstrations. We directly use the parameters provided by DAPG (summarized in Table II) without any additional hyperparameter tuning, except for the policy size (which is kept the same across all tasks). On the Adroit manipulation tasks, 200 trajectories per iteration are collected for Hammer-v0, Door-v0, and Relocate-v0, and 400 trajectories per iteration for Pen-v0.
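The lam_0 and lam_1 entries in Table II control how strongly demonstrations influence DAPG's augmented policy gradient: following Rajeswaran et al., the demonstration term is weighted by lam_0 * lam_1^k at iteration k, so demos dominate early and RL takes over later. A minimal sketch (function name and signature are ours, not the paper's codebase):

```python
def dapg_demo_weight(lam_0: float, lam_1: float, iteration: int,
                     max_advantage: float = 1.0) -> float:
    """Weight on the demonstration term of DAPG's augmented gradient.

    The geometric decay lam_0 * lam_1**k anneals the imitation signal
    as the policy improves; max_advantage scales it relative to the
    largest on-policy advantage, following Rajeswaran et al.
    """
    return lam_0 * (lam_1 ** iteration) * max_advantage


# With the Table II settings (lam_0 = 0.01, lam_1 = 0.95), the demo term
# starts at 1% of the largest on-policy advantage and shrinks every iteration.
```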
+
+## D. Results on MJRL Environment
+
+We benchmark the performance of RRL on two of the MJRL environments [50], Reacher and Peg Insertion, in Figure 10. These environments are quite low dimensional (a 7-DoF robotic arm) compared to the Adroit hand (24 DoF) but still require a rich understanding of the task. In the Peg Insertion task, RRL delivers results comparable to the state-based baseline (DAPG(State)) and significantly outperforms FERM. However, in the Reacher task, we notice that DAPG(State) and FERM perform surprisingly well whereas RRL initially struggles. This highlights that using task-specific representations in simple, low-dimensional environments might be beneficial, since it is easy to overfit the feature encoder to the task at hand, while the Resnet features are quite generic. For the MJRL environments, the shaped reward setting provided in the repository${}^{2}$ is used, along with 200 expert demonstrations. For the Peg Insertion task 200 trajectories, and for the Reacher task 400 trajectories, are collected per iteration.
+
+
+
+Fig. 10. Results on the MJRL environments. RRL outperforms FERM and delivers results on par with DAPG(State) in the PegInsertion task. In Reacher, FERM outperforms RRL, suggesting that learning task-specific representations is easier in simple tasks.
+
+## E. Other variations of RRL
+
+a) RRL(MobileNet), RRL(ShuffleNet): The encoders (ShuffleNet [27] and MobileNet [44]) are pretrained on the ImageNet dataset using a classification objective. We take the pretrained models directly from torchvision and freeze their parameters during the entire training of the RL agent. As in RRL(Ours), the last layer of each model is removed, yielding latent features of dimension 1024 for ShuffleNet and 1280 for MobileNet.
+
+b) RRL(vdvae): We use a very recent state-of-the-art hierarchical VAE [60] that is trained on the ImageNet dataset. The code along with the pretrained weights is made publicly available${}^{3}$ by the author. We use the intermediate features of the encoder, of dimension 512. All parameters are frozen, as in RRL(Ours).
+
+## F. DMControl Experiment Details
+
+For RAD [49], CURL [51], SAC+AE [56], and State SAC [35], we report the numbers directly provided by Laskin et al. For SAC+RRL, Resnet34 is used as a fixed feature extractor and the past three output features (frame_stack $= 3$) are used to represent the state information in the SAC algorithm. For the fixed RAD encoder, we train the RL agent along with the RAD encoder using the default hyperparameters provided by the authors for the Cartpole environment; we then use the trained encoder as a fixed feature extractor and retrain the policies for all tasks. The action_repeat (frame skip) values are task specific, as mentioned in [56] and outlined in Table IV. The hyperparameters used are summarized in Table III, where a grid search is made over actor_lr $\in \{1\mathrm{e}{-}3, 1\mathrm{e}{-}4\}$, critic_lr $\in \{1\mathrm{e}{-}3, 1\mathrm{e}{-}4\}$, critic_update_freq $\in \{1, 2\}$, and critic_tau $\in \{0.01, 0.05, 0.1\}$, and an average over 3 seeds is reported. The SAC implementation in PyTorch is courtesy of [55].
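The frame_stack $= 3$ mechanism can be sketched as a small wrapper that concatenates the last three encoder outputs into one SAC state vector (the class name and dimensions here are ours, for illustration):

```python
from collections import deque

import numpy as np


class FeatureFrameStack:
    """Concatenate the most recent k encoder feature vectors into one
    state vector, mirroring frame_stack = 3 over Resnet34 features."""

    def __init__(self, k: int = 3):
        self.k = k
        self.frames = deque(maxlen=k)

    def reset(self, feat: np.ndarray) -> np.ndarray:
        # At episode start the buffer is filled with the first frame.
        self.frames.clear()
        for _ in range(self.k):
            self.frames.append(feat)
        return self.state()

    def step(self, feat: np.ndarray) -> np.ndarray:
        self.frames.append(feat)  # deque drops the oldest frame automatically
        return self.state()

    def state(self) -> np.ndarray:
        return np.concatenate(list(self.frames))
```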
+
+## G. RRL(VAE)
+
+
+
+For training, we collected a dataset of 1 million images of size $64 \times 64$. Of these, 25% of the images are collected using an optimal course of actions (expert policy), 25% with a little noise (expert policy + small noise), 25% with an even higher level of noise (expert policy + large noise), and the remaining portion by randomly sampling actions (random actions). This ensures that the collected images sufficiently represent the distribution faced by the policy during the training of the agent. We observed that this helps significantly compared to collecting data only from the expert policy. The variational autoencoder (VAE) is trained using a reconstruction objective [7] for 10 epochs. Figure 11 showcases the reconstructed images. We used a latent size of 512 for a fair comparison with Resnet. The weights of the encoder are frozen and used as a feature extractor in place of Resnet in RRL. RRL(VAE) also uses the inputs from the proprioceptive sensors along with the encoded features. The VAE implementation is courtesy of [53].
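The 25/25/25/25 mixture can be sketched as a single action-selection helper; the noise scales below are illustrative placeholders, not the paper's values.

```python
import numpy as np


def choose_action(mode: int, expert_action: np.ndarray,
                  rng: np.random.Generator,
                  small_noise: float = 0.1,
                  large_noise: float = 0.5) -> np.ndarray:
    """Pick the action source for one frame of the VAE dataset.

    Modes 0-3 correspond to the four 25% slices described above:
    expert, expert + small noise, expert + large noise, random.
    """
    dim = expert_action.shape[0]
    if mode == 0:
        return expert_action
    if mode == 1:
        return expert_action + small_noise * rng.standard_normal(dim)
    if mode == 2:
        return expert_action + large_noise * rng.standard_normal(dim)
    return rng.uniform(-1.0, 1.0, size=dim)


# Cycling mode = i % 4 over the 1M frames yields the 25/25/25/25 split.
```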
+
+---
+
+${}^{2}$ https://github.com/aravindr93/mjrl
+
+${}^{3}$ https://github.com/openai/vdvae
+
+---
+
+| Parameter | Setting |
+| --- | --- |
+| frame_stack | 3 |
+| replay_buffer_capacity | 100000 |
+| init_steps | 1000 |
+| batch_size | 128 |
+| hidden_dim | 1024 |
+| critic_lr | 1e-3 |
+| critic_beta | 0.9 |
+| critic_tau | 0.01 |
+| critic_target_update_freq | 2 |
+| actor_lr | 1e-3 |
+| actor_beta | 0.9 |
+| actor_log_std_min | -10 |
+| actor_log_std_max | 2 |
+| actor_update_freq | 2 |
+| discount | 0.99 |
+| init_temperature | 0.1 |
+| alpha_lr | 1e-4 |
+| alpha_beta | 0.5 |
+
+TABLE III
+
+SAC HYPERPARAMETERS.
+
+| Environment | action_repeat |
+| --- | --- |
+| Cartpole, Swing | 8 |
+| Reacher, Easy | 4 |
+| Cheetah, Run | 4 |
+| Cup, Catch | 4 |
+| Walker, Walk | 2 |
+| Finger, Spin | 2 |
+
+TABLE IV
+
+ACTION REPEAT VALUES FOR DMCONTROL SUITE
+
+## H. Visual Distractor Evaluation details
+
+
+
+Fig. 12. COL1: Original images; COL2: Change in light position; COL3: Change in light direction; COL4: Randomized object colors; COL5: A random object introduced into the scene. All parameters are randomly re-sampled at every episode.
+
+In order to test the generalisation performance of RRL and FERM [58], we subject the environment to various kinds of visual distractions during inference (Figure 12). Note that all parameters are frozen during this evaluation, and an average performance over 75 rollouts is reported. The following distractors were used during inference to test the robustness of the final policy:
+
+- Random change in light position.
+
+- Random change in light direction.
+
+- Random object color (handle and door color for Door-v0; different hammer parts and the nail for Hammer-v0).
+
+- Introducing a new object into the scene, with random color, position, size, and geometry (Sphere, Capsule, Ellipsoid, Cylinder, Box).
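The per-episode randomization above can be sketched as a single sampler that draws one set of distractor parameters; the concrete ranges and dictionary keys are illustrative, not the exact values used in the evaluation.

```python
import numpy as np

SHAPES = ["sphere", "capsule", "ellipsoid", "cylinder", "box"]


def sample_distractors(rng: np.random.Generator, n_geoms: int) -> dict:
    """Sample one set of visual distractor parameters for an episode,
    mirroring the four distractor types listed above."""
    return {
        "light_pos_offset": rng.uniform(-0.5, 0.5, size=3),
        "light_dir_offset": rng.uniform(-0.2, 0.2, size=3),
        # One random RGBA per body part (alpha fixed at 1).
        "geom_rgba": np.column_stack(
            [rng.uniform(0.0, 1.0, size=(n_geoms, 3)), np.ones(n_geoms)]
        ),
        "extra_object": {
            "geometry": SHAPES[rng.integers(len(SHAPES))],
            "color": rng.uniform(0.0, 1.0, size=3),
            "position": rng.uniform(-0.25, 0.25, size=3),
            "size": rng.uniform(0.01, 0.05),
        },
    }
```

Calling this once per episode and writing the result into the simulator's scene reproduces the "re-sampled every episode" protocol.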
+
+## I. Compute Cost calculation
+
+We calculate the actual compute cost involved for all the methods considered (RRL(Ours), FERM, RRL(Resnet-50), RRL(Resnet-18)). Since in a real-world scenario there is no simulation of the environment, we do not include the cost of simulation in the calculation. For a fair comparison, we show the compute cost at the same sample complexity (4 million steps) for all methods. FERM is quite compute intensive (almost 5x RRL(Ours)) because (a) data augmentation is applied at every step and (b) the parameters of the actor and critic are updated once or twice at every step (the compute results shown are with one update per step), whereas most of the computation of RRL goes into encoding features with Resnet. The cost of VAE pretraining is included in the overall cost. RRL(Ours), which uses Resnet-34, strikes a balance between computational cost and performance. Note: no parallel processing is used while calculating the cost.
\ No newline at end of file
diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/R-W8K2RyVp7/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/R-W8K2RyVp7/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..26481a6daf634d751b497a77d520c40254d0a8c3
--- /dev/null
+++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/R-W8K2RyVp7/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,332 @@
+§ RRL: RESNET AS REPRESENTATION FOR REINFORCEMENT LEARNING
+
+Rutav Shah ${}^{1}$ and Vikash Kumar ${}^{2,3}$
+
+Abstract-Generalist robots capable of performing dexterous, contact-rich manipulation tasks will enhance productivity and provide care in un-instrumented settings like homes. Such tasks warrant operation in the real world using only the robot's proprioceptive sensors, such as onboard cameras, joint encoders, etc., which can be challenging for policy learning owing to high dimensionality and partial observability. We propose RRL: Resnet as representation for Reinforcement Learning - a straightforward yet effective approach that can learn complex behaviors directly from proprioceptive inputs. RRL fuses features extracted from a pre-trained Resnet into the standard reinforcement learning pipeline and delivers results comparable to learning directly from the state. In a simulated dexterous manipulation benchmark, where state-of-the-art methods fail to make significant progress, RRL delivers contact-rich behaviors. The appeal of RRL lies in its simplicity in bringing together progress from the fields of Representation Learning, Imitation Learning, and Reinforcement Learning. Its effectiveness in learning behaviors directly from visual inputs with performance and sample efficiency matching learning directly from the state, even in complex high dimensional domains, is far from obvious.
+
+§ I. INTRODUCTION
+
+Recently, reinforcement learning (RL) has seen tremendous momentum and progress [9, 19, 37, 21] in learning complex behaviors from states [18, 24, 17]. Most success stories, however, are limited to simulations or instrumented laboratory conditions, as the real world does not provide direct access to its internal state. Beyond learning from state, visual observation spaces have also found reasonable success [26, 42]. However, the majority of these methods have been tested on low-dimensional, 2D tasks [31] that lack depth information. Contact-rich manipulation tasks, on the other hand, are high dimensional and necessitate intricate details in order to be completed successfully. In order to deliver on the promise presented by data-driven techniques, we need efficient techniques that can learn complex behaviors unobtrusively, without the need for environment instrumentation.
+
+Learning without environment instrumentation, especially in unstructured settings like homes, can be quite challenging [59, 34, 46]. Challenges include: (a) decision making with incomplete information owing to partial observability, as the agent must rely only on proprioceptive on-board sensors (vision, touch, joint position encoders, etc.) to perceive and act; (b) the influx of sensory information making the input space quite high dimensional; (c) information contamination due to sensory noise and task-irrelevant conditions like lighting, shadows, etc.; and (d) most importantly, the scene being flushed with information irrelevant to the task (background, clutter, etc.). An agent learning under these constraints is forced to take a large number of samples simply to untangle these task-irrelevant details before it makes any progress on the true task objective. A common approach to handle these high-dimensionality and multi-modality issues is to learn representations that distil information into low dimensional features and use them as inputs to the policy. While such ideas have found reasonable success [43, 40], designing such representations in a supervised manner requires a deep understanding of the problem and domain expertise. An alternative approach is to leverage unsupervised representation learning to autonomously acquire representations based on either a reconstruction [13, 59, 56] or a contrastive [51, 52] objective. These methods are quite brittle, as the representations are acquired from narrow task-specific distributions [61] and hence do not generalize well across different tasks (Table II). Additionally, they acquire task-specific representations, often needing additional samples from the environment (leading to poor sample efficiency) or domain-specific data augmentations for training the representations.
+
+
+Fig. 1. RRL: Resnet as representation for Reinforcement Learning takes a small step toward bridging the gap between representation learning and reinforcement learning. RRL pre-trains an encoder on a wide variety of real-world classes (the ImageNet dataset) using a simple supervised classification objective. Since the encoder is exposed to a much wider distribution of images during pretraining, it remains effective for whatever distribution the policy might induce during the training of the agent. This allows us to freeze the encoder after pretraining without any additional effort.
+
+The key idea behind our method stems from an intuitive observation about the desiderata of a good representation, i.e.: (a) it should be low dimensional, for compactness; (b) it should capture salient features encapsulating the diversity and variability present in a real-world task, for better generalization performance; (c) it should be robust to irrelevant information like noise, lighting, and viewpoints, so that it is resilient to changes in the surroundings; and (d) it should provide an effective representation over the entire distribution a policy can induce, for effective learning. These requirements are quite harsh, needing extreme domain expertise to design representations manually and an abundance of samples to acquire them automatically. Can we acquire such a representation without any additional effort? Our work takes a very small step in this direction.
+
+${}^{1}$ Department of Computer Science and Engineering, Indian Institute of Technology, Kharagpur, India rutavms@gmail.com
+
+${}^{2}$ Department of Computer Science, University of Washington, Seattle, USA vikash@cs.washington.edu
+
+${}^{3}$ Facebook AI Research, USA
+
+The key insight behind our method (Figure 1) is embarrassingly simple - representations do not necessarily have to be trained on the exact task distribution; a representation trained on a sufficiently wide distribution of real-world scenarios will remain effective on any distribution that a policy optimizing a task in the real world might induce. While training over such a wide distribution is demanding, this is precisely what the success of large image classification models [8, 10, 54, 12] in computer vision delivers - representations learned over a large family of real-world scenarios.
+
+Our Contributions: We list our major contributions below.
+
+1) We present a surprisingly simple method (RRL) at the intersection of representation learning, imitation learning (IL), and reinforcement learning (RL) that uses features from pre-trained image classification models (Resnet34) as representations in the standard RL pipeline. Our method is quite general and can be incorporated with minimal changes into most state-based RL/IL algorithms.
+
+2) Task-specific representations learned by supervised as well as unsupervised methods are usually brittle and suffer from distribution mismatch. We demonstrate that features learned by image classification models are general across different tasks (Figure 2), robust to visual distractors, and, when used in conjunction with standard IL and RL pipelines, can efficiently acquire policies directly from proprioceptive inputs.
+
+3) While competing methods have restricted their results primarily to planar tasks devoid of depth perspective, we demonstrate, on a rich collection of simulated high dimensional dexterous manipulation tasks where state-of-the-art methods struggle, that RRL can learn rich behaviors directly from visual inputs with performance and sample efficiency approaching state-based methods.
+
+4) Additionally, we underline the performance gap between the SOTA approaches and RRL on simple low dimensional tasks as well as on high dimensional, more realistic tasks. Furthermore, we experimentally establish that the environments commonly used for studying image-based continuous control methods are not truly representative of real-world scenarios.
+
+§ II. RELATED WORK
+
+RRL rests on recent developments from the fields of Representation Learning, Imitation Learning and Reinforcement Learning. In this section, we outline related works leveraging representation learning for visual reinforcement and imitation learning.
+
+§ A. LEARNING WITHOUT EXPLICIT REPRESENTATION
+
+A common approach is to learn behaviors in an end-to-end fashion - from pixels to actions - without an explicit distinction between feature representations and policy representations. Success stories in this category range from the seminal work of [5], mastering Atari 2600 computer games using only raw pixels as input, to [14], which learns trajectory-centric local policies using Guided Policy Search [4] for diverse continuous control manipulation tasks in the real world, directly from camera inputs. More recently, [35] has demonstrated success in acquiring multi-finger dexterous manipulation [33] and agile locomotion behaviors using off-policy actor-critic methods [24]. While learning directly from pixels has found reasonable success, it requires training large networks with high input dimensionality. Agents require a prohibitively large number of samples to untangle task-relevant information in order to acquire behaviors, limiting their application to simulations or constrained lab settings. RRL maintains an explicit representation network to extract low dimensional features. Decoupling representation learning from policy learning delivers results with large gains in efficiency. Next, we outline related works that use explicit representations.
+
+
+Fig. 2. Visualization of layer 4 of the Resnet model for the top-1 class using Grad-CAM [45] [top] and Guided Backpropagation [11] [bottom]. This indicates that Resnet is indeed attending to the right features in our task images (right) in spite of the large distributional shift.
+
+§ B. LEARNING WITH SUPERVISED REPRESENTATIONS
+
+Another approach is to first acquire representations using expert supervision, and then use features extracted from the representation as inputs to standard policy learning pipelines. A predominant idea is to learn representative keypoints encapsulating task details from the input images and to use the extracted keypoints as a replacement for the state information [38]. Using these techniques, [43, 39] demonstrated tool manipulation behaviors in rich scenes flushed with task-irrelevant details. [41] demonstrated simultaneous manipulation of multiple objects in the Baoding balls task on a high dimensional dexterous manipulation hand. Along with the inbuilt proprioceptive sensing at each joint, they use an RGB stereo image pair fed into a separate pre-trained tracker to produce 3D position estimates [57] for the two Baoding balls. These methods, while powerful, learn task-specific features and require expert supervision, making it harder to (a) translate to variations in tasks/environments, and (b) scale with increasing task diversity. RRL, on the other hand, uses a single task-agnostic representation with better generalization capability, making it easy to scale.
+
+§ C. LEARNING WITH UNSUPERVISED REPRESENTATIONS
+
+With the ambition of being scalable, this group of methods intends to acquire representations via unsupervised techniques. [30] uses contrastive learning to time-align visual features across different embodiments to demonstrate behavior transfer from a human to a Fetch robot. [20] and [62, 59] use variational inference [7, 20] to learn compressed latent representations and use them as input to a standard RL pipeline to demonstrate rich manipulation behaviors. [47] additionally learns dynamics models directly in the latent space and uses model-based RL to acquire behaviors on simulated tasks. On similar tasks, [36] uses multi-step variational inference to learn world dynamics as well as reward models for off-policy RL. [51] uses image augmentation with variational inference to construct features for a standard RL pipeline and demonstrates performance on par with learning directly from the state. [49, 48] demonstrate comparable results by assimilating updates over features acquired only via image augmentation. Similar to supervised methods, unsupervised methods often learn task-specific, brittle representations: they break when subjected to small variations in the surroundings and often suffer from the non-stationarity arising from the mismatch between the distribution the representations are learned on and the distribution the policy induces. To induce stability, RRL uses pre-trained stationary representations trained on a distribution with wider support than what the policy can induce. Additionally, representations learned over a wide distribution of real-world samples are robust to noise and irrelevant information like lighting, illumination, etc.
+
+§ D. LEARNING WITH REPRESENTATIONS AND DEMONSTRATIONS
+
+Learning from demonstrations has a rich history. We focus our discussion on DAPG [17], a state-based method which optimizes for the natural gradient [2] of a joint loss with both imitation and reinforcement objectives. DAPG has been demonstrated to outperform competing methods [15, 16] on the high dimensional ADROIT dexterous manipulation task suite we test on. RRL extends DAPG to solve the task suite directly from proprioceptive signals with performance and sample efficiency comparable to state-DAPG. Unlike DAPG, which is on-policy, FERM [58] is a closely related off-policy actor-critic method combining learning from demonstrations with RL. FERM builds on RAD [49] and inherits its challenges, such as learning task-specific representations. We demonstrate via experiments that RRL is more stable, more robust to various distractors, and convincingly outperforms FERM, since RRL uses a fixed feature extractor pre-trained over a wide variety of real-world images and avoids learning task-specific representations.
+
+§ III. BACKGROUND
+
+RRL solves a standard Markov decision process (Section III-A) by combining three fundamental building blocks - (a) Policy gradient algorithm (Section III-B), (b) Demonstration bootstrapping (Section III-C), and (c) Representation learning (Section III-D). We briefly outline these fundamentals before detailing our method in Section IV.
+
+§ A. PRELIMINARIES: MDP
+
+We model the control problem as a Markov decision process (MDP), defined by the tuple $\mathcal{M} = (\mathcal{S}, \mathcal{A}, \mathcal{R}, \mathcal{T}, \rho_0, \gamma)$. $\mathcal{S} \in \mathbb{R}^n$ and $\mathcal{A} \in \mathbb{R}^m$ represent the state and action spaces. $\mathcal{R} : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ is the reward function; in the ideal case, this function is simply an indicator for task completion (the sparse reward setting). $\mathcal{T} : \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{S}$ is the transition dynamics, which can be stochastic. In model-free RL, we do not assume any knowledge about the transition function and require only sampling access to it. $\rho_0$ is the probability distribution over initial states and $\gamma \in [0, 1)$ is the discount factor. We wish to solve for a stochastic policy of the form $\pi : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ which optimizes the expected sum of discounted rewards:
+
+$$
+\eta \left( \pi \right) = {\mathbb{E}}_{\pi ,\mathcal{M}}\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}{r}_{t}}\right\rbrack \tag{1}
+$$
+
+§ B. POLICY GRADIENT
+
The goal of the RL agent is to maximise the expected discounted return $\eta \left( \pi \right)$ (Equation 1) under the distribution induced by the current policy $\pi$ . Policy gradient algorithms optimize the policy ${\pi }_{\theta }\left( {a \mid s}\right)$ directly, where $\theta$ are the policy parameters, by estimating $\nabla \eta \left( \pi \right)$ . We first introduce a few standard notations: the value function ${V}^{\pi }\left( s\right)$ , the Q function ${Q}^{\pi }\left( {s,a}\right)$ , and the advantage function ${A}^{\pi }\left( {s,a}\right)$ . The advantage function can be viewed as a lower-variance counterpart of the Q-value, obtained by subtracting the state value as a baseline.
+
+$$
+{V}^{\pi }\left( s\right) = {\mathbb{E}}_{\pi \mathcal{M}}\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}{r}_{t} \mid {s}_{0} = s}\right\rbrack
+$$
+
+$$
+{Q}^{\pi }\left( {s,a}\right) = {\mathbb{E}}_{\mathcal{M}}\left\lbrack {\mathcal{R}\left( {s,a}\right) }\right\rbrack + {\mathbb{E}}_{{s}^{\prime } \sim \mathcal{T}\left( {f,d}\right) }\left\lbrack {{V}^{\pi }\left( {s}^{\prime }\right) }\right\rbrack
+$$
+
+$$
+{A}^{\pi }\left( {s,a}\right) = {Q}^{\pi }\left( {s,a}\right) - {V}^{\pi }\left( s\right)
+$$
+
+(2)
+
The gradient can be estimated with the likelihood-ratio approach, exploiting the Markov property of the problem [1], via a sampling-based strategy:
+
+$$
+\nabla \eta \left( \pi \right) = g = \frac{1}{NT}\mathop{\sum }\limits_{{i = 0}}^{N}\mathop{\sum }\limits_{{t = 0}}^{T}{\nabla }_{\theta }\log {\pi }_{\theta }\left( {{a}_{t}^{i} \mid {s}_{t}^{i}}\right) {\widehat{A}}^{\pi }\left( {{s}_{t}^{i},{a}_{t}^{i},t}\right) \tag{3}
+$$
+
+Amongst the wide collection of policy gradient algorithms, we build upon Natural Policy Gradient (NPG) [2] to solve our MDP formulation owing to its stability and effectiveness in solving complex problems. We refer to [32] for a detailed background on different policy gradient approaches. In the next section, we describe how human demonstrations can be effectively used along with NPG to aid policy optimization.
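A minimal sketch of the estimator in Eq. 3 for a hypothetical linear-softmax policy (the policy class and all values are illustrative, not the paper's network):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def policy_gradient(theta, samples):
    """Sample-based estimator of Eq. 3 for a toy linear-softmax policy
    pi_theta(a|s) = softmax(theta @ s). `samples` is a list of
    (state, action, advantage) tuples."""
    g = np.zeros_like(theta)
    for s, a, adv in samples:
        probs = softmax(theta @ s)
        onehot = np.eye(theta.shape[0])[a]
        # grad_theta log pi(a|s) for a softmax policy: (1{a} - pi(.|s)) outer s
        g += np.outer(onehot - probs, s) * adv
    return g / len(samples)

# A positive-advantage action becomes more likely after a small ascent step.
theta = np.zeros((2, 3))
s = np.array([1.0, 0.0, 0.0])
g = policy_gradient(theta, [(s, 0, 1.0)])
theta += 0.5 * g
print(softmax(theta @ s)[0])  # > 0.5: the reinforced action gained probability
```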
+
+§ C. DEMO AUGMENTED POLICY GRADIENT
+
Policy gradients with appropriately shaped rewards can solve arbitrarily complex tasks. However, real-world environments seldom provide shaped rewards, and they must be manually specified by domain experts. Learning with sparse signals, such as task-completion indicator functions, relaxes the need for domain expertise in reward shaping but results in extremely high sample complexity due to exploration challenges. DAPG [17] combines policy gradients with a few demonstrations in two ways to mitigate this issue and learn from them effectively. We represent the demonstration dataset as ${\rho }_{D} = \left\{ \left( {{s}_{t}^{\left( i\right) },{a}_{t}^{\left( i\right) },{s}_{t + 1}^{\left( i\right) },{r}_{t}^{\left( i\right) }}\right) \right\}$ where $t$ indexes time and $i$ indexes different trajectories.
+
(1) Warm up the policy using a few demonstrations (25 in our setting) with a simple Mean Squared Error (MSE) loss, i.e., initialize the policy using behavior cloning [Eq. 4]. This provides an informed policy initialization that helps resolve the early exploration issue, as the policy now attends to task-relevant state-action pairs and thereby reduces the sample complexity.
+
+$$
+{L}_{BC}\left( \theta \right) = \frac{1}{2}\mathop{\sum }\limits_{{i,t \in \text{ minibatch }}}{\left( {\pi }_{\theta }\left( {s}_{t}^{\left( i\right) }\right) - {a}_{t}^{\left( i\right) H}\right) }^{2} \tag{4}
+$$
+
where $\theta$ are the agent parameters and ${a}_{t}^{\left( i\right) H}$ represents the action taken by the human expert.
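A minimal sketch of this behavior-cloning warm-up (Eq. 4) for a hypothetical linear policy on synthetic data; the actual method trains a neural network policy on the 25 human demonstrations:

```python
import numpy as np

def bc_loss_and_grad(theta, states, expert_actions):
    """Behavior-cloning MSE loss of Eq. 4 for a toy linear deterministic
    policy pi_theta(s) = theta @ s (a stand-in for the actual network)."""
    err = states @ theta.T - expert_actions   # (N, action_dim) residuals
    loss = 0.5 * np.sum(err ** 2)             # Eq. 4
    grad = err.T @ states                     # d loss / d theta
    return loss, grad

# Fitting a synthetic "expert" dataset drives the BC loss toward zero,
# giving the informed initialization used before RL fine-tuning.
rng = np.random.default_rng(0)
states = rng.normal(size=(32, 4))
expert_actions = states @ rng.normal(size=(2, 4)).T
theta = np.zeros((2, 4))
for _ in range(1000):
    loss, grad = bc_loss_and_grad(theta, states, expert_actions)
    theta -= 0.01 * grad
print(loss)  # approximately 0 after fitting
```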
+
(2) DAPG builds upon the on-policy NPG algorithm [2], which uses a normalized gradient ascent procedure where the normalization is under the Fisher metric.
+
+$$
+{\theta }_{k + 1} = {\theta }_{k} + \sqrt{\frac{\delta }{{g}^{T}{\widehat{F}}_{{\theta }_{k}}^{-1}g}}{\widehat{F}}_{{\theta }_{k}}^{-1}g \tag{5}
+$$
+
where ${\widehat{F}}_{{\theta }_{k}}$ is the Fisher information matrix at the current iterate ${\theta }_{k}$ ,
+
+$$
+{\widehat{F}}_{{\theta }_{k}} = \frac{1}{T}\mathop{\sum }\limits_{{t = 0}}^{T}{\nabla }_{\theta }\log {\pi }_{\theta }\left( {{a}_{t} \mid {s}_{t}}\right) {\nabla }_{\theta }\log {\pi }_{\theta }{\left( {a}_{t} \mid {s}_{t}\right) }^{T} \tag{6}
+$$
+
and $g$ is the sample-based estimate of the policy gradient [Eq. 3]. To make the best use of available demonstrations, DAPG proposes an augmented gradient ${g}_{\text{ aug }}$ combining the task and imitation objectives. The imitation objective decays asymptotically over time, allowing the agent to learn behaviors surpassing the expert.
+
+$$
+{g}_{\text{ aug }} = \mathop{\sum }\limits_{{\left( {s,a}\right) \in {\rho }_{\pi }}}{\nabla }_{\theta }\ln {\pi }_{\theta }\left( {a \mid s}\right) {A}^{\pi }\left( {s,a}\right) \tag{7}
+$$
+
+$$
++ \mathop{\sum }\limits_{{\left( {s,a}\right) \in {\rho }_{D}}}{\nabla }_{\theta }\ln {\pi }_{\theta }\left( {a \mid s}\right) w\left( {s,a}\right)
+$$
+
where ${\rho }_{\pi }$ is the dataset obtained by executing the current policy, ${\rho }_{D}$ is the demonstration data, and $w\left( {s,a}\right)$ is the heuristic weighting function defined as:
+
+$$
+w\left( {s,a}\right) = {\lambda }_{0}{\lambda }_{1}^{k}\mathop{\max }\limits_{{\left( {{s}^{\prime },{a}^{\prime }}\right) \in {\rho }_{\pi }}}{A}^{\pi }\left( {{s}^{\prime },{a}^{\prime }}\right) \;\forall \;\left( {s,a}\right) \in {\rho }_{D} \tag{8}
+$$
+
DAPG has proven successful in learning policies for dexterous manipulation tasks with reasonable sample complexity.
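Putting Eqs. 5-8 together, one DAPG iteration can be sketched as follows (numpy, with illustrative array shapes; `damping` is a numerical-stability assumption not specified in the text):

```python
import numpy as np

def dapg_update(theta, scores_pi, adv_pi, scores_demo, k,
                delta=0.05, lam0=0.1, lam1=0.95, damping=1e-4):
    """One DAPG iteration: augmented gradient (Eqs. 7-8), Fisher matrix
    (Eq. 6), and the normalized NPG ascent step (Eq. 5). `scores_pi` and
    `scores_demo` hold per-sample grad_theta log pi(a|s) vectors."""
    # Eq. 8: geometrically decaying weight on the demonstration samples.
    w = lam0 * (lam1 ** k) * np.max(adv_pi)
    # Eq. 7: task term over on-policy data plus weighted imitation term.
    g_aug = scores_pi.T @ adv_pi + scores_demo.sum(axis=0) * w
    # Eq. 6: Fisher matrix as the average outer product of the scores.
    F = scores_pi.T @ scores_pi / len(scores_pi) + damping * np.eye(len(theta))
    # Eq. 5: step normalized so the quadratic KL proxy equals delta.
    nat_grad = np.linalg.solve(F, g_aug)
    step = np.sqrt(delta / (g_aug @ nat_grad))
    return theta + step * nat_grad, w

# One illustrative update from random "rollout" statistics:
rng = np.random.default_rng(0)
theta, w = dapg_update(np.zeros(5),
                       scores_pi=rng.normal(size=(64, 5)),
                       adv_pi=rng.normal(size=64),
                       scores_demo=rng.normal(size=(10, 5)), k=0)
print(theta.shape, w > 0)
```

Note that for $k \gg 0$ the weight $w$ vanishes, so the update reduces to plain NPG on the task objective.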
+
+§ D. REPRESENTATION LEARNING
+
DAPG has thus far only been demonstrated to be effective with access to low-level state information, which is not readily available in the real world. DAPG is based on NPG, which works well but struggles with high input dimensionality and hence cannot be used directly with images acquired from onboard cameras. Representation learning [6] learns representations of input data, typically by transforming it or extracting features from it, that make it easier to perform the task (in our case, such a representation can be used in place of the exact state of the environment). Let $I \in {\mathbb{R}}^{n}$ represent the high-dimensional input image; then
+
+$$
+h = {f}_{\rho }\left( I\right) \tag{9}
+$$
+
where $f$ represents the feature extractor, $\rho$ is the distribution over which $f$ is valid, and $h \in {\mathbb{R}}^{d}$ with $d \ll n$ is the compact, low-dimensional representation of $I$ . In the next section, we outline our method that scales DAPG to solve tasks directly from visual information.
+
+§ IV. RRL: RESNET AS REPRESENTATION FOR RL
+
In an ideal RL setting, the agent interacts with the environment based on the current state, and in return the environment outputs the next state and the reward obtained. This works well in a simulated environment, but in a real-world scenario we do not have access to this low-level state information. Instead, we get information from cameras $\left( {I}_{t}\right)$ and other onboard sensors like joint encoders $\left( {\delta }_{t}\right)$ . To overcome the challenges associated with learning from high-dimensional inputs, we use representations that project information onto a lower-dimensional manifold. These representations can be (a) learned in tandem with the RL objective. However, this leads to a non-stationarity issue, where the distribution induced by the current policy ${\pi }_{i}$ may lie outside the expressive power of $f$ , ${\pi }_{i} ⊄ {\rho }_{i}$ , at any step $i$ during training. Alternatively, they can be (b) decoupled from RL by pre-training $f$ . For this to work effectively, the feature extractor must be trained on a sufficiently wide distribution such that it covers any distribution that the policy might induce during training, ${\pi }_{i} \subset \rho \;\forall i$ . Getting hold of such task-specific training data beforehand becomes increasingly difficult as the complexity and diversity of the tasks increase. To this end, we propose to use a fixed feature extractor (Section V-B) that is pre-trained on a wide variety of real-world scenarios, such as the ImageNet dataset [highlighted in purple in Figure 1]. We experimentally demonstrate that the diversity (Section V-C) of such a feature extractor allows us to use it across all the tasks we considered. The use of pre-trained representations lends stability to RRL: since our representations are frozen, they do not face the non-stationarity issues encountered when learning the policy and representation in tandem.
+
The features $\left( {h}_{t}\right)$ obtained from the above feature extractor are appended with the information obtained from the internal joint encoders of the Adroit hand $\left( {\delta }_{t}\right)$ . We empirically show that $\left\lbrack {{h}_{t},{\delta }_{t}}\right\rbrack$ can be used as a substitute for the exact state $\left( {s}_{t}\right)$ as input to the policy. In principle, any RL algorithm can be deployed to learn the policy; in RRL we build upon Natural Policy Gradients [3] owing to its effectiveness in solving complex high-dimensional tasks [17]. We present our full algorithm in Algorithm 1.
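A minimal sketch of this input assembly (the 512-d feature size matches Section V-B; the joint-encoder dimension is illustrative):

```python
import numpy as np

def policy_input(h_t, delta_t):
    """RRL's substitute for the true state s_t: frozen visual features h_t
    concatenated with proprioceptive joint-encoder readings delta_t."""
    return np.concatenate([h_t, delta_t])

h_t = np.zeros(512)     # frozen Resnet features of the camera frame
delta_t = np.zeros(24)  # joint-encoder readings (dimension illustrative)
print(policy_input(h_t, delta_t).shape)  # (536,)
```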
+
Algorithm 1 RRL

Input: 25 Human Demonstrations ${\rho }_{D}$

Initialize using Behavior Cloning [Eq. 4].

repeat

 for $i = 1$ to $n$ do

  for $t = 1$ to horizon do

   Take action ${a}_{t} = {\pi }_{\theta }\left( \left\lbrack {\operatorname{Encoder}\left( {I}_{t}\right) ,{\delta }_{t}}\right\rbrack \right)$ and receive ${I}_{t + 1},{\delta }_{t + 1},{r}_{t + 1}$ from the environment.

  end for

 end for

 Compute ${\nabla }_{\theta }\log {\pi }_{\theta }\left( {{a}_{t} \mid {s}_{t}}\right)$ for each $\left( {s,a}\right) \in {\rho }_{\pi },{\rho }_{D}$

 Compute ${A}^{\pi }\left( {s,a}\right)$ for each $\left( {s,a}\right) \in {\rho }_{\pi }$ and $w\left( {s,a}\right)$ for each $\left( {s,a}\right) \in {\rho }_{D}$ according to Equations 2 and 8

 Calculate the policy gradient according to Eq. 7

 Compute the Fisher matrix according to Eq. 6

 Take the gradient ascent step according to Eq. 5

 Update the parameters of the value function to approximate Eq. 2: ${V}_{k}^{\pi }\left( {s}_{t}^{\left( n\right) }\right) \approx \mathop{\sum }\limits_{{{t}^{\prime } = t}}^{T}{\gamma }^{{t}^{\prime } - t}{r}_{{t}^{\prime }}^{\left( n\right) }$

until satisfactory performance
+
+§ V. EXPERIMENTAL EVALUATIONS
+
Our experimental evaluations aim to address the following questions: (1) Do pre-trained representations acquired via a large real-world image dataset allow RRL to learn complex tasks directly from proprioceptive signals (camera inputs and joint encoders)? (2) How do RRL's performance and efficiency compare against other state-of-the-art methods? (3) How do various representational choices influence the generality and versatility of the resulting behaviors? (4) What are the effects of various design decisions on RRL? (5) Are commonly used benchmarks for studying image-based continuous-control methods effective?
+
+§ A. TASKS
+
The applicability of prior proprioception-based RL methods $\left\lbrack {{49},{48},{47}}\right\rbrack$ has been limited to simple low-dimensional tasks like Cartpole, Cheetah, Reacher, Finger spin, Walker, Ball in cup, etc. Moving beyond these simple domains, we investigate RRL on the Adroit manipulation suite [17], which consists of contact-rich, high-dimensional dexterous manipulation tasks (Figure 3) that have been found to be challenging even for state $\left( {s}_{t}\right)$ -based methods. Furthermore, unlike prior task sets, which are fundamentally planar and devoid of depth perspective, the Adroit manipulation suite consists of visually rich, physically realistic tasks that demand representations untangling complex depth information.
+
+§ B. IMPLEMENTATION DETAILS
+
We use a standard Resnet-34 model as RRL's feature extractor. The model is pre-trained on the ImageNet classification task, which comprises 1.28 million images across 1000 classes. The last layer of the model is removed to recover a 512-dimensional feature space, and all parameters are frozen throughout the training of the RL agent. During inference, the observations obtained from the environment are of size ${256} \times {256}$ ; a center crop of size ${224} \times {224}$ is fed into the model. We also evaluate our model using different Resnet sizes (Figure 7). All the hyperparameters used for training are summarized in the Appendix (Table II). We report average performance over three random seeds for all experiments.
+
+
+Fig. 3. ADROIT manipulation suite consisting of complex dexterous manipulation tasks involving object relocation, in hand manipulation (pen repositioning), tool use (hammering a nail), and interacting with human centric environments (opening a door).
+
+§ C. RESULTS
+
In Figure 4, we contrast the performance of RRL against state-of-the-art baselines. We begin by observing that NPG [3] struggles to solve the suite even with full state information, which establishes the difficulty of our task suite. DAPG(State) [17] uses privileged state information and a few demonstrations from the environment to solve the tasks and serves as the best-case oracle. RRL demonstrates good performance on all the tasks, relocate being the hardest, and often approaches the performance of our strongest oracle, DAPG(State).
+
A competing baseline, FERM [58], is quite unstable on these tasks. It starts strong on the hammer and door tasks but saturates in performance, makes slow progress on pen, and completely fails on relocate. In Figure 5 [Left] we compare the computational footprint of FERM (along with other methods, discussed in later sections) with RRL. We note that our method not only outperforms FERM but is also approximately five times more compute-efficient.
+
${}^{1}$ Reporting the best performance amongst the over 30 configurations per task that we tried in consultation with the FERM authors.
+
+
+Fig. 4. Performance on ADROIT dexterous manipulation suite [17]: State of the art policy gradient method NPG(State) [29] struggles to solve the suite even with privileged low level state information, establishing the difficulty of the suite. Amongst demonstration accelerated methods, RRL(Ours) demonstrates stable performance and approaches performance of DAPG(State) [17] (upper bound), a demonstration accelerated method using privileged state information. A competing baseline FERM [58] makes good initial, but unstable, progress in a few tasks and often saturates in performance before exhausting our computational budget (40 hours/ task/ seed).
+
+
Fig. 5. LEFT: Comparison of the computational cost of RRL with Resnet34, i.e., RRL(Ours), against FERM (the strongest baseline), RRL with Resnet18, RRL with Resnet50, RRL(VAE), RRL with ShuffleNet, RRL with MobileNet, and RRL with a Very Deep VAE baseline. CENTER, RIGHT: Influence of various environment distractions (lighting conditions, object color) on RRL(Ours) and FERM. RRL(Ours) consistently performs better than FERM in all the variations we considered.
+
+§ D. EFFECTS OF VISUAL DISTRACTORS
+
In Figure 5 [Center, Right] we probe the robustness of the final policies by injecting visual distractors into the environment during inference. We note that the resilience of the Resnet features induces robustness in RRL's policies. On the other hand, the task-specific features learned by FERM are brittle, leading to larger degradation in performance. In addition to the improved sample and time complexity resulting from the use of pre-trained features, the resilience, robustness, and versatility of Resnet features lead to policies that are also robust to visual distractors and clutter in the scene. More details about the experimental setting are provided in Section VII-H in the Appendix.
+
+§ E. EFFECT OF REPRESENTATION
+
Is Resnet lucky? To investigate whether the architectural choice of Resnet is merely fortunate, in Figure 6 we test different models pre-trained on the ImageNet dataset as RRL's feature extractors: MobileNetV2 [44], ShuffleNet [27], and a state-of-the-art hierarchical VAE [60] [refer to Section VII-E in the Appendix for more details]. Little degradation in performance is observed with respect to the Resnet model. This highlights that it is not the particular architectural choices, but rather the dataset on which the models are pre-trained, that delivers generic features effective for RL agents.
+
Task-specific vs. task-agnostic representation: In Figure 7, we compare the performance of (a) learning task-specific representations (VAE) and (b) a generic representation trained on a very wide distribution (Resnet). We note that RRL using Resnet34 significantly outperforms a variant, RRL(VAE) (see Section VII-G in the Appendix for details), that learns features via commonly used variational inference techniques on a task-specific dataset [22, 23, 25, 28]. This indicates that pre-trained Resnet provides task-agnostic and superior features compared to methods that explicitly learn brittle (Section V-H) and task-specific features using additional samples from the environment. It is important to note that the latent dimensions of the Resnet34 and the VAE are kept the same (512) for a fair comparison; however, the model sizes are different, as one operates on a very wide distribution while the other operates on a much narrower task-specific dataset. Additionally, we summarize the compute cost of both methods, RRL(Ours) and RRL(VAE), in Figure 5 [Left]. We notice that even though RRL(VAE) is the cheapest, its performance is quite low (Figure 7). RRL(Ours) strikes a balance between compute cost and performance.
+
+
+Fig. 6. Effect of different types of Feature extractor pretrained on ImageNet dataset, highlighting that not just Resnet but any feature extractor pretrained on a sufficiently wide distribution of data remains effective.
+
+
+Fig. 7. Influence of representation: RRL(Ours), using resnet34 features, outperforms commonly used representation (RRL(VAE)) learning method VAE. Amongst different Resnet variations, Resnet34 strikes the balance between representation capacity and computational overhead. NPG(Resnet34) showcases the performance with Resnet34 features but without demonstration bootstrapping, indicating that only representational choices are not enough to solve the task suite.
+
§ F. EFFECTS OF PROPRIOCEPTION CHOICES AND SENSOR NOISE
+
+
Fig. 8. Influence of proprioceptive signals on RRL(Vision+sensors-Ours): RRL(Noise) demonstrates that RRL remains effective in the presence of noisy (2%) proprioception. RRL(Vision) demonstrates that RRL remains performant with (only) visual inputs as well.
+
While it is hard to envision a robot without proprioceptive joint sensing, the harsh conditions of the real world can lead to noisy sensing, or even sensor failures. In Figure 8, we subjected RRL to (a) signals with $2\%$ noise in the information received from the joint encoders, RRL(Noise), and (b) only visual inputs used as proprioceptive signals, RRL(Vision). In both cases, our method remained performant with little to no degradation in performance.
+
+§ G. ABLATIONS AND ANALYSIS OF DESIGN DECISIONS
+
+In our next set of experiments, we evaluate the effect of various design decisions on our method. In Figure 7, we study the effect of different Resnet features as our representation. Resnet34, though computationally more demanding (Figure 5) than Resnet18, delivers better performance owing to its improved representational capacity and feature expressivity. A further boost in capacity (Resnet50) degrades performance, likely due to the incorporation of less useful features and an increase in samples required to train the resulting larger policy network.
+
+
Fig. 9. LEFT: Influence of reward signals: RRL(Ours), using sparse rewards, performs comparably to a variation ${\mathrm{{RRL}}}_{\text{ dense }}$ using well-shaped dense rewards. RIGHT: Effect of policy size on the performance of RRL. We observe that it is quite stable across a wide range of policy sizes.
+
Reward design, especially for complex high-dimensional tasks, requires domain expertise. RRL replaces the need for well-shaped rewards by using a few demonstrations (to curb the exploration challenges in high-dimensional spaces) and sparse rewards (indicating task completion). This significantly lowers the domain expertise required for our method. In Figure 9-LEFT, we observe that RRL (using sparse rewards) delivers performance competitive with a variant of our method that uses well-shaped dense rewards, while remaining resilient to variation in policy network capacity (Figure 9-RIGHT).
+
§ H. RETHINKING BENCHMARKING FOR VISUAL RL
+
DMControl [31] is a widely used benchmark for proprioception-based RL methods such as RAD [49], SAC+AE [56], CURL [51], and DrQ [48]. While these methods perform well (Table I) on such simple DMControl tasks, their progress struggles to scale when met with tasks representative of real-world complexity, such as the realistic Adroit manipulation benchmark (Figure 4).
+
For example, we demonstrate in Figure 4 that a representative SOTA method, FERM (which uses expert demos along with RAD), struggles to perform well on the Adroit manipulation benchmark. On the contrary, RRL, using Resnet features pre-trained on a real-world image dataset, delivers results comparable to state-based methods on the Adroit manipulation benchmark while struggling on DMControl (RRL+SAC: RRL using SAC and Resnet34 features; Table I). This highlights the large domain gap between the DMControl suite and the real world.
+
We further note that the pre-trained features learned by SOTA methods are not as widely applicable. We use a pre-trained RAD encoder (pre-trained on Cartpole) as a fixed feature extractor (Fixed RAD Encoder in Table I) and retrain the policy using these features for all environments. The performance degrades on all tasks except Cartpole. This highlights that the representations learned by RAD (even with various image augmentations) are task-specific and fail to generalize to other tasks with similar visuals. Furthermore, learning such task-specific representations is easier in simple scenes, but the difficulty grows drastically as the complexity of the tasks and scenes increases. To ensure that important problems are not overlooked, we emphasise the need for the community to move towards benchmarks representative of realistic real-world tasks.
+
| 500K Step Scores | RRL+SAC | RAD | Fixed RAD Encoder | CURL | SAC+AE | State SAC |
| --- | --- | --- | --- | --- | --- | --- |
| Finger, Spin | 422 ± 102 | 947 ± 101 | 789 ± 190 | 926 ± 45 | 884 ± 128 | 923 ± 211 |
| Cartpole, Swing | 357 ± 85 | 863 ± 9 | 875 ± 1 | 845 ± 45 | 735 ± 63 | 848 ± 15 |
| Reacher, Easy | 382 ± 299 | 955 ± 71 | 53 ± 44 | 929 ± 44 | 627 ± 58 | 923 ± 24 |
| Cheetah, Run | 154 ± 23 | 728 ± 71 | 203 ± 31 | 518 ± 28 | 550 ± 34 | 795 ± 30 |
| Walker, Walk | 148 ± 12 | 918 ± 16 | 182 ± 40 | 902 ± 43 | 847 ± 48 | 948 ± 54 |
| Cup, Catch | 447 ± 132 | 974 ± 12 | 719 ± 70 | 959 ± 27 | 794 ± 58 | 974 ± 33 |

| 100K Step Scores | RRL+SAC | RAD | Fixed RAD Encoder | CURL | SAC+AE | State SAC |
| --- | --- | --- | --- | --- | --- | --- |
| Finger, Spin | 135 ± 67 | 856 ± 73 | 655 ± 104 | 767 ± 56 | 740 ± 64 | 811 ± 46 |
| Cartpole, Swing | 192 ± 19 | 828 ± 27 | 840 ± 34 | 582 ± 146 | 311 ± 11 | 835 ± 22 |
| Reacher, Easy | 322 ± 285 | 826 ± 219 | 162 ± 40 | 538 ± 233 | 274 ± 14 | 746 ± 25 |
| Cheetah, Run | 72 ± 63 | 447 ± 88 | 188 ± 20 | 299 ± 48 | 267 ± 24 | 616 ± 18 |
| Walker, Walk | 63 ± 7 | 504 ± 191 | 106 ± 11 | 403 ± 24 | 394 ± 22 | 891 ± 82 |
| Cup, Catch | 261 ± 57 | 840 ± 179 | 533 ± 148 | 769 ± 43 | 391 ± 82 | 746 ± 91 |
+
+TABLE I
+
Results on the DMControl benchmark. RAD outperforms all baselines, whereas RRL performs worse on the 100K and 500K environment-step benchmarks, suggesting that it is quicker to learn task-specific representations on simple tasks; the Fixed RAD Encoder results highlight that the representations learned by RAD are narrow and task-specific.
+
+§ VI. STRENGTHS, LIMITATIONS & OPPORTUNITIES
+
This paper presents an intuitive idea bringing together advancements from the fields of representation learning, imitation learning, and reinforcement learning. We present a very simple method named RRL that leverages Resnet features as representations to learn complex behaviors directly from proprioceptive signals. The resulting algorithm approaches the performance of state-based methods on the complex ADROIT dexterous manipulation suite.
+
Strengths: The strength of our insight lies in its simplicity and applicability to almost any reinforcement or imitation learning algorithm that intends to learn directly from high-dimensional proprioceptive signals. We present RRL, an instantiation of this insight on top of an imitation + (on-policy) reinforcement learning method called DAPG, to showcase its strength. It presents yet another demonstration that the features learned by Resnet are quite general and broadly applicable. Resnet features trained over thousands of real-world images are more robust and resilient than features learned by methods that learn representations and policies in tandem using only samples from the task distribution. The use of such general but frozen representations in conjunction with RL pipelines additionally avoids the non-stationarity issues faced by competing methods that simultaneously optimize reinforcement and representation objectives, leading to more stable algorithms. Additionally, not having to train one's own feature extractor results in significant sample and compute gains; refer to Figure 5.
+
Limitations: While this work demonstrates the promise of using pre-trained features, it does not investigate the data mismatch problem that might exist. The real-world datasets used to train Resnet features come from human-centric environments. While we desire robots to operate in similar settings, there are still differences in their morphology and modes of operation. Additionally, Resnet (and similar models) acquires features from data primarily comprised of static scenes. In contrast, embodied agents require rich features of dynamic and interactive movements.
+
Opportunities: RRL uses a single pre-trained representation for solving all of these complex and very different tasks. Unlike the domains of vision and language, there is a nontrivial cost associated with data in robotics. The possibility of a standard shared representational space opens up avenues for leveraging data from various sources, building hardware-accelerated devices using feature compression, and low-latency, low-bandwidth information transmission.
\ No newline at end of file
diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/Zrp4wpa9lqh/Initial_manuscript_md/Initial_manuscript.md b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/Zrp4wpa9lqh/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..4d121f5195124c7ad271fa6c9b2b373383d7cfeb
--- /dev/null
+++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/Zrp4wpa9lqh/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,327 @@
+# Learning to Grasp the Ungraspable with Emergent Extrinsic Dexterity
+
+Wenxuan Zhou ${}^{1}$ and David Held ${}^{1}$
+
+
+
+Fig. 1: We study the task of "Occluded Grasping" with extrinsic dexterity. The goal of this task is to reach an occluded grasp configuration (indicated by a transparent gripper attached to the object in the top row). The figure shows the emergent behavior of the trained policy which uses the wall of the bin to rotate the object to reach a grasp.
+
Abstract-A robot can solve more complex manipulation tasks beyond the limitations of its body if it can utilize the external environment, such as pushing the object against the table or a vertical wall. These behaviors are known as "Extrinsic Dexterity." Previous work in extrinsic dexterity usually relies on hand-crafted primitives or careful assumptions about contacts. In this work, we explore the use of reinforcement learning (RL) for extrinsic dexterity with the task of "Occluded Grasping". The goal of the task is to grasp the object in configurations that are initially occluded; the robot must interact with the object and the extrinsic environment to move the object into a configuration from which these grasps can be achieved. To accomplish this task, we train a policy to co-optimize pre-grasp and grasping motions; this results in the emergent behavior of pushing the object against the wall in order to rotate and then grasp it. We demonstrate the generality of the learned policy across environment variations in simulation and evaluate it on a real robot with zero-shot sim2real transfer. Videos can be found at https://sites.google.com/view/grasp-ungraspable.
+
+## I. INTRODUCTION
+
Humans have dexterous multi-fingered hands; however, similarly dexterous robot hands are expensive and fragile. Instead, robots can achieve dexterous manipulation with a simple hand by leveraging the environment, known as "Extrinsic Dexterity" [1]. For example, a simple gripper can rotate an object in-hand by pushing it against the table [2], or lift an object by sliding it along a vertical surface [3]. By exploiting external resources such as contact surfaces or gravity, even simple grippers can perform skillful maneuvers that are typically studied with a multi-fingered dexterous hand. Different from the common practice of considering the robot and an object of interest in isolation, extrinsic dexterity takes a holistic view of the interactions among the robot, the object, and the external environment.
+
+Previous work in extrinsic dexterity has demonstrated a variety of tasks such as in-hand reorientation with a simple gripper, prehensile pushing or shared grasping [1], [2], [3]. However, the underlying approaches come with several limitations such as relying on hand-designed primitives, making assumptions about contact locations and contact modes, or requiring specific gripper design. Instead, we use reinforcement learning (RL) to remove these limitations. With reinforcement learning, the agent can learn a closed-loop policy of how the robot should interact with the object and the environment to solve the task. In addition, when trained with domain randomization, the policy can learn to be robust to different variations of physics. These properties of RL can enable extrinsic dexterity in a more general setting.
+
+We study "Occluded Grasping" as an example of a task that requires extrinsic dexterity. Occluded Grasping is defined with the goal of grasping an object in poses that are initially occluded. Consider, for example, a robot that needs to grasp a cereal box lying on its side on a table; the desired grasp is not reachable because it is partially occluded by the table (Figure 1). To achieve this grasp with a parallel gripper, the robot might rotate the object by pushing it against a vertical wall to expose the desired grasp. This task is in contrast with existing grasping tasks which mostly focus on reaching an unoccluded grasp in free space with a static or near-static scene [4], [5], [6]. Prior work has attempted to design pre-grasp motions of exposing occluded grasp poses with primitives or special gripper design [7]. In our work, the pre-grasp motion is an emergent behavior through a novel reward function that co-optimizes exposing the grasp pose and achieving the grasp pose. In addition, we frame the task as a goal-conditioned RL problem, in which the policy is conditioned on the selected grasp. During training, the policy learns to reach as many grasp poses as possible with an automatic curriculum [8]. During testing, given a set of grasps, the policy can select one of them as a goal to execute.
+
+In summary, we present a system for "Occluded Grasping" as an example of combining reinforcement learning and extrinsic dexterity. We provide a comprehensive evaluation of the system both in simulation and on a real Franka Emika Panda robot. We showcase the importance of each component and the generalization of the learned policy across environment variations in simulation and in the real world.
+
+---
+
+${}^{1}$ Robotics Institute, Carnegie Mellon University
+
+---
+
+## II. RELATED WORK
+
+## A. Extrinsic dexterity
+
+"Extrinsic dexterity" refers to a class of manipulation skills that enhance the intrinsic capability of a hand using external resources, including external contacts, gravity, or dynamic motions of the arm [1]. Previous work in extrinsic dexterity has demonstrated complex manipulation tasks with a simple gripper, including in-hand reorientation [1], [9], prehensile pushing [2], [10], and shared grasping [3]. In this work, we study a different task that further demonstrates the benefit of extrinsic dexterity. Extrinsic dexterity usually involves contact-rich behaviors, which pose difficulties in planning and control. Previous work has used hand-crafted trajectories [1], task-specific motion primitives [9], [3], or motion planning over contact mode switches [2], [10], [11], [12]. These methods impose restrictions on the contact modes between the finger and the object, which limit the motion and the design of the gripper. In this work, we take the alternative approach of using reinforcement learning to learn a closed-loop policy that considers both planning and control.
+
+## B. Reinforcement Learning for Manipulation
+
+Previous work that uses reinforcement learning for manipulation tasks treats the object and the robot in isolation without considering extrinsic dexterity [13], [14], [8]. In our work, we demonstrate that the agent can benefit from extrinsic dexterity when solving the occluded grasping task.
+
+## C. Grasping
+
+Grasping is an important task in robot manipulation and has been studied from various perspectives.
+
+Grasp generation: One area of study in grasping is generating stable grasp configurations [15], [16], [17], [4], [18], [5], [19]. Our system takes as input grasps produced by any such grasp generation method.
+
+Grasp execution: To execute a grasp following grasp generation, a motion planner is usually used to generate a collision-free path to the desired grasp configuration. If a set of desired grasps is available, integrated grasp and motion planning can be considered [20], [21], [6]. [22] uses imitation learning and reinforcement learning to finetune the trajectories from the planner. All of these works aim at achieving unoccluded grasp configurations in static or near-static scenes. Instead, our work focuses on the complementary direction of achieving occluded grasp configurations by interacting with the object of interest.
+
+Pre-grasp manipulation: To deal with occluded grasp configurations, prior work has studied pre-grasps as a preparatory stage [23], [24], [25], [7]. [7] is the most related to our work, but they use a specially designed end-effector to perform the pre-grasp motion and then use a second gripper to grasp the object. We demonstrate that the full grasping task can be solved with a single gripper without special requirements on the end-effector. These previous works typically separate pre-grasp motion and grasp execution into two stages and impose restrictions on the transition between the stages. In our work, we co-optimize pre-grasp and grasp execution within an episode without an explicit separation of stages. The pre-grasping behavior emerges through learning without restrictions on object or gripper motions.
+
+End-to-end grasping: Another line of work uses an end-to-end pipeline for grasping with reinforcement learning [26] or imitation learning [27]. The policy performs an arbitrary grasp of the object without the possibility of specifying a particular set of grasps. In addition, no emergent behavior of exposing occluded grasp poses has been shown in existing work.
+
+## III. TASK DEFINITION: OCCLUDED GRASPING
+
+Our work is designed to be used in a pipeline following a grasp pose generation method such as [4], [5], [19]. Given a rigid object, we assume a desired grasp $g$ as input to the system. A grasp configuration $g \in {SE}\left( 3\right)$ is defined as the desired 6D pose of the end-effector in the object frame $O$. The grasp is fixed with respect to the object and moves when the object moves. On the top row of Figure 1, an example of a desired grasp is shown as a transparent gripper attached to the object. The goal of our work is to learn grasp execution, i.e., to move the end-effector $E$ close to a given $g$ as measured by a pose difference metric $\Delta \left( {g, E}\right)$. In this paper, the task is defined to be successful if the position difference ${\Delta T}\left( {g, E}\right)$ and the orientation difference ${\Delta \theta }\left( {g, E}\right)$ are less than the pre-defined thresholds ${\varepsilon }_{T}$ and ${\varepsilon }_{\theta }$, respectively, at the end of an episode. After successfully reaching the desired grasp pose, the gripper is closed to complete the grasp. We define an "Occluded Grasping" task to be the case where the grasp $g$ is initially occluded (not in free space). When a set of grasps $G = \left\{ {g}_{i}\right\}$ is available, we may select a grasp ${g}_{i}$ from the set $G$ to execute (Appendix VII).
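The pose difference metric and the success criterion above can be sketched as follows; this is a minimal illustration, assuming poses are given as a 3D position plus a unit quaternion, using a quaternion-based rotational distance and the 3 cm / 10 degree thresholds from our evaluation metric as defaults:

```python
import numpy as np

def pose_difference(g_pos, g_quat, e_pos, e_quat):
    """Translational and rotational difference between a target grasp g
    and the end-effector pose E (each a position + unit quaternion)."""
    delta_t = np.linalg.norm(g_pos - e_pos)            # Delta T(g, E)
    # Rotational distance: theta = 2 * arccos(|<q1, q2>|), in [0, pi].
    dot = np.clip(abs(np.dot(g_quat, e_quat)), 0.0, 1.0)
    delta_theta = 2.0 * np.arccos(dot)                 # Delta theta(g, E)
    return delta_t, delta_theta

def is_success(delta_t, delta_theta, eps_t=0.03, eps_theta=np.deg2rad(10)):
    """Success: both differences fall below their thresholds."""
    return delta_t < eps_t and delta_theta < eps_theta
```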
+
+## IV. LEARNING OCCLUDED GRASPING WITH REINFORCEMENT LEARNING
+
+We study the use of reinforcement learning (RL) to train a closed-loop policy for the occluded grasping task defined above. In this section, we first discuss important design choices of the system for a single target grasp, including the extrinsic environment and the design of the RL problem. We then discuss how to improve the generalization of the policy using Automatic Domain Randomization [8]. Training and evaluation procedures that process a set of grasps can be found in Appendix VII.
+
+## A. Extrinsic Environment
+
+To showcase the benefits of extrinsic dexterity from object-scene interaction in this task, we construct the scene with the object in a bin instead of on an open table (Figure 2). In Section V, we show that the emergent policy utilizes the wall of the bin to rotate the object. Without the wall, the policy is not able to find a strategy that successfully performs the task.
+
+## B. RL Problem Design
+
+We discuss the design of the RL problem in this section; more details can be found in Appendix I. We train a goal-conditioned policy $\pi \left( {{a}_{t} \mid {s}_{t}, g}\right)$ for this task, where the goal is a target grasp configuration $g$. The state ${s}_{t}$ includes the pose of the end-effector and the object pose. The action space of the policy is the delta pose of the end-effector ${\Delta E}$, which is sent to a low-level Operational Space Controller (OSC). The choice of OSC allows compliant movement for such a contact-rich task (see Appendix I for more discussion). The reward function is designed to co-optimize the pre-grasp motion as well as grasp execution:
+
+$$
+r = {\alpha D}\left( {g, E}\right) + \beta \mathop{\sum }\limits_{i}P\left( {m}_{i}\right) \tag{1}
+$$
+
+
+
+Fig. 2: $E$ denotes the $6\mathrm{D}$ pose of the end-effector. $g$ denotes the target grasp defined in the object frame. Marker locations ${m}_{i}$ in green on the target grasp are used to calculate the occlusion penalty.
+
+where
+
+$$
+D\left( {g, E}\right) = {\alpha }_{1}{\Delta T}\left( {g, E}\right) + {\alpha }_{2}{\Delta \theta }\left( {g, E}\right) \tag{2}
+$$
+
+${\alpha }_{1},{\alpha }_{2}$ and $\beta$ are the weights of the reward terms. The first term of Equation 1, $D\left( {g, E}\right)$, is the pose difference between the target grasp and the current end-effector pose. This term is expanded in Equation 2 into the translational and rotational distances described in Section III. The second term of Equation 1 is the target grasp occlusion penalty, which penalizes the gripper for being occluded by the table. We place several marker points on the target gripper (Figure 2), denoted ${m}_{i}$, and compare the height of each marker with the table top. If a marker is below the table top, the height difference is used as the penalty. The occlusion penalty effectively avoids the local optimum in which the gripper moves close to the (occluded) target grasp without trying to move the object.
+
+To summarize, the first term of Equation 1 optimizes for successful grasp execution, and the second term encourages pre-grasp motions that move the object such that the grasp $g$ becomes unoccluded. An important difference from previous work is that the pre-grasp and grasp execution components are optimized together instead of being separated into two stages. We do not include any reward terms that are explicitly related to extrinsic dexterity; in our system, the use of extrinsic dexterity is an emergent behavior of policy optimization given our objective and environment setup.
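Equations 1 and 2 can be made concrete with a short sketch. The weights and the marker representation below are illustrative placeholders rather than the values used in our experiments, and both terms are negated here so that the reward increases as the pose difference and the occlusion shrink:

```python
import numpy as np

def grasp_reward(delta_t, delta_theta, marker_heights, table_height,
                 alpha1=1.0, alpha2=1.0, beta=1.0):
    """Sketch of Eq. 1: a pose-difference term plus an occlusion penalty
    over marker points on the target gripper. Weights are placeholders."""
    # Pose-difference term D(g, E) of Eq. 2; smaller is better.
    d = alpha1 * delta_t + alpha2 * delta_theta
    # Occlusion penalty: each marker m_i below the table top contributes
    # its height difference P(m_i); markers above the table contribute 0.
    penalty = sum(max(0.0, table_height - h) for h in marker_heights)
    return -(d + beta * penalty)
```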
+
+## C. Policy Generalization
+
+One benefit of using RL is that it produces a closed-loop policy instead of an open-loop trajectory. A closed-loop policy can ideally generalize to a wider range of state distributions, which implies better performance over variations of environment properties such as object size, density, and friction coefficient. Generalization can be improved further by training with domain randomization over these environment variations, which also benefits sim-to-real transfer. We use Automatic Domain Randomization (ADR) [8] to improve the generalization of the policy. More implementation details can be found in Appendix I.
+
+
+
+Fig. 3: Left: Ablations on the reward function and the walls. Right: Evaluation on the generalization of the policies by sampling 100 environments.
+
+## V. EXPERIMENTS
+
+## A. Training Curves and Ablations
+
+Details of the experiment setup can be found in Appendix III. In this section, we train the policies with a single desired grasp in the default environment without randomization of the physical parameters. From the training curve shown in Figure 3a, the policy trained with the complete system reaches a success rate of 1 before 4000 episodes, which corresponds to 160000 environment steps. We performed an ablation analysis on the design choices to determine which components are the most important to the success of the system. First, we remove the wall of the bin to evaluate the importance of using the wall for extrinsic dexterity. As shown in Figure 3a, the resulting policy has a $0\%$ success rate and pushes the object off the table. Second, we performed an ablation on the reward function. When we remove the grasp pose occlusion penalty (the second term of Equation 1), the policy is more likely to get stuck at a local optimum of only trying to match the position and orientation of the gripper, and thus the average success rate across random seeds becomes lower. An alternative is to use a $\{ - 1,0\}$ sparse reward based on the success criteria defined in Section III instead of the reward defined in Equation 1. With a sparse reward, the policy learns much more slowly, because the sparse reward makes exploration much more difficult. In addition, ablations on the choice of controller can be found in Appendix V. We also include results for multi-grasp training and multi-grasp selection in Appendix VII.
+
+## B. Emergent Behaviors
+
+Figure 1 shows a typical strategy of the successful policies. The strategy involves multiple stages of contact switches. The gripper first moves close to the object and makes contact on the side of the object with the left finger. It then pushes the object against the wall to rotate it. During this stage, the gripper maintains a fixed or rolling contact with the object, while the object typically slides against the wall and the ground of the bin at some of its corners. After the object has rotated far enough that the right fingertip is below it, the left finger slides on the object or simply leaves it, letting the object drop onto the right finger. Once the object rests on the right finger, the gripper matches the desired pose more precisely. At this point, the policy has executed the grasp successfully and is ready to close the gripper. We include more visualizations of emergent behaviors in Appendix IV, including another type of successful strategy, local-optimum behavior, and multi-grasp behaviors. Videos can be found on the project website.
+
+## C. Policy Generalization
+
+In this section, we analyze the performance of the policy across environment variations. Robustness to environment variations may come both from the policy being closed-loop and from the randomization of the physical parameters during training. Thus, we evaluate open-loop trajectories (Open Loop), policies trained in a fixed environment (Fixed Env), and policies trained with ADR (With ADR). The open-loop trajectories are obtained by rolling out the Fixed Env policies in the default environment. We also turn off the randomization of the initial gripper pose for Open Loop; otherwise, the success rate is too low to compare with, even in the default environment. We sample 100 environments from the training range of the ADR policies (Appendix II) and plot the percentage of environments that are above a certain performance level (Figure 3b). The closed-loop policies are much better than the open-loop trajectories across environment variations. The policy trained in a fixed environment generalizes to a wide range of variations, and ADR improves generalization even further. We also modify the important physical parameters one at a time to understand the sensitivity to these parameters in Appendix VI.
+
+## D. Real-robot experiment
+
+To further evaluate the generalization of the policies and demonstrate the feasibility of the proposed system, we execute the policies on the real robot with zero-shot sim2real transfer over the 6 test cases shown in Figure 4. There are four box-shaped objects with different sizes, densities, and surface friction. Box-1 has the same size and density as the default object trained in simulation. Box-2 is larger than the training range in the y-direction. Box-3 is larger than the training range in the z-direction. The surface friction differs considerably across the boxes; for example, Box-3 has tape on its surface, which has much higher friction than the others (as can be seen in the videos on the project website). However, we do not have access to the true friction coefficients of the objects to compare with the values in simulation. In addition, we evaluate Box-1 with additional weight by putting four or eight erasers inside the box. Note that the erasers move inside the box during execution, which is not modeled in simulation. We evaluate two types of single-grasp policies trained in simulation: one policy is trained with Automatic Domain Randomization as described in Section IV-C; the other is trained on a fixed default environment without domain randomization.
+
+
+
+Fig. 4: Test cases for real robot experiments.
+
+TABLE I: Real robot evaluations.
+
+| Object-ID | Size (cm) | Weight (g) | Success w/ ADR | Success w/o ADR |
+| --- | --- | --- | --- | --- |
+| Box-1 | (15.0, 20.0, 5.0) | 128 | 10/10 | 10/10 |
+| Box-1 + 4 erasers | (15.0, 20.0, 5.0) | 237 | 8/10 | 7/10 |
+| Box-1 + 8 erasers | (15.0, 20.0, 5.0) | 345 | 6/10 | 4/10 |
+| Box-2 | (15.4, 29.2, 5.8) | 130 | 8/10 | 8/10 |
+| Box-3 | (15.3, 22.2, 7.4) | 113 | 10/10 | 4/10 |
+| Box-4 | (15.3, 22.2, 7.4) | 50 | 7/10 | 0/10 |
+| Average | | | 82% | 55% |
+
+We evaluate 10 episodes for each test case and summarize the results in Table I. Videos of the real robot experiments can be found on the project website. Overall, the policy with ADR achieves a success rate of ${82}\%$, while the policy without ADR achieves ${55}\%$. ADR effectively improves performance over a wider range of object variations. Note that both policies are evaluated on out-of-distribution objects: Box-1 with 8 erasers, Box-3, and Box-4 are outside the training distribution of ADR (see Appendix II), and all of the test cases except the first one (Box-1) are out of distribution for the policy without ADR. This demonstrates the robustness of the closed-loop policies of the proposed pipeline on such a dynamic manipulation task.
+
+## VI. CONCLUSION
+
+We study the "Occluded Grasping" task of reaching a desired grasp configuration that is initially occluded. With a parallel gripper, the robot has to use extrinsic dexterity to solve this task. We present a system that learns a closed-loop policy for this task with reinforcement learning. In the experiments, we demonstrate that the wall, the choice of controller, and the design of the reward function are all essential components. The policy can generalize across a wide range of environment variations and can be executed on the real robot. One potential extension of our work is to train the policy with a wide variety of object shapes which may require image-based policies. Also, the pipeline can potentially be applied to other extrinsic dexterity tasks.
+
+---
+
+https://sites.google.com/view/grasp-ungraspable
+
+---
+
+## REFERENCES
+
+[1] N. C. Dafle, A. Rodriguez, R. Paolini, B. Tang, S. S. Srinivasa, M. Erdmann, M. T. Mason, I. Lundberg, H. Staab, and T. Fuhlbrigge, "Extrinsic dexterity: In-hand manipulation with external forces," in 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014, pp. 1578-1585.
+
+[2] N. Chavan-Dafle and A. Rodriguez, "Sampling-based planning of in-hand manipulation with external pushes," 2017.
+
+[3] Y. Hou, Z. Jia, and M. Mason, "Manipulation with shared grasping," in Robotics: Science and Systems, 2020.
+
+[4] A. Mousavian, C. Eppner, and D. Fox, "6-dof graspnet: Variational grasp generation for object manipulation," in International Conference on Computer Vision (ICCV), 2019.
+
+[5] A. Murali, A. Mousavian, C. Eppner, C. Paxton, and D. Fox, "6-dof grasping for target-driven object manipulation in clutter," 2020.
+
+[6] L. Wang, Y. Xiang, and D. Fox, "Manipulation trajectory optimization with online grasp synthesis and selection," in Robotics: Science and Systems (RSS), 2020.
+
+[7] Z. Sun, K. Yuan, W. Hu, C. Yang, and Z. Li, "Learning pregrasp manipulation of objects from ungraspable poses," 2020.
+
+[8] OpenAI, I. Akkaya, M. Andrychowicz, M. Chociej, M. Litwin, B. McGrew, A. Petron, A. Paino, M. Plappert, G. Powell, R. Ribas, J. Schneider, N. Tezak, J. Tworek, P. Welinder, L. Weng, Q. Yuan, W. Zaremba, and L. Zhang, "Solving rubik's cube with a robot hand," 2019.
+
+[9] Y. Hou, Z. Jia, and M. T. Mason, "Fast planning for 3d any-pose-reorienting using pivoting," in 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018, pp. 1631-1638.
+
+[10] N. Chavan-Dafle, R. Holladay, and A. Rodriguez, "In-hand manipulation via motion cones," 2019.
+
+[11] X. Cheng, E. Huang, Y. Hou, and M. T. Mason, "Contact mode guided sampling-based planning for quasistatic dexterous manipulation in 2d," in 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021, pp. 6520-6526.
+
+[12] ——, “Contact mode guided motion planning for quasidynamic dexterous manipulation in 3d," arXiv preprint arXiv:2105.14431, 2021.
+
+[13] S. Levine, C. Finn, T. Darrell, and P. Abbeel, "End-to-end training of deep visuomotor policies," 2016.
+
+[14] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel, "Domain randomization for transferring deep neural networks from simulation to the real world," 2017.
+
+[15] K. B. Shimoga, "Robot grasp synthesis algorithms: A survey," The International Journal of Robotics Research, vol. 15, no. 3, pp. 230- 266, 1996.
+
+[16] V.-D. Nguyen, "Constructing force-closure grasps," The International Journal of Robotics Research, vol. 7, no. 3, pp. 3-16, 1988.
+
+[17] L. Pinto and A. Gupta, "Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours," in 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016, pp. 3406-3413.
+
+[18] J. Bohg, A. Morales, T. Asfour, and D. Kragic, "Data-driven grasp synthesis-a survey," IEEE Transactions on Robotics, vol. 30, no. 2, pp. 289-309, 2013.
+
+[19] A. Murali, W. Liu, K. Marino, S. Chernova, and A. Gupta, "Same object, different grasps: Data and semantic knowledge for task-oriented grasping," 2020.
+
+[20] N. Vahrenkamp, M. Do, T. Asfour, and R. Dillmann, "Integrated grasp and motion planning," in 2010 IEEE International Conference on Robotics and Automation, 2010, pp. 2883-2888.
+
+[21] J. Fontanals, B.-A. Dang-Vu, O. Porges, J. Rosell, and M. A. Roa, "Integrated grasp and motion planning using independent contact regions," in 2014 IEEE-RAS International Conference on Humanoid Robots, 2014, pp. 887-893.
+
+[22] L. Wang, Y. Xiang, W. Yang, A. Mousavian, and D. Fox, "Goal-auxiliary actor-critic for 6d robotic grasping with point clouds," 2021.
+
+[23] L. Y. Chang, S. S. Srinivasa, and N. S. Pollard, "Planning pre-grasp manipulation for transport tasks," in 2010 IEEE International Conference on Robotics and Automation. IEEE, 2010, pp. 2697- 2704.
+
+[24] J. King, M. Klingensmith, C. Dellin, M. Dogar, P. Velagapudi, N. Pollard, and S. Srinivasa, "Pregrasp manipulation as trajectory optimization," in Proceedings of Robotics: Science and Systems, Berlin, Germany, June 2013.
+
+[25] K. Hang, A. S. Morgan, and A. M. Dollar, "Pre-grasp sliding manipulation of thin objects using soft, compliant, or underactuated hands," IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 662-669, 2019.
+
+[26] D. Kalashnikov, A. Irpan, P. Pastor, J. Ibarz, A. Herzog, E. Jang, D. Quillen, E. Holly, M. Kalakrishnan, V. Vanhoucke, and S. Levine, "Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation," 2018.
+
+[27] S. Song, A. Zeng, J. Lee, and T. Funkhouser, "Grasping in the wild: Learning 6dof closed-loop grasping from low-cost demonstrations," Robotics and Automation Letters, 2020.
+
+[28] O. Khatib, "A unified approach for motion and force control of robot manipulators: The operational space formulation," IEEE Journal on Robotics and Automation, vol. 3, no. 1, pp. 43-53, 1987.
+
+[29] R. Martín-Martín, M. A. Lee, R. Gardner, S. Savarese, J. Bohg, and A. Garg, "Variable impedance control in end-effector space: An action space for reinforcement learning in contact-rich tasks," in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019, pp. 1010-1017.
+
+[30] Y. Zhu, J. Wong, A. Mandlekar, and R. Martín-Martín, "robosuite: A modular simulation framework and benchmark for robot learning," arXiv preprint arXiv:2009.12293, 2020.
+
+[31] E. Todorov, T. Erez, and Y. Tassa, "Mujoco: A physics engine for model-based control," in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2012, pp. 5026-5033.
+
+[32] K. Zhang, M. Sharma, J. Liang, and O. Kroemer, "A modular robotic arm control stack for research: Franka-interface and frankapy," arXiv preprint arXiv:2011.02398, 2020.
+
+[33] S. Rusinkiewicz and M. Levoy, "Efficient variants of the icp algorithm," in Proceedings third international conference on 3-D digital imaging and modeling. IEEE, 2001, pp. 145-152.
+
+[34] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor," in International conference on machine learning. PMLR, 2018, pp. 1861-1870.
+
+[35] D. Ghosh, A. Singh, A. Rajeswaran, V. Kumar, and S. Levine, "Divide-and-conquer reinforcement learning," arXiv preprint arXiv:1711.09874, 2017.
+
+[36] T. Yu, S. Kumar, A. Gupta, S. Levine, K. Hausman, and C. Finn, "Gradient surgery for multi-task learning," arXiv preprint arXiv:2001.06782, 2020.
+
+## Appendix I MORE DETAILS OF RL PROBLEM DESIGN
+
+Observations: We train a goal-conditioned policy $\pi \left( {{a}_{t} \mid {s}_{t},\eta }\right)$ for this task, where the goal $\eta$ is a target grasp configuration $g$. Note that the policy only takes one grasp as input; we discuss how to deal with a set of grasps in Appendix VII. The state ${s}_{t}$ includes the pose of the end-effector in the world frame ${}^{W}E$ and the object pose in the world frame ${}^{W}O$. We also include the pose of the end-effector in the object frame ${}^{O}E = {\left( {}^{W}O\right) }^{-1}\left( {{}^{W}E}\right)$ because we found that it sometimes speeds up learning. Each pose is represented as a 3D translation vector and a 4D quaternion representation of the rotation. In summary, the input to the policy is $\left( {g,{}^{W}E,{}^{W}O,{}^{O}E}\right)$, which has a dimension of 28 in total.
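The observation assembly, including the frame transform ${}^{O}E = ({}^{W}O)^{-1}({}^{W}E)$, can be sketched as follows. This is a minimal illustration with quaternions in (w, x, y, z) order; the helper names are ours, not part of the system:

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    """Conjugate; equals the inverse for a unit quaternion."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def rotate(q, v):
    """Rotate vector v by unit quaternion q."""
    qv = np.concatenate([[0.0], v])
    return quat_mul(quat_mul(q, qv), quat_conj(q))[1:]

def build_observation(g, wE, wO):
    """Assemble the 28-D policy input (g, ^W E, ^W O, ^O E);
    each pose is a 3-D position plus a 4-D unit quaternion."""
    o_pos, o_quat = wO
    e_pos, e_quat = wE
    inv_q = quat_conj(o_quat)                # inverse of the object rotation
    oe_pos = rotate(inv_q, e_pos - o_pos)    # E's position in the object frame
    oe_quat = quat_mul(inv_q, e_quat)        # E's orientation in the object frame
    return np.concatenate([g, e_pos, e_quat, o_pos, o_quat, oe_pos, oe_quat])
```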
+
+Actions: An outline of the policy execution pipeline is shown in Figure 5. The action space of the policy is the delta pose of the end-effector ${\Delta E}$ in its local frame, represented by a translation vector $p \in {\mathbb{R}}^{3}$ and a 3D rotation vector $q \in {\mathbb{R}}^{3}$ in axis-angle representation. Thus, the dimension of the action space is 6. ${\Delta E}$ and the current gripper pose $E$ form a desired pose ${E}_{d}$ at timestep $t$, which is sent to a low-level Operational Space Controller, discussed below.
+
+If the joint configuration corresponding to the desired pose would reach the joint limits, we overwrite the policy action and send the desired pose of the previous timestep to the low-level controller. In detail, we use the Jacobian $J$ to estimate the joint configuration of the desired pose:
+
+$$
+{\theta }_{\text{joints }}^{t + 1} = {\theta }_{\text{joints }}^{t} + {J}^{-1} \cdot {\Delta E} \tag{3}
+$$
+
+
+
+Fig. 5: Outline of policy execution: Given the goal and the observation, the policy outputs a delta movement of the end-effector. If the desired pose is within the joint limit of the robot, it will be sent to the lower level controller.
+
+where ${\theta }_{\text{joints }}$ are the joint angles. If any joint in ${\theta }_{\text{joints }}^{t + 1}$ is close to a limit, the low-level controller uses the previous desired pose instead.
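A sketch of this joint-limit check, following Equation 3. We use a Jacobian pseudoinverse (the Panda arm has 7 joints, so $J$ is not square) and a safety margin introduced purely for illustration:

```python
import numpy as np

def guard_joint_limits(theta, J, delta_e, limits_low, limits_high, margin=0.05):
    """Estimate the next joint configuration via the Jacobian (Eq. 3) and
    decide whether the commanded delta pose is safe to send. Returns True
    if the predicted joints stay clear of the limits by `margin` (rad)."""
    theta_next = theta + np.linalg.pinv(J) @ delta_e   # pseudoinverse of J
    return bool(np.all(theta_next > limits_low + margin) and
                np.all(theta_next < limits_high - margin))
```

If the check fails, the previous desired pose is sent to the controller instead, as described above.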
+
+Low-level controller: We use Operational Space Control (OSC) as the low-level controller to achieve the desired pose [28]. Given a desired pose of the end-effector, OSC first calculates the corresponding force and torque at the end-effector to minimize the pose error according to a PD controller with gains ${K}_{p}$ and ${K}_{d}$. Then, the desired force and torque of the end-effector are converted into desired joint torques according to the model of the robot. OSC operates at a higher frequency (100 Hz) than the policy $\pi$ (2 Hz).
+
+This choice of controller is very important because we expect the agent to use extrinsic dexterity to solve the task, which involves contacts among the gripper, the object, and the bin. There are two benefits of OSC in contact-rich manipulation. First, being compliant in end-effector space allows safe execution of the motions without smashing the gripper into the objects or the bin. Limiting the delta pose and selecting proper gains ${K}_{p},{K}_{d}$ bounds the force and torque output of the end-effector. If we instead used a controller that is compliant in joint configuration space, we would not have direct control over the maximum force the end-effector might exert on the object and the bin. Second, as shown in [29], using OSC as the low-level controller can speed up RL training and improve sim2real transfer for contact-rich manipulation.
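A simplified view of one OSC step: a PD law on the 6D pose error produces a desired end-effector wrench, which the Jacobian transpose maps to joint torques. This sketch omits the task-space inertia, Coriolis, and gravity compensation terms of a full OSC [28], and the gains are placeholders:

```python
import numpy as np

def osc_torques(pose_err, vel, J, kp=150.0, kd=None):
    """One simplified OSC step: PD on the 6-D pose error gives a desired
    wrench; J^T maps it to joint torques. Gains are illustrative only."""
    if kd is None:
        kd = 2.0 * np.sqrt(kp)            # critically damped by default
    wrench = kp * pose_err - kd * vel     # desired end-effector force/torque
    return J.T @ wrench                   # joint torques
```

In our system, this loop runs at 100 Hz while the policy issues a new desired pose at 2 Hz.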
+
+## Appendix II DETAILS OF AUTOMATIC DOMAIN RANDOMIZATION
+
+As discussed in Section IV-C, we use Automatic Domain Randomization [8] to improve policy generalization across environment variations. In ADR, the policy is first trained in an environment with very little randomization, and the variations are then gradually expanded based on evaluation performance. For a set of environment parameters ${\lambda }_{i}$, each ${\lambda }_{i}$ is sampled from a uniform distribution ${\lambda }_{i} \sim U\left( {{\phi }_{i}^{L},{\phi }_{i}^{H}}\right)$ at the beginning of each episode. During training, the policy is evaluated at the boundary values ${\lambda }_{i} = {\phi }_{i}^{L}$ or ${\lambda }_{i} = {\phi }_{i}^{H}$. If the performance is higher than a threshold, the boundary value is expanded by an increment $\Delta$. For example, if the performance at ${\lambda }_{i} = {\phi }_{i}^{H}$ is higher than the threshold, the training distribution becomes ${\lambda }_{i} \sim U\left( {{\phi }_{i}^{L},{\phi }_{i}^{H} + \Delta }\right)$ in the next iteration. Compared to directly training the policy with the full range of variations, Automatic Domain Randomization reduces the need to manually tune a suitable range for each environment parameter.
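The per-parameter expansion logic of ADR can be sketched as follows; the success threshold of 0.8 is an illustrative placeholder, not the value used in our experiments:

```python
import random

class ADRParam:
    """Minimal sketch of Automatic Domain Randomization for a single
    environment parameter: sample uniformly over [low, high], evaluate
    at a boundary, and expand that boundary when performance clears a
    threshold."""

    def __init__(self, init_value, delta, threshold=0.8):
        self.low = self.high = init_value   # start with no randomization
        self.delta = delta                  # expansion increment
        self.threshold = threshold          # success rate needed to expand

    def sample(self):
        # Per-episode draw: lambda_i ~ U(phi_L, phi_H)
        return random.uniform(self.low, self.high)

    def update(self, boundary, success_rate):
        # Expand the evaluated boundary if performance is good enough.
        if success_rate >= self.threshold:
            if boundary == "high":
                self.high += self.delta
            else:
                self.low -= self.delta
```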
+
+Table II summarizes the simulation parameters in the experiment. Each parameter starts from a single initial value and gradually expands to a wider range according to the pre-specified increment step $+ \Delta$ on the upper bound and decrement step $- \Delta$ on the lower bound. We include the final range from ADR expansion in the last column. These ranges are used when we sample 100 environments for evaluation in Section V-C. All parameters are uniformly sampled from these ranges at the beginning of each episode.
+
+| Parameter | Initial Value | $+\Delta$ | $-\Delta$ | Final Range |
+| --- | --- | --- | --- | --- |
+| Object size x (m) | 0.15 | 0.01 | -0.01 | [0.14, 0.16] |
+| Object size z (m) | 0.05 | 0.01 | -0.01 | [0.04, 0.06] |
+| Table friction | 0.3 | 0.1 | -0.1 | [0.1, 0.5] |
+| Gripper friction | 3 | / | -1 | [2, 3] |
+| Object density (g/m³) | 86 | 86 | -43 | [43, 172] |
+| Action translation scale (m) | 0.03 | / | -0.005 | [0.02, 0.03] |
+| Action rotation scale (rad) | 0.2 | / | -0.05 | [0.1, 0.2] |
+| Initial distance to wall (m) | 0 | 0.01 | / | [0, 0.02] |
+| Table offset x (m) | 0.5 | 0.01 | -0.01 | [0.48, 0.52] |
+| Table offset z (m) | 0.07 | 0.01 | -0.01 | [0.055, 0.075] |
+
+TABLE II: Simulation parameters in Automatic Domain Randomization
+
+## Appendix III EXPERIMENT SETUP
+
+Simulation: We build the simulation environment with Robosuite [30] in the MuJoCo simulator [31]. We use a box-shaped object in this task, with a default grasp location shown in Figure 1. The object is placed in a bin in front of the robot. We use single-grasp training by default; results related to multi-grasp training can be found in Appendix VII. Each episode has a length of 40 timesteps, which corresponds to 20 seconds of real-time execution. The initial joint configuration of the robot is randomized with Gaussian noise with a standard deviation of 0.02 rad.
+
+Real robot experiment: The policy is trained in the simulator and zero-shot transferred to a physical Franka Emika Panda robot. The code for controlling the robot is built on top of FrankaPy [32]. For real robot experiments, we use Iterative Closest Point (ICP) for pose estimation of the object, matching a template point cloud of the object to the current point cloud [33]. An example ICP result is shown in Figure 7.
+
+
+
+Fig. 6: Emergent behavior of the policy for the occluded grasping task involves multiple stages of contact mode transitions among the gripper, the object and the bin. The figure shows the corresponding stages in simulation versus the real robot execution of the policy.
+
+
+
+Fig. 7: Illustration of object pose estimation with ICP at three different timesteps of an episode. The blue points are the observed point cloud, which includes both the gripper and the object. The red points are the template model of the object.
+
+Evaluation metrics: We compare the policies across 5 random seeds for each method and plot the average performance with the standard deviation across seeds. Our main evaluation metric is the success rate at the final step of the episode, computed as $\mathbb{1}\left( {\Delta T} < 3\ \mathrm{cm}\right) \cdot \mathbb{1}\left( {\Delta \theta } < 10\ \mathrm{deg}\right)$ (see Section III for definitions). We use 10 episodes for each evaluation setting.
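As a concrete illustration, the success indicator above can be written as a small helper (function names are ours; the 3 cm and 10 deg thresholds are from the text):

```python
def episode_success(delta_t_m, delta_theta_deg):
    # 1(dT < 3 cm) * 1(dTheta < 10 deg), evaluated at the final step of an episode
    return float(delta_t_m < 0.03) * float(delta_theta_deg < 10.0)

def success_rate(final_errors):
    # Average the binary indicator over evaluation episodes (10 per setting in the text)
    return sum(episode_success(dt, dth) for dt, dth in final_errors) / len(final_errors)
```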
+
+Implementation details: We use Soft Actor Critic [34] to train the RL policy with the implementation from rlkit. Both the policy network and the Q-function are parameterized as multi-layer perceptrons (MLPs) with 3 layers of 512 neurons.
+
+## Appendix IV ADDITIONAL RESULTS ON EMERGENT BEHAVIORS
+
+In Section V-B, we discuss a typical emergent strategy of solving this task as a result of the design of the full system. Figure 6 includes a more detailed view of this strategy across multiple stages in simulation and on the real robot.
+
+One of the key decisions in this strategy is to use the left finger, rather than the right finger, to rotate the object. One might suppose an alternative approach: use the right finger to scoop the object against the wall and then directly roll the finger underneath the object to reach the grasp. However, this strategy is not physically feasible on the parallel gripper due to the limited degrees of freedom of the finger. We observe that policies that follow this strategy during exploration usually get stuck at a local optimum without successfully reaching the grasp (Figure 8a).
+
+
+
+(a) Local optimum: The gripper uses the right finger to lift the object and gets stuck at a local optimum.
+
+
+
+(b) Standing object: One of the successful strategies is to flip the object until it stands on its side and then reach the grasp.
+
+Fig. 8: More visualizations on the emergent behavior of the policies.
+
+Another type of successful strategy from some of the seeds is to flip the object to stand on its side and then move to the grasp (Figure 8b). This strategy overfits to the box object because it relies on the fact that the object remains stable after the flip. If the agent is trained on a more diverse set of objects without such stable poses, it might learn to avoid this strategy; however, for a box object, this is also a viable approach.
+
+
+
+Fig. 9: Ablations on the choice of controller.
+
+
+
+Fig. 10: Evaluation on the generalization of the policies by changing one physical parameter at a time.
+
+## Appendix V ABLATIONS ON LOW-LEVEL CONTROLLER
+
+We compare our method to different types of controllers to demonstrate that the choice of Operational Space Controller (OSC) is critical for extrinsic dexterity. From Figure 9, both joint torque and joint position control lead to worse performance, which indicates the importance of using end-effector coordinates for the action space. We also try increasing the gain of the OSC so that it becomes roughly equivalent to position control. The success rate becomes lower, which demonstrates that, beyond safety considerations, compliance is important for the success of contact-rich tasks.
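The intuition that raising the OSC gain approaches stiff position control can be seen in a one-dimensional task-space PD sketch (an illustrative toy model, not the controller used in the paper): under a constant contact force, the steady-state deflection shrinks as the gain grows, i.e. the high-gain arm stops yielding to contact.

```python
def pd_force(x, v, x_des, kp, kd):
    # Task-space PD law: kp sets stiffness, kd sets damping
    return kp * (x_des - x) - kd * v

def steady_state_deflection(kp, kd, f_ext=-5.0, dt=0.001, steps=4000):
    # Unit mass held at x_des = 0 while a constant contact force pushes on it;
    # semi-implicit Euler integration. Deflection approaches f_ext / kp.
    x, v = 0.0, 0.0
    for _ in range(steps):
        a = pd_force(x, v, 0.0, kp, kd) + f_ext
        v += a * dt
        x += v * dt
    return x
```

With kp = 100 the mass yields by about 5 cm to the contact force, while kp = 10000 yields only about 0.5 mm: the high-gain controller behaves like a rigid position controller.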
+
+## Appendix VI MORE RESULTS ON POLICY GENERALIZATION
+
+To further analyze the robustness of the policy across environment variations, we modify the important physical parameters one at a time to understand the sensitivity of the policies to these parameters. Following Section V-C, we include the comparison of open loop trajectories (Open Loop), policies trained over a fixed environment (Fixed Env) and policies trained with ADR (With ADR). The closed-loop policies with ADR can deal with much wider variations of physical parameters than open loop trajectories.
+
+
+
+(b) MultiGrasp-Side: The policy can use another side of the wall to rotate the object and reach the desired grasp.
+
+Fig. 11: Visualizations of the multi-grasp policies.
+
+## Appendix VII MULTIGRASP TRAINING AND SELECTION
+
+In previous sections, we only consider the scenario in which a single grasp is given for each episode. In this section, we consider the scenario in which a set of desired grasp configurations $G = \left\{ {g}_{i}\right\}$ is given. We first discuss the method for multi-grasp training and selection and then provide the experimental results.
+
+MultiGrasp Training with Curriculum: During training, we aim to cover as many grasp configurations from ${G}_{\text{train}}$ as possible. The straightforward approach is to uniformly sample a goal $g \sim {G}_{\text{train}}$ for each episode. However, previous work has shown that learning directly over such a diverse set of goals can make policy learning difficult [35], [36]. Instead, we use an automatic curriculum following [8] to gradually expand the set of grasps to train on. We start the training with a single fixed grasp; after the policy achieves a success rate above a threshold, it is trained on a slightly larger set containing grasps close to the initial grasp location.
+
+MultiGrasp Selection: During testing, a set of grasps ${G}_{\text{test}}$ is provided. Our method selects the grasp in the set that maximizes the learned Q-function for the current observation: ${g}^{*} = \arg \mathop{\max }\limits_{g \in {G}_{\text{test}}} Q\left( {{s}_{t},{a}_{t},g}\right)$. Selecting the best grasp from the set (instead of just using a single grasp) can improve the performance of the grasping task, following previous work in integrated grasp and motion planning [20], [21], [6]. The learned Q-function can select the grasp that is most easily reached with the trained policy; which grasp is selected thus depends both on the environmental configuration and on how well the policy has learned to achieve different grasp configurations.
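A minimal sketch of this selection rule, where the policy and Q-function arguments are hypothetical stand-ins for the learned networks:

```python
def select_grasp(policy, q_fn, s_t, candidate_grasps):
    # Score each candidate grasp g by Q(s_t, pi(s_t, g), g) and return the best one.
    def score(g):
        return q_fn(s_t, policy(s_t, g), g)
    return max(candidate_grasps, key=score)
```

Because the Q-function is conditioned on the current observation, the selected grasp can change with the scene configuration.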
+
+MultiGrasp Training Results: In this experiment, we train the policy to reach a range of grasp locations with the curriculum described above. Given the box object, we generate grasp configurations around the box and parameterize them by a continuous scalar grasp ID in the range $\left\lbrack {0,4}\right\rbrack$ (Figure 12a). Grasp ID 1.5 is the default grasp used in the single-grasp experiments. The policy is trained with an automatic curriculum: when the success rate of the policy on a boundary case of the training range is above 0.8, the range of grasps is expanded by 0.25. For example, if the policy is currently training with grasps $\left\lbrack {1,2}\right\rbrack$ and the success rate evaluated at grasp ID 1 is above 0.8, the new training range will be $\left\lbrack {{0.75},2}\right\rbrack$. We train two types of multi-grasp policies starting from two different grasp poses: MultiGrasp-Front, which starts the training from ID 1.5, and MultiGrasp-Side, which starts the training from ID 2.5. As a baseline, we also train a policy by uniformly sampling from the entire set of grasps without a curriculum, named All Grasp.
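The curriculum rule described above can be sketched as follows (a simplified sketch; function and variable names are ours, while the 0.8 threshold, 0.25 step, and [0, 4] ID range are from the text):

```python
def expand_grasp_range(lo, hi, success_at_lo, success_at_hi,
                       threshold=0.8, step=0.25, id_min=0.0, id_max=4.0):
    # Widen the training range of grasp IDs on each side whose boundary
    # grasp is solved reliably (success rate above 0.8), by 0.25 per side.
    if success_at_lo > threshold:
        lo = max(id_min, lo - step)
    if success_at_hi > threshold:
        hi = min(id_max, hi + step)
    return lo, hi
```

For the example in the text, a range of [1, 2] with success 0.9 at grasp ID 1 expands to [0.75, 2].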
+
+
+
+Fig. 12: Multi-grasp training: Left: Visualization of the range of grasp configurations and the grasp IDs used in multi-grasp training. Right: Performance of the multi-grasp policies across grasp configurations.
+
+Figure 11a and Figure 11b include qualitative examples of the behaviors of MultiGrasp-Front and MultiGrasp-Side. The policy first rotates the object and then tries to match the pose more precisely. MultiGrasp-Side uses a different wall of the bin to rotate the object than MultiGrasp-Front. Figure 12b shows the performance of these policies evaluated across grasp configuration IDs. We find that both MultiGrasp-Front and MultiGrasp-Side are able to expand from a single grasp to most of the grasps on one side of the object through the curriculum. The policies have difficulty reaching the other sides, potentially due to exploration issues or limited policy capacity. Reaching different grasp configurations may require completely different strategies (Figure 11), which is difficult to learn with a single policy (related to [35]). In contrast, All Grasp has difficulty learning any of the grasp configurations, showing the importance of using a curriculum for multi-grasp training.
+
+MultiGrasp Selection Results: To compare grasp selection methods, at the beginning of each episode we sample 50 grasp configurations from the training range of the policy, which the grasp selection methods use as the set of desired grasps. We evaluate the following grasp selection options:
+
+- ArgmaxQ: passes all candidate grasp configurations into the Q-function and selects the one with the highest Q-value.
+
+- PoseDiff: selects the grasp with the smallest distance to the current gripper pose according to Equation 2 (with the same weights as in the reward function).
+
+TABLE III: Comparison of grasp selection methods in two scenarios: front grasps and side grasps. When grasping from the side, the policy achieves better performance when using the Q-function to select the grasp.
+
| Method | MultiGrasp-Front | MultiGrasp-Side |
| ArgmaxQ | $1.00 \pm 0.00$ | $1.00 \pm 0.00$ |
| ArgmaxQ-${t}_{0}$ | $1.00 \pm 0.00$ | $1.00 \pm 0.00$ |
| PoseDiff | $1.00 \pm 0.00$ | $0.96 \pm 0.08$ |
| PoseDiff-${t}_{0}$ | $1.00 \pm 0.00$ | $0.50 \pm 0.43$ |
| Uniform | $0.54 \pm 0.16$ | $0.90 \pm 0.06$ |
+
+- ArgmaxQ-${t}_{0}$: selects the grasp according to ArgmaxQ only at the first timestep of the episode instead of at every timestep.
+
+- PoseDiff-${t}_{0}$: selects the grasp according to PoseDiff only at the first timestep of the episode instead of at every timestep.
+
+- Uniform: samples a grasp from the set uniformly.
+
+The results are summarized in Table III. For MultiGrasp-Front, all of the methods other than Uniform achieve a 100% success rate. In this case, the best grasp according to the Q-function does correspond to the grasp closest to the gripper, at grasp ID 1.5. For MultiGrasp-Side, ArgmaxQ-${t}_{0}$ has a higher success rate than PoseDiff-${t}_{0}$. The policy performs a more complicated maneuver to reach the side grasp, so the Q-function may capture the difficulty of the goal better than the pose difference does. At the beginning of the episode, the Q-function selects ID = 2.5 while the pose difference selects ID = 2. If this goal is kept throughout the episode, PoseDiff-${t}_{0}$ has a much lower success rate than the other baselines. If the policy can instead reselect the goal throughout the episode (PoseDiff), the performance improves compared to PoseDiff-${t}_{0}$.
\ No newline at end of file
diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/Zrp4wpa9lqh/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/Zrp4wpa9lqh/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..73219c213aff5f58b2ef3734cb8cd87bb66a85c7
--- /dev/null
+++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/Zrp4wpa9lqh/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,144 @@
+§ LEARNING TO GRASP THE UNGRASPABLE WITH EMERGENT EXTRINSIC DEXTERITY
+
+Wenxuan Zhou ${}^{1}$ and David Held ${}^{1}$
+
+
+Fig. 1: We study the task of "Occluded Grasping" with extrinsic dexterity. The goal of this task is to reach an occluded grasp configuration (indicated by a transparent gripper attached to the object in the top row). The figure shows the emergent behavior of the trained policy which uses the wall of the bin to rotate the object to reach a grasp.
+
+Abstract - A robot can solve more complex manipulation tasks beyond the limitations of its body if it can utilize the external environment, for example by pushing the object against the table or a vertical wall. These behaviors are known as "Extrinsic Dexterity." Previous work in extrinsic dexterity usually relies on hand-crafted primitives or careful assumptions about contacts. In this work, we explore the use of reinforcement learning (RL) for extrinsic dexterity through the task of "Occluded Grasping". The goal of the task is to grasp the object in configurations that are initially occluded; the robot must interact with the object and the extrinsic environment to move the object into a configuration from which these grasps can be achieved. To accomplish this task, we train a policy to co-optimize pre-grasp and grasping motions; this results in the emergent behavior of pushing the object against the wall in order to rotate and then grasp it. We demonstrate the generality of the learned policy across environment variations in simulation and evaluate it on a real robot with zero-shot sim2real transfer. Videos can be found at https://sites.google.com/view/grasp-ungraspable.
+
+§ I. INTRODUCTION
+
+Humans have dexterous multi-fingered hands; similarly dexterous robot hands, however, are expensive and fragile. Instead, robots can achieve dexterous manipulation with a simple hand by leveraging the environment, known as "Extrinsic Dexterity" [1]. For example, a simple gripper can rotate an object in-hand by pushing it against the table [2], or lift an object by sliding it along a vertical surface [3]. By exploiting external resources such as contact surfaces or gravity, even simple grippers can perform skillful maneuvers that are typically studied with multi-fingered dexterous hands. In contrast to the common practice of considering the robot and the object of interest in isolation, extrinsic dexterity takes a holistic view of the interactions among the robot, the object, and the external environment.
+
+Previous work in extrinsic dexterity has demonstrated a variety of tasks such as in-hand reorientation with a simple gripper, prehensile pushing, and shared grasping [1], [2], [3]. However, the underlying approaches come with several limitations, such as relying on hand-designed primitives, making assumptions about contact locations and contact modes, or requiring a specific gripper design. Instead, we use reinforcement learning (RL) to remove these limitations. With reinforcement learning, the agent can learn a closed-loop policy for how the robot should interact with the object and the environment to solve the task. In addition, when trained with domain randomization, the policy can learn to be robust to variations in the physics. These properties of RL can enable extrinsic dexterity in a more general setting.
+
+We study "Occluded Grasping" as an example of a task that requires extrinsic dexterity. Occluded Grasping is defined with the goal of grasping an object in poses that are initially occluded. Consider, for example, a robot that needs to grasp a cereal box lying on its side on a table; the desired grasp is not reachable because it is partially occluded by the table (Figure 1). To achieve this grasp with a parallel gripper, the robot might rotate the object by pushing it against a vertical wall to expose the desired grasp. This task is in contrast with existing grasping tasks which mostly focus on reaching an unoccluded grasp in free space with a static or near-static scene [4], [5], [6]. Prior work has attempted to design pre-grasp motions of exposing occluded grasp poses with primitives or special gripper design [7]. In our work, the pre-grasp motion is an emergent behavior through a novel reward function that co-optimizes exposing the grasp pose and achieving the grasp pose. In addition, we frame the task as a goal-conditioned RL problem, in which the policy is conditioned on the selected grasp. During training, the policy learns to reach as many grasp poses as possible with an automatic curriculum [8]. During testing, given a set of grasps, the policy can select one of them as a goal to execute.
+
+In summary, we present a system for "Occluded Grasping" as an example of combining reinforcement learning and extrinsic dexterity. We provide a comprehensive evaluation of the system both in simulation and on a real Franka Emika Panda robot. We showcase the importance of each component and the generalization of the learned policy across environment variations in simulation and in the real world.
+
+${}^{1}$ Robotics Institute, Carnegie Mellon University
+
+§ II. RELATED WORK
+
+§ A. EXTRINSIC DEXTERITY
+
+"Extrinsic dexterity" is a type of manipulation skills that enhance the intrinsic capability of a hand using external resources including external contacts, gravity, or dynamic motions of the arm [1]. Previous work in extrinsic dexterity has demonstrated complex manipulation tasks with a simple gripper including in-hand reorientation [1], [9], prehensile pushing [2], [10], shared grasping [3], etc. In this work, we study a different task that can further demonstrate the benefit of extrinsic dexterity. Extrinsic dexterity usually involves contact-rich behaviors which poses difficulties in planning and control. Previous work has used hand-crafted trajectories [1], task-specific motion primitives [9], [3] or motion planning over contact mode switches [2], [10], [11], [12]. They come with the restrictions on the contact modes between the finger and the object which will limit the motion and the design of the gripper. In this work, we take an alternative approach of using reinforcement learning to learn a closed-loop policy that considers both planning and control.
+
+§ B. REINFORCEMENT LEARNING FOR MANIPULATION
+
+Previous work that uses reinforcement learning for manipulation tasks treats the object and the robot in isolation without considering extrinsic dexterity [13], [14], [8]. In our work, we demonstrate that the agent can benefit from extrinsic dexterity when solving the occluded grasping task.
+
+§ C. GRASPING
+
+Grasping is an important task in robot manipulation and has been studied from many perspectives.
+
+Grasp generation: One area of study in grasping is the generation of stable grasp configurations [15], [16], [17], [4], [18], [5], [19]. We assume that the grasps generated by any such method can be used as input to our system.
+
+Grasp execution: To execute a grasp following grasp generation, a motion planner is usually used to generate a collision-free path towards the desired grasp configuration. If there is a set of desired grasps, integrated grasp and motion planning can be used [20], [21], [6]. [22] uses imitation learning and reinforcement learning to finetune the trajectories from the planner. All of these works aim to reach unoccluded grasp configurations in static or near-static scenes. Instead, our work focuses on the complementary direction of achieving occluded grasp locations by interacting with the object of interest.
+
+Pre-Grasp manipulation: To deal with occluded grasp configurations, prior work has studied pre-grasps as a preparatory stage [23], [24], [25], [7]. [7] is the most related to our work, but they use a specially designed end-effector to perform the pre-grasp motion and then use a second gripper to grasp the object. We demonstrate that the full grasping task can be solved with a single gripper without special requirements on the end-effector. These previous works typically separate pre-grasp motion and grasp execution into two stages and impose restrictions on the transitions between the stages. In our work, we co-optimize pre-grasp and grasp execution within an episode without an explicit separation of stages. The pre-grasping behavior emerges through learning without restrictions on object or gripper motions.
+
+End-to-end grasping: Another line of work uses an end-to-end pipeline for grasping with reinforcement learning [26] or imitation learning [27]. The policy performs an arbitrary grasp of the object without the possibility of specifying a certain set of grasps. Also, existing work has not shown any emergent behavior of exposing occluded grasp poses.
+
+§ III. TASK DEFINITION: OCCLUDED GRASPING
+
+Our work is designed to be used in a pipeline that follows a grasp pose generation method such as [4], [5], [19]. Given a rigid object, we assume a desired grasp $g$ as input to the system. A grasp configuration $g \in {SE}\left( 3\right)$ is defined as the desired 6D pose of the end-effector in the object frame $O$. The grasp is fixed with respect to the object and moves when the object moves. In the top row of Figure 1, an example of a desired grasp is shown as a transparent gripper attached to the object. The goal of our work is to learn grasp execution, which is to move the end-effector $E$ close to a given $g$ under a pose difference metric $\Delta \left( {g,E}\right)$. In this paper, the task is defined to be successful if the position difference ${\Delta T}\left( {g,E}\right)$ and the orientation difference ${\Delta \theta }\left( {g,E}\right)$ are less than the pre-defined thresholds ${\varepsilon }_{T}$ and ${\varepsilon }_{P}$, respectively, at the end of an episode. After successfully reaching the desired grasp pose, the gripper is closed to complete the grasp. We define an "Occluded Grasping" task to be the case where the grasp $g$ is initially occluded (not in free space). When a set of grasps $G = \left\{ {g}_{i}\right\}$ is available, we may select a grasp ${g}_{i}$ from the set $G$ to execute (Appendix VII).
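As an illustration, ${\Delta T}$ and ${\Delta \theta}$ can be computed from the two poses as the translational distance and the geodesic rotation angle (a sketch with rotation matrices; the weighted combination the paper actually optimizes is given later in Equation 2):

```python
import numpy as np

def pose_difference(R_g, t_g, R_e, t_e):
    # Translational error: distance between the grasp and end-effector origins.
    dT = np.linalg.norm(t_g - t_e)
    # Rotational error: angle of the relative rotation R_g^T R_e,
    # recovered from its trace and clipped for numerical safety.
    cos_angle = (np.trace(R_g.T @ R_e) - 1.0) / 2.0
    dtheta = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return dT, dtheta
```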
+
+§ IV. LEARNING OCCLUDED GRASPING WITH REINFORCEMENT LEARNING
+
+We study the use of reinforcement learning (RL) to train a closed-loop policy for the occluded grasping task defined above. In this section, we first discuss important design choices of the system for a single target grasp, including the extrinsic environment and the design of the RL problem. Then, we discuss how to improve the generalization of the policy using Automatic Domain Randomization [8]. Training and evaluation procedures that process a set of grasps can be found in Appendix VII.
+
+§ A. EXTRINSIC ENVIRONMENT
+
+To showcase the benefits of extrinsic dexterity from object-scene interaction in this task, we construct the scene with the object in a bin instead of on an open table (Figure 2). In Section V, we show that the emergent policy utilizes the wall of the bin to rotate the object. Without the wall, the policy is not able to find a strategy that successfully performs the task.
+
+§ B. RL PROBLEM DESIGN
+
+We discuss the design of the RL problem in this section. More details can be found in Appendix I. We train a goal-conditioned policy $\pi \left( {{a}_{t} \mid {s}_{t},g}\right)$ for this task, where the goal is a target grasp configuration $g$. The state ${s}_{t}$ includes the pose of the end-effector and the object pose. The action space of the policy is the delta pose of the end-effector ${\Delta E}$, which is sent to a low-level Operational Space Controller (OSC). The choice of OSC allows compliant movement for such a contact-rich task (see Appendix I for more discussion). The reward function is designed to co-optimize the pre-grasp motion as well as grasp execution:
+
+$$
+r = {\alpha D}\left( {g,E}\right) + \beta \mathop{\sum }\limits_{i}P\left( {m}_{i}\right) \tag{1}
+$$
+
+
+Fig. 2: $E$ denotes the $6\mathrm{D}$ pose of the end-effector. $g$ denotes the target grasp defined in the object frame. Marker locations ${m}_{i}$ in green on the target grasp are used to calculate the occlusion penalty.
+
+where
+
+$$
+D\left( {g,E}\right) = {\alpha }_{1}{\Delta T}\left( {g,E}\right) + {\alpha }_{2}{\Delta \theta }\left( {g,E}\right) \tag{2}
+$$
+
+${\alpha }_{1},{\alpha }_{2}$ and $\beta$ are the weights of the reward terms. The first term of Equation 1, $D\left( {g,E}\right)$, is the pose difference between the target grasp and the current end-effector pose. This term is expanded in Equation 2 to include the translational and rotational distances, as described in Section III. The second term of Equation 1 is the target grasp occlusion penalty, which penalizes the policy if the target grasp is occluded by the table. We set several marker points on the target gripper (Figure 2), denoted ${m}_{i}$, and compare the height of each marker with the table top. If a marker is below the table top, the height difference is used as the penalty. The occlusion penalty effectively reduces the local optimum in which the gripper reaches close to the (occluded) target grasp without trying to move the object.
+
+To summarize, the first term of Equation 1 optimizes for successful grasp execution, and the second term encourages pre-grasp motions that move the object such that the grasp $g$ becomes unoccluded. An important difference from previous work is that the pre-grasp and grasp execution components are optimized together instead of being separated into two stages. We do not include any reward terms that are explicitly related to extrinsic dexterity. In our system, the use of extrinsic dexterity is an emergent behavior of policy optimization given our objective and environmental setup.
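A compact sketch of Equations 1-2 (the weight values and marker layout are placeholders, not the paper's settings; we adopt the convention that both terms enter as negated costs so that the reward is maximized at an unoccluded, reached grasp):

```python
def occlusion_penalty(marker_heights, table_height):
    # Sum of height deficits of target-grasp markers m_i that lie below the table top
    return sum(max(0.0, table_height - h) for h in marker_heights)

def reward(dT, dtheta, marker_heights, table_height,
           a1=1.0, a2=1.0, beta=1.0):
    # Eq. 2: weighted pose distance D(g, E); Eq. 1: D plus the occlusion term.
    # a1, a2, beta are illustrative weights, not the values used in the paper.
    D = a1 * dT + a2 * dtheta
    return -(D + beta * occlusion_penalty(marker_heights, table_height))
```

At the target grasp with all markers above the table, the reward reaches its maximum of zero; markers below the table or residual pose error both pull it down.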
+
+§ C. POLICY GENERALIZATION
+
+One benefit of using RL is that it produces a closed-loop policy instead of an open-loop trajectory. A closed-loop policy can ideally generalize to a wider range of state distributions, which implies better performance over variations in environment properties such as object size, density, and friction coefficient. The generalization can be improved further by training with domain randomization over the environment variations, which also benefits sim-to-real transfer. We use Automatic Domain Randomization (ADR) [8] to improve the generalization of the policy. More implementation details can be found in Appendix II.
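The ADR loop can be sketched as follows (a simplified version with names of our choosing; in the full method each physical parameter in Table II has its own interval and expansion increments):

```python
import random

def sample_env(param_ranges):
    # Draw one training environment by sampling each physical parameter
    # uniformly from its current randomization interval.
    return {name: random.uniform(lo, hi) for name, (lo, hi) in param_ranges.items()}

def maybe_expand(param_ranges, name, boundary_success,
                 grow=(0.01, 0.01), threshold=0.8):
    # Widen a parameter's interval once the policy performs well at its boundary,
    # gradually exposing the policy to harder environment variations.
    lo, hi = param_ranges[name]
    if boundary_success > threshold:
        param_ranges[name] = (lo - grow[0], hi + grow[1])
    return param_ranges
```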
+
+
+Fig. 3: Left: Ablations on the reward function and the walls. Right: Evaluation on the generalization of the policies by sampling 100 environments.
+
+§ V. EXPERIMENTS
+
+§ A. TRAINING CURVES AND ABLATIONS
+
+Details of the experiment setup can be found in Appendix III. In this section, we train the policies with a single desired grasp in the default environment without randomization of the physical parameters. From the training curve shown in Figure 3a, the policy trained with the complete system reaches a success rate of 1 before 4000 episodes, which corresponds to 160000 environment steps. We performed an ablation analysis on the design choices to determine which components were the most important to the success of the system. First, we experiment with removing the wall of the bin to evaluate the importance of using the wall for extrinsic dexterity. As shown in Figure 3a, the resulting policy has a $0\%$ success rate and pushes the object off the table. Second, we performed an ablation on the reward function. When we remove the grasp pose occlusion penalty (the second term of Equation 1), the policy is more likely to get stuck at a local optimum of only trying to match the position and orientation of the gripper, and thus the average success rate across random seeds becomes lower. An alternative is to use a $\{ - 1,0\}$ sparse reward according to the success criteria defined in Section III instead of the reward defined in Equation 1. With a sparse reward, the policy learns much more slowly, as the sparse reward makes exploration much more difficult. In addition, ablations on the choice of controller can be found in Appendix V. We also include results for multi-grasp training and multi-grasp selection in Appendix VII.
+
+§ B. EMERGENT BEHAVIORS
+
+Figure 1 shows a typical strategy of the successful policies. The strategy involves multiple stages of contact switches. The gripper first moves close to the object and makes contact with the side of the object using the left finger. It then pushes the object against the wall to rotate it. During this stage, the gripper maintains a fixed or rolling contact with the object, while the object is usually in sliding contact with the wall and the bottom of the bin at some of its corners. After the object has rotated further and the right fingertip is below the object, the left finger slides on the object or simply leaves it to let the object drop onto the right finger. Once the object rests on the right finger, the gripper tries to match the desired pose more precisely. At this point, the policy has executed the grasp successfully and is ready to close the gripper. We include more visualizations of emergent behaviors in Appendix IV, including another type of successful strategy, local optimum behavior, and multi-grasp behaviors. Videos can be found on the project website.
+
+§ C. POLICY GENERALIZATION
+
+In this section, we analyze the performance of the policy across environment variations. The robustness to environment variations may come both from the policy being closed-loop and from the randomization of the physical parameters during training. Thus, we evaluate open-loop trajectories (Open Loop), policies trained in a fixed environment (Fixed Env), and policies trained with ADR (With ADR). The open-loop trajectories are obtained by rolling out the Fixed Env policies in the default environment. We also turn off the randomization of the initial gripper pose for Open Loop; otherwise, the success rate is too low to compare with, even in the default environment. We sample 100 environments from the training range of the ADR policies (Appendix II) and plot the percentage of environments that are above a certain performance metric (Figure 3b). The closed-loop policies are much better than open-loop trajectories across environment variations. The policy trained in a fixed environment is able to generalize to a wide range of variations; with ADR, the generalization improves even further. We also modify the important physical parameters one at a time to understand the sensitivity to these parameters in Appendix VI.
+
+§ D. REAL-ROBOT EXPERIMENT
+
+To further evaluate the generalization of the policies and demonstrate the feasibility of the proposed system, we execute the policies on the real robot with zero-shot sim2real transfer over the 6 test cases shown in Figure 4. There are four box-shaped objects with different sizes, densities, and surface frictions. Box-1 has the same size and density as the default object trained in simulation. Box-2 is larger than the training range in the y-direction. Box-3 is larger than the training range in the z-direction. The surface frictions differ substantially across the boxes; for example, Box-3 has tape on its surface, which has much higher friction than the others (as can be seen in the videos on the website). However, we do not have access to the true friction coefficients of the objects to compare with the values in simulation. In addition, we evaluate Box-1 with additional weights by putting four or eight erasers inside the box. Note that the erasers move inside the box during execution, which is not modeled in simulation. We evaluate two types of single-grasp policies trained in simulation: one trained with Automatic Domain Randomization as described in Section IV-C, and another trained in the fixed default environment without domain randomization.
+
+
+Fig. 4: Test cases for real robot experiments.
+
+TABLE I: Real robot evaluations.
+
+| Object-ID | Size (cm) | Weight (g) | Success w/ ADR | Success w/o ADR |
+| --- | --- | --- | --- | --- |
+| Box-1 | (15.0, 20.0, 5.0) | 128 | 10/10 | 10/10 |
+| Box-1 + 4 erasers | (15.0, 20.0, 5.0) | 237 | 8/10 | 7/10 |
+| Box-1 + 8 erasers | (15.0, 20.0, 5.0) | 345 | 6/10 | 4/10 |
+| Box-2 | (15.4, 29.2, 5.8) | 130 | 8/10 | 8/10 |
+| Box-3 | (15.3, 22.2, 7.4) | 113 | 10/10 | 4/10 |
+| Box-4 | (15.3, 22.2, 7.4) | 50 | 7/10 | 0/10 |
+| Average | n/a | n/a | 82% | 55% |
+
+We evaluate 10 episodes for each test case and summarize the results in Table I. Videos of the real robot experiments can be found on the website. Overall, the policy with ADR achieves a success rate of ${82}\%$ while the policy without ADR achieves ${55}\%$ . ADR effectively improves performance over a wider range of object variations. Note that both policies are evaluated on out-of-distribution objects: Box-1 with 8 erasers, Box-3, and Box-4 are outside the training distribution of ADR (see Appendix II), and all of the test cases except the first one (Box-1) are out-of-distribution for the policy without ADR. This demonstrates the robustness of the closed-loop policies of the proposed pipeline on such a dynamic manipulation task.
+
+§ VI. CONCLUSION
+
+We study the "Occluded Grasping" task of reaching a desired grasp configuration that is initially occluded. With a parallel gripper, the robot has to use extrinsic dexterity to solve this task. We present a system that learns a closed-loop policy for this task with reinforcement learning. In the experiments, we demonstrate that the wall, the choice of controller, and the design of the reward function are all essential components. The policy can generalize across a wide range of environment variations and can be executed on the real robot. One potential extension of our work is to train the policy with a wide variety of object shapes which may require image-based policies. Also, the pipeline can potentially be applied to other extrinsic dexterity tasks.
+
+https://sites.google.com/view/grasp-ungraspable
\ No newline at end of file
diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/_4tcqR3nQII/Initial_manuscript_md/Initial_manuscript.md b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/_4tcqR3nQII/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..2c4fbd331d45c02488a931b62cac981b4876edb2
--- /dev/null
+++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/_4tcqR3nQII/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,215 @@
+# Learning active tactile perception through belief-space control
+
+Jean-François Tremblay, Johanna Hansen, David Meger, Francois Hogan, Gregory Dudek
+
+Abstract- Robots operating in an open world can encounter novel objects with unknown physical properties, such as mass, friction, or size. It is desirable to be able to sense these properties through contact-rich interaction before performing downstream tasks with the objects. We propose a method for autonomously learning active tactile perception policies by learning a generative world model leveraging a differentiable Bayesian filtering algorithm and designing an information-gathering model predictive controller. We test the method on two simulated tasks: mass estimation and height estimation. Our method is able to discover policies that gather information about the desired property in an intuitive manner.
+
+## I. INTRODUCTION
+
+Robots operating in an open world can encounter arbitrary, unseen objects and are expected to manipulate them effectively. To achieve this, robots must have the ability to infer the physical properties of unknown objects through physical interactions. The online measurement of these properties is key for robots to operate robustly in the real-world with open-ended object categories.
+
+Psychology literature refers to the ways humans measure these properties as exploratory procedures [1]. These procedures include, for example, pressing to test for object hardness and lifting to estimate object mass. Such exploratory procedures are challenging to hand-engineer and vary based on the object class. This work focuses on learning exploratory procedures for estimating object properties through belief-space control. Using a combination of 1) learning-based state-estimation to infer the property from a sequence of observations and actions and 2) information-gathering model-predictive control (MPC), we demonstrate that it is possible to learn to execute actions that are informative about the property of interest and to discover exploratory procedures without any human priors.
+
+## II. RELATED WORKS
+
+## A. Learning for state-estimation
+
+There are several works proposing the fusion of Bayesian filtering methods with deep learning, where the dynamics and observation models used are learned neural networks.
+
+Lee et al. [2] provide a good overview of learning Bayesian filtering models for robotics applications and release torchfilter, a library of algorithms for this purpose, which we build on for our belief-space control algorithm.
+
+In [3], the authors present the Backprop Kalman filter, a discriminative approach to filtering. Discriminative filtering does away with learning an observation model (a mapping from state to observation) and instead learns a mapping from observation to state. Here, we argue that learning a generative observation model, while more computationally challenging, is key to predicting future state uncertainty and planning for informative actions.
+
+Burkhart et al. [4] present the discriminative Kalman filter concurrently to [3]. This approach assumes linear dynamics and models the prior over observations as Gaussian. It can only handle stationary observation processes.
+
+## B. Active perception
+
+Active perception consists of acting in a way that assists perception and can incorporate learning, including the learning methods above. Denil et al. [5] use reinforcement learning in the "Which is Heavier" and "Tower" environments; the goal of the former is to push blocks and, after a certain interaction period, take a "labelling action" to guess which block is heavier. A reward is given if the label is correct. They then train a recurrent deep reinforcement learning policy on that environment. The action space for these problems is constrained and designed so that the blocks are pushed with a fixed force towards their center of mass. While this method enables the robot to effectively retrieve mass using human priors and intuition, our work differs in that the robot is tasked with discovering such behaviors autonomously with an unconstrained action space.
+
+More specifically to robotics, Wang et al. [6] introduce SwingBot, a robotic system that swings up an object with changing physical properties (moments, center of mass). Before the swing up phase, the system follows a hand-engineered exploratory procedure that shakes and tilts the object in the hand to extract the necessary information for a successful swing up. Rather than engineering the exploration phase, we propose a generic framework for extracting such information before accomplishing a given task.
+
+## III. METHODS
+
+We are in a controlled hidden Markov model (HMM) setting (a partially observable Markov decision process (POMDP) without a reward function), where each observation ${o}_{t}$ gives us partial information about the state of the robot and the object we are interested in. More formally, a controlled HMM is a tuple $\left( {\mathcal{S},\mathcal{A}, p\left( {{s}_{t + 1} \mid {s}_{t},{a}_{t}}\right) ,\Omega , p\left( {{o}_{t} \mid {s}_{t}}\right) }\right)$ , where the state, action, and observation spaces ($\mathcal{S}$ , $\mathcal{A}$ , and $\Omega$ respectively) are ${\mathbb{R}}^{n}$ , ${\mathbb{R}}^{m}$ , and ${\mathbb{R}}^{d}$ respectively. It is important to note that in this context, the state can contain robot pose and velocity, object pose and velocity, object properties, and any other property that describes the environment and is subject to change either during or in between episodes. The state representation will be learned in a self-supervised fashion, as described in § III-A, in such a way that the first element of the state represents the object property of interest:
+
+$$
+{s}_{t} = \left( {{m}_{t},{z}_{t}}\right) ,{m}_{t} \in \mathbb{R},{z}_{t} \in {\mathbb{R}}^{n - 1}. \tag{1}
+$$
+
+We are in an episodic setting with ending timestep $T$ , and where at each episode the object is randomized. For mass estimation as an example, at each episode, an object with a different mass is presented and the goal is to infer the mass of this new object.
+
+In § III-A we describe how to infer the belief state (containing an estimate of the object property of interest) ${b}_{t} \approx p\left( {{s}_{t} \mid {a}_{0},\ldots ,{a}_{t - 1},{o}_{1},\ldots ,{o}_{t}}\right)$ and ${\bar{b}}_{t} \approx p\left( {{s}_{t} \mid {a}_{0},\ldots ,{a}_{t - 1},{o}_{1},\ldots ,{o}_{t - 1}}\right)$ . In § III-B we use that estimate to design an information-gathering controller. Finally, in § III-C we present how to integrate these two components in a data-collection/training and control loop.
+
+## A. Learning-based Kalman filter
+
+Here the goal is to learn a dynamics and observation model while performing belief-state inference. The dynamics model representing $p\left( {{s}_{t} \mid {s}_{t - 1},{a}_{t - 1}}\right)$ is
+
+$$
+{s}_{t} = {f}_{\theta }\left( {{s}_{t - 1},{a}_{t - 1}}\right) + {\Sigma }_{\theta }\left( {{s}_{t - 1},{a}_{t - 1}}\right) {w}_{t} \tag{2}
+$$
+
+where ${w}_{t}$ are independent and identically distributed (IID) standard Gaussian random variables in ${\mathbb{R}}^{n}$ .
+
+Generative filtering (as opposed to discriminative filtering [2, 3]) implies learning a generative world-model, able to fully simulate the system and generate observations via the equation
+
+$$
+{o}_{t} = {h}_{\theta }\left( {s}_{t}\right) + {\Gamma }_{\theta }\left( {s}_{t}\right) {v}_{t}. \tag{3}
+$$
+
+where ${v}_{t}$ are IID standard Gaussian random variables in ${\mathbb{R}}^{d}$ . While learning this model can be more challenging in the face of high-dimensional and complex observation spaces (e.g. images), it opens up new avenues for forward belief-space planning.
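
As a rough illustration (not the paper's implementation), the generative model of Eqs. (2) and (3) can be sampled forward as follows; the linear maps below are placeholders standing in for the learned networks $f_\theta, \Sigma_\theta, h_\theta, \Gamma_\theta$:

```python
import numpy as np

n, m, d = 4, 3, 6  # state, action, and observation dimensions
rng = np.random.default_rng(0)

# Placeholder linear maps; in the paper these are neural networks with parameters theta.
A = 0.9 * np.eye(n)
B = 0.1 * rng.normal(size=(n, m))
C = 0.5 * rng.normal(size=(d, n))

f = lambda s, a: A @ s + B @ a          # dynamics mean f_theta
Sigma = lambda s, a: 0.05 * np.eye(n)   # dynamics noise scale Sigma_theta
h = lambda s: C @ s                     # observation mean h_theta
Gamma = lambda s: 0.1 * np.eye(d)       # observation noise scale Gamma_theta

def sample_step(s, a):
    """Sample s_t from Eq. (2), then o_t from Eq. (3)."""
    w = rng.standard_normal(n)           # w_t ~ N(0, I_n)
    s_next = f(s, a) + Sigma(s, a) @ w
    v = rng.standard_normal(d)           # v_t ~ N(0, I_d)
    o = h(s_next) + Gamma(s_next) @ v
    return s_next, o

s_next, o = sample_step(np.zeros(n), np.ones(m))
```

Because the observation model is generative, the same `sample_step` can later be rolled out from sampled states to predict future observations, which is exactly what forward belief-space planning requires.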
+
+Using an explicit-likelihood (Gaussian state-space model) setting, we train the model in a self-predictive manner. In (4)-(8), we present the derivation for the loss of the generative observation model. This derivation is adapted from [7], Chapter 12, with action variables integrated into it.
+
+$$
+p\left( {{o}_{1},\ldots ,{o}_{T} \mid \theta ,{a}_{0},\ldots ,{a}_{T - 1}}\right) \tag{4}
+$$
+
+$$
+= \mathop{\prod }\limits_{{t = 1}}^{T}p\left( {{o}_{t} \mid \theta ,{o}_{1},\ldots ,{o}_{t - 1},{a}_{0},\ldots ,{a}_{t - 1}}\right) \tag{5}
+$$
+
+$$
+= \mathop{\prod }\limits_{{t = 1}}^{T}{\int }_{{\mathbb{R}}^{n}}p\left( {{o}_{t} \mid \theta ,{s}_{t}}\right) p\left( {{s}_{t} \mid \theta ,{o}_{1},\ldots ,{o}_{t - 1},{a}_{0},\ldots ,{a}_{t - 1}}\right) d{s}_{t} \tag{6}
+$$
+
+$$
+\approx \mathop{\prod }\limits_{{t = 1}}^{T}{\int }_{{\mathbb{R}}^{n}}p\left( {{o}_{t} \mid \theta ,{s}_{t}}\right) {\bar{b}}_{t}\left( {{s}_{t} \mid \theta }\right) d{s}_{t} \tag{7}
+$$
+
+$$
+= \mathop{\prod }\limits_{{t = 1}}^{T}{\mathbf{E}}_{{s}_{t} \sim \bar{{b}_{t}}\left( {{s}_{t} \mid \theta }\right) }p\left( {{o}_{t} \mid \theta ,{s}_{t}}\right) \tag{8}
+$$
+
+Here ${\bar{b}}_{t}$ is the output of the predict step of our filter with inputs ${b}_{t - 1}$ and ${a}_{t - 1}$ ; it is only an approximation of $p\left( {{s}_{t} \mid \theta ,{o}_{1},\ldots ,{o}_{t - 1},{a}_{0},\ldots ,{a}_{t - 1}}\right)$ . Taking the $\log$ , obtaining a lower bound from Jensen's inequality, and replacing the expectation with an empirical mean over $N$ samples, we get:
+
+$$
+\log p\left( {{o}_{1},\ldots ,{o}_{T} \mid \theta ,{a}_{0},\ldots ,{a}_{T - 1}}\right) \tag{9}
+$$
+
+$$
+\gtrapprox \mathop{\sum }\limits_{{t = 1}}^{T}\frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}\log p\left( {{o}_{t} \mid \theta ,{s}_{t}^{i}}\right) \;{s}_{t}^{i} \sim {\bar{b}}_{t}\left( {{s}_{t} \mid \theta }\right) \tag{10}
+$$
+
+$$
+\mathrel{\text{:=}} \text{ELBO} \tag{11}
+$$
+
+Equations (9)-(11) give a lower bound on the log-likelihood (similar to the ELBO loss in VAEs [8]) that we use to train our model, leveraging the differentiable approximate inference used to compute ${\bar{b}}_{t}$ . Because ${\bar{b}}_{t} = \mathcal{N}\left( {{s}_{t} \mid {\bar{\mu }}_{t},{\bar{\Sigma }}_{t}}\right)$ , we can use the reparametrization trick to sample ${s}_{t}^{i}$ by sampling ${\xi }^{i}$ from an $n$ -dimensional standard Gaussian and letting
+
+$$
+{s}_{t}^{i} = {\bar{\mu }}_{t} + {\bar{\Sigma }}_{t}{\xi }^{i} \tag{12}
+$$
+
+$\theta$ represents the parameters of $f, \Sigma, h, \Gamma$ , which are neural networks. We jointly perform state-estimation and parameter optimization by estimating ${b}_{t} = \left( {{\mu }_{t},{\Sigma }_{t}}\right)$ using an extended Kalman filter (EKF), whose operations are all differentiable (as shown, for example, by Lee et al. [2]), and maximizing the likelihood of the ground-truth object property of interest. For example, if mass is of interest, the loss over an episode where the ground-truth mass is $m$ would be:
+
+$$
+{\mathcal{L}}_{m} = - \mathop{\sum }\limits_{{t = 1}}^{T}\log \mathcal{N}\left( {m \mid {\mu }_{t}^{1},{\Sigma }_{t}^{11}}\right) \tag{13}
+$$
+
+where $\mathcal{N}\left( {\cdot \mid \mu ,\sigma }\right)$ is a Gaussian pdf with mean $\mu$ and variance $\sigma$ . The first element of the state represents the mass, and we maximize its log-likelihood.
+
+The loss we minimize combines the negative of the self-predictive ELBO for the observations with the negative log-likelihood of the mass in the state representation:
+
+$$
+\mathcal{L} = -\mathrm{ELBO} + {\mathcal{L}}_{m} \tag{14}
+$$
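
A minimal sketch of the two loss terms, assuming a diagonal Gaussian observation model; the helper names and toy models are ours, not the paper's:

```python
import numpy as np

def gaussian_logpdf(x, mu, var):
    """Elementwise log N(x | mu, var)."""
    return -0.5 * (np.log(2.0 * np.pi * var) + (x - mu) ** 2 / var)

def elbo_term(o_t, mu_bar, Sigma_bar, h, Gamma, num_samples, rng):
    """One summand of Eq. (10): Monte Carlo estimate of E_{s ~ b_bar_t} log p(o_t | s),
    with samples drawn via the reparametrization of Eq. (12)."""
    total = 0.0
    for _ in range(num_samples):
        xi = rng.standard_normal(mu_bar.shape[0])
        s = mu_bar + Sigma_bar @ xi               # reparametrized sample from b_bar_t
        var = np.diag(Gamma(s) @ Gamma(s).T)      # diagonal observation variance
        total += gaussian_logpdf(o_t, h(s), var).sum()
    return total / num_samples

def mass_nll(m_true, mu_t, Sigma_t):
    """One summand of Eq. (13): negative log-likelihood of the true property
    under the first state dimension."""
    return -gaussian_logpdf(m_true, mu_t[0], Sigma_t[0, 0])

# Toy models so the sketch runs end-to-end.
rng = np.random.default_rng(0)
h = lambda s: s
Gamma = lambda s: 0.1 * np.eye(2)
elbo = elbo_term(np.zeros(2), np.zeros(2), 0.1 * np.eye(2), h, Gamma, 16, rng)
# Total loss: negative ELBO (so minimizing maximizes the data likelihood) plus the mass NLL.
loss = -elbo + mass_nll(1.5, np.array([1.4, 0.0]), 0.05 * np.eye(2))
```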
+
+In practice, we sample sequences of length less than $T$ and initialize the filter using stored beliefs in the dataset, in a truncated backpropagation-through-time fashion.
+
+## B. Information-gathering model-predictive controller
+
+The goal is to control the belief-space process in a way that collects information about the property we are trying to perceive. The belief space for continuous systems is generally infinite-dimensional (the space of probability distributions over the state space) and thus intractable to work with using traditional control tools. However, by approximating the belief with a parametric family (a Gaussian in our case), the problem can be formulated as a standard finite-dimensional continuous control problem. This is what we tackle here.
+
+a) Belief dynamics: We can use the learned world model to simulate the belief space dynamics, as illustrated in Figure 1. The key is to be able to use the learned observation model to predict the future uncertainty about the state, rather than merely predict future states.
+
+
+
+Fig. 1. Illustration of the sampling process for belief-space planning using a generative model. First, states are sampled from the current belief. We can then use our dynamics model and candidate actions to sample future states. These future states are given to our generative observation model to generate observations. We can then feed the generated observations and candidate actions to the state estimator to simulate the belief-space dynamics.
+
+b) Cost function: We want our controller to minimize the entropy $H$ of the system:
+
+$$
+J = \mathop{\sum }\limits_{{t = 1}}^{T}H\left( {b}_{t}^{1}\right) \tag{15}
+$$
+
+to minimize the uncertainty about the property of the object as soon as possible in the episode (compared to a final cost formulation). Minimizing this cost, for a Gaussian belief ${b}_{t} = \left( {{\mu }_{t},{\sum }_{t}}\right)$ , is equivalent to minimizing the cost
+
+$$
+J = \mathop{\sum }\limits_{{t = 1}}^{T}\log {\Sigma }_{t}^{11} \tag{16}
+$$
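
The equivalence follows from the closed-form entropy of a one-dimensional Gaussian: with marginal belief ${b}_{t}^{1} = \mathcal{N}\left( {{\mu }_{t}^{1},{\Sigma }_{t}^{11}}\right)$ ,

$$
H\left( {b}_{t}^{1}\right) = \frac{1}{2}\log \left( {2\pi e\,{\Sigma }_{t}^{11}}\right) = \frac{1}{2}\log {\Sigma }_{t}^{11} + \mathrm{const},
$$

so the two costs differ only by a positive factor and an additive constant and share the same minimizers.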
+
+c) Optimizer: In this work, we use a sampling-based optimizer that generates random sequences of actions and selects the one minimizing the cost. The actions are generated using a Gaussian random walk in three dimensions with a standard deviation of ${10}\mathrm{\;{cm}}$ . Following the model-predictive control framework, we only execute the first action of the sequence and then re-optimize.
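
The random-shooting optimizer can be sketched as follows; `belief_step`, which simulates one predict/generate/update cycle of Fig. 1, is an assumed interface, and beliefs are taken to be (mean, covariance) pairs:

```python
import numpy as np

def plan(belief, belief_step, horizon=10, num_seqs=64, step_std=0.10, rng=None):
    """Random-shooting MPC over simulated belief dynamics. Action sequences are
    Gaussian random walks in 3D with a 10 cm step standard deviation, as in the
    paper; the cost is the sum of log-variances of the property (Eq. 16)."""
    rng = rng or np.random.default_rng()
    best_cost, best_first = np.inf, None
    for _ in range(num_seqs):
        actions = np.cumsum(rng.normal(scale=step_std, size=(horizon, 3)), axis=0)
        b, cost = belief, 0.0
        for a in actions:
            b = belief_step(b, a, rng)
            _, Sigma = b
            cost += np.log(Sigma[0, 0])   # log-variance of the first state element
        if cost < best_cost:
            best_cost, best_first = cost, actions[0]
    return best_first                      # execute only the first action, then re-plan

# Toy belief dynamics: larger actions shrink the uncertainty faster.
def toy_step(b, a, rng):
    mu, Sigma = b
    return mu, Sigma / (1.0 + np.linalg.norm(a))

a0 = plan((np.zeros(2), np.eye(2)), toy_step, rng=np.random.default_rng(0))
```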
+
+## C. Full training and control loop
+
+During training, we follow the procedure:
+
+1) Collect data using current controller for one epoch (randomizing the object property of interest), saving the observations, actions and estimated beliefs as well as the ground truth object property for this epoch
+
+2) Train the state estimator using the dataset
+
+3) Update stored beliefs in the dataset (by replaying the actions and observations)
+
+Step 3) does not have to be done every epoch and can be costly as the dataset grows, but it is required to perform truncated backpropagation through time and to initialize our state estimate during training.
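
The three steps can be sketched as the following loop; `env`, `estimator`, and `planner` are assumed interfaces standing in for the simulator, the learned EKF, and the MPC, not the paper's released code:

```python
def training_loop(env, estimator, planner, num_epochs, episodes_per_epoch):
    """Sketch of the data-collection / training loop (steps 1-3)."""
    dataset = []
    for epoch in range(num_epochs):
        # 1) Collect one epoch with the current controller, storing the
        #    observations, actions, estimated beliefs, and ground-truth property.
        for _ in range(episodes_per_epoch):
            o, true_prop = env.reset()          # property of interest randomized
            b = estimator.initial_belief()
            obs, acts, beliefs = [], [], []
            done = False
            while not done:
                a = planner(b)
                o, done = env.step(a)
                b = estimator.update(b, a, o)
                obs.append(o); acts.append(a); beliefs.append(b)
            dataset.append([obs, acts, beliefs, true_prop])
        # 2) Train the state estimator on the dataset (truncated BPTT,
        #    initializing the filter from the stored beliefs).
        estimator.fit(dataset)
        # 3) Refresh the stored beliefs by replaying actions and observations
        #    through the updated estimator (not necessarily every epoch).
        for episode in dataset:
            episode[2] = estimator.replay(episode[0], episode[1])
    return dataset

# Minimal stubs so the sketch runs end-to-end.
class StubEnv:
    def reset(self):
        self.t = 0
        return 0.0, 1.0
    def step(self, a):
        self.t += 1
        return 0.0, self.t >= 3

class StubEstimator:
    def initial_belief(self): return (0.0, 1.0)
    def update(self, b, a, o): return b
    def fit(self, dataset): pass
    def replay(self, obs, acts): return [(0.0, 1.0)] * len(obs)

data = training_loop(StubEnv(), StubEstimator(), lambda b: 0.0,
                     num_epochs=2, episodes_per_epoch=1)
```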
+
+
+
+Fig. 2. MAE for the property-estimation tasks at the end of the episode, averaged over 5 runs, as learning progresses. The hand-engineered policy gives an upper bound on what can be achieved when the behavior does not need to be discovered and we simply have to extract the property from a sequence of sensor readings.
+
+## IV. EXPERIMENTS
+
+We set up a custom robosuite [9] environment for our experiments. The robot is a Franka Emika arm with a palm-shaped end-effector (as shown in Figure 3) and a force-torque sensor at the wrist. At each episode, a cube of the same size and visual appearance is laid down at the same location, with only its mass changing. We use position control with translation only. The observations are low-level for now: joint pose and velocity, object pose, and force and torque at the wrist.
+
+## A. Mass estimation
+
+The first task is to learn to estimate the mass of a cube. The cube has constant size and friction coefficient, but its mass changes randomly between $1\mathrm{\;{kg}}$ and $2\mathrm{\;{kg}}$ in between episodes. Because the robot has no gripper, just a palm, it cannot pick up the object, but it can push it and infer the mass from the force and torque readings generated by the push.
+
+## B. Height estimation
+
+The second task is to learn to estimate the height of a block, randomized between $1\mathrm{\;{cm}}$ and ${15}\mathrm{\;{cm}}$ . The force-torque sensor, in this scenario, also acts as a contact detector. The expected behavior is to come down until contact is made, at which point the height can be extracted from forward kinematics (keep in mind that our method has no concept of forward kinematics embedded into it). One subtlety is that the arm must position itself above the box before moving down, as it can otherwise make contact with the table instead.
+
+
+
+Fig. 3. Demonstration of the learned controller for mass estimation. We can see that it learns to stably push the object to extract mass from force torque readings. Notice how the uncertainty goes down as the arm starts pushing the block.
+
+## V. Results
+
+Every 5000 environment steps, we run the evaluation procedure. It consists of running 5 episodes with randomized object properties and computing the MAE, where the absolute error is computed using the estimate at the last timestep of the episode. The training curves, showing the evolution of the MAE for the different tasks, are shown in Figure 2. The graph also shows a line for which an information-gathering policy was hand-coded by a human and only the state-estimator was trained: straight pushing for mass and coming down to touch the block for height. It is meant as an approximate upper bound for the information-gathering controller.
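
This evaluation protocol can be sketched as follows; `run_episode`, which returns the true property and the final-timestep estimate, is an assumed interface:

```python
import numpy as np

def evaluate_mae(run_episode, num_episodes=5, rng=None):
    """Run episodes with randomized object properties and report the MAE of
    the final-timestep estimate, as in the evaluation procedure above."""
    rng = rng or np.random.default_rng()
    errors = [abs(true - est)
              for true, est in (run_episode(rng) for _ in range(num_episodes))]
    return float(np.mean(errors))

# Toy episode: true mass uniform in [1, 2] kg, estimate off by a fixed 0.05 kg.
mae = evaluate_mae(lambda rng: ((m := rng.uniform(1.0, 2.0)), m + 0.05))
```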
+
+We can see that as learning progresses, two things happen concurrently:
+
+1) the agent learns to perform informative actions. In the case of mass estimation, the policy pushes the block stably, as shown in Figure 3. In the case of height estimation, the policy goes down in a straight line until it touches the block.
+
+
+
+Fig. 4. Demonstration of the learned controller for height estimation. We can see that it learns to come down and adjust its estimate as it moves through free space, until touching the block.
+
+2) the state-estimator learns to extract the property of interest from the raw observations generated by the informative actions. For example, during height estimation, the uncertainty remains high until the end-effector touches the block, at which point the belief peaks at the correct height.
+
+It is important to note that the pushing strategy is in no way encoded in the agent; initial trajectories are simply random walks in the workspace.
+
+## VI. CONCLUSION
+
+With the goal of discovering active tactile perception behaviors to measure object properties, we designed a learning-based state estimator and an information-gathering controller. Together, these two pieces allowed a simulated robot to discover a pushing strategy for mass estimation and a top-down patting strategy for height estimation, without any prior on what the trajectory should be. This opens the door to learning more complex information-gathering policies, such as those for estimating the center of mass, hardness, friction coefficient, and more.
+
+## REFERENCES
+
+[1] S. J. Lederman and R. L. Klatzky. "Hand movements: A window into haptic object recognition". In: Cognitive Psychology 19.3 (1987), pp. 342-368.
+
+[2] M. A. Lee, B. Yi, R. Martín-Martín, S. Savarese, and J. Bohg. "Multimodal Sensor Fusion with Differentiable Filters". In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2020, pp. 10444-10451.
+
+[3] T. Haarnoja, A. Ajay, S. Levine, and P. Abbeel. "Backprop KF: Learning Discriminative Deterministic State Estimators". In: Advances in Neural Information Processing Systems. Ed. by D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett. Vol. 29. Curran Associates, Inc., 2016.
+
+[4] M. C. Burkhart, D. M. Brandman, B. Franco, L. R. Hochberg, and M. T. Harrison. "The Discriminative Kalman Filter for Bayesian Filtering with Nonlinear and Nongaussian Observation Models". In: Neural Computation 32.5 (2020), pp. 969-1017.
+
+[5] M. Denil, P. Agrawal, T. D. Kulkarni, T. Erez, P. Battaglia, and N. De Freitas. "Learning to perform physics experiments via deep reinforcement learning". In: ICLR (2017).
+
+[6] C. Wang, S. Wang, B. Romero, F. Veiga, and E. Adelson. "SwingBot: Learning Physical Features from In-hand Tactile Exploration for Dynamic Swing-up Manipulation". In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2020, pp. 5633-5640.
+
+[7] S. Särkkä. Bayesian filtering and smoothing. Cambridge university press, 2013.
+
+[8] D. P. Kingma and M. Welling. "Auto-encoding variational bayes". In: International Conference on Learning Representations (ICLR) (2013).
+
+[9] Y. Zhu, J. Wong, A. Mandlekar, and R. Martín-Martín. "robosuite: A Modular Simulation Framework and Benchmark for Robot Learning". In: arXiv preprint arXiv:2009.12293. 2020.
\ No newline at end of file
diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/_4tcqR3nQII/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/_4tcqR3nQII/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..cfdb0c651c7a6997011069f9413a618917824b0f
--- /dev/null
+++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/_4tcqR3nQII/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,195 @@
+§ LEARNING ACTIVE TACTILE PERCEPTION THROUGH BELIEF-SPACE CONTROL
+
+Jean-François Tremblay, Johanna Hansen, David Meger, Francois Hogan, Gregory Dudek
+
+Abstract- Robots operating in an open world can encounter novel objects with unknown physical properties, such as mass, friction, or size. It is desirable to be able to sense these properties through contact-rich interaction before performing downstream tasks with the objects. We propose a method for autonomously learning active tactile perception policies by learning a generative world model leveraging a differentiable Bayesian filtering algorithm and designing an information-gathering model predictive controller. We test the method on two simulated tasks: mass estimation and height estimation. Our method is able to discover policies that gather information about the desired property in an intuitive manner.
+
+§ I. INTRODUCTION
+
+Robots operating in an open world can encounter arbitrary, unseen objects and are expected to manipulate them effectively. To achieve this, robots must have the ability to infer the physical properties of unknown objects through physical interactions. The online measurement of these properties is key for robots to operate robustly in the real-world with open-ended object categories.
+
+Psychology literature refers to the ways humans measure these properties as exploratory procedures [1]. These procedures include, for example, pressing to test for object hardness and lifting to estimate object mass. Such exploratory procedures are challenging to hand-engineer and vary based on the object class. This work focuses on learning exploratory procedures for estimating object properties through belief-space control. Using a combination of 1) learning-based state-estimation to infer the property from a sequence of observations and actions and 2) information-gathering model-predictive control (MPC), we demonstrate that it is possible to learn to execute actions that are informative about the property of interest and to discover exploratory procedures without any human priors.
+
+§ II. RELATED WORKS
+
+§ A. LEARNING FOR STATE-ESTIMATION
+
+There are several works proposing the fusion of Bayesian filtering methods with deep learning, where the dynamics and observation models used are learned neural networks.
+
+Lee et al. [2] provide a good overview of learning Bayesian filtering models for robotics applications and release torchfilter, a library of algorithms for this purpose, which we build on for our belief-space control algorithm.
+
+In [3], the authors present the Backprop Kalman filter, a discriminative approach to filtering. Discriminative filtering does away with learning an observation model (a mapping from state to observation) and instead learns a mapping from observation to state. Here, we argue that learning a generative observation model, while more computationally challenging, is key to predicting future state uncertainty and planning for informative actions.
+
+Burkhart et al. [4] present the discriminative Kalman filter concurrently to [3]. This approach assumes linear dynamics and models the prior over observations as Gaussian. It can only handle stationary observation processes.
+
+§ B. ACTIVE PERCEPTION
+
+Active perception consists of acting in a way that assists perception and can incorporate learning, including the learning methods above. Denil et al. [5] use reinforcement learning in the "Which is Heavier" and "Tower" environments; the goal of the former is to push blocks and, after a certain interaction period, take a "labelling action" to guess which block is heavier. A reward is given if the label is correct. They then train a recurrent deep reinforcement learning policy on that environment. The action space for these problems is constrained and designed so that the blocks are pushed with a fixed force towards their center of mass. While this method enables the robot to effectively retrieve mass using human priors and intuition, our work differs in that the robot is tasked with discovering such behaviors autonomously with an unconstrained action space.
+
+More specifically to robotics, Wang et al. [6] introduce SwingBot, a robotic system that swings up an object with changing physical properties (moments, center of mass). Before the swing up phase, the system follows a hand-engineered exploratory procedure that shakes and tilts the object in the hand to extract the necessary information for a successful swing up. Rather than engineering the exploration phase, we propose a generic framework for extracting such information before accomplishing a given task.
+
+§ III. METHODS
+
+We are in a controlled hidden Markov model (HMM) setting (a partially observable Markov decision process (POMDP) without a reward function), where each observation ${o}_{t}$ gives us partial information about the state of the robot and the object we are interested in. More formally, a controlled HMM is a tuple $\left( {\mathcal{S},\mathcal{A},p\left( {{s}_{t + 1} \mid {s}_{t},{a}_{t}}\right) ,\Omega ,p\left( {{o}_{t} \mid {s}_{t}}\right) }\right)$ , where the state, action, and observation spaces ($\mathcal{S}$ , $\mathcal{A}$ , and $\Omega$ respectively) are ${\mathbb{R}}^{n}$ , ${\mathbb{R}}^{m}$ , and ${\mathbb{R}}^{d}$ respectively. It is important to note that in this context, the state can contain robot pose and velocity, object pose and velocity, object properties, and any other property that describes the environment and is subject to change either during or in between episodes. The state representation will be learned in a self-supervised fashion, as described in § III-A, in such a way that the first element of the state represents the object property of interest:
+
+$$
+{s}_{t} = \left( {{m}_{t},{z}_{t}}\right) ,{m}_{t} \in \mathbb{R},{z}_{t} \in {\mathbb{R}}^{n - 1}. \tag{1}
+$$
+
+We are in an episodic setting with ending timestep $T$ , and where at each episode the object is randomized. For mass estimation as an example, at each episode, an object with a different mass is presented and the goal is to infer the mass of this new object.
+
+In § III-A we describe how to infer the belief state (containing an estimate of the object property of interest) ${b}_{t} \approx p\left( {{s}_{t} \mid {a}_{0},\ldots ,{a}_{t - 1},{o}_{1},\ldots ,{o}_{t}}\right)$ and ${\bar{b}}_{t} \approx p\left( {{s}_{t} \mid {a}_{0},\ldots ,{a}_{t - 1},{o}_{1},\ldots ,{o}_{t - 1}}\right)$ . In § III-B we use that estimate to design an information-gathering controller. Finally, in § III-C we present how to integrate these two components in a data-collection/training and control loop.
+
+§ A. LEARNING-BASED KALMAN FILTER
+
+Here the goal is to learn a dynamics and observation model while performing belief-state inference. The dynamics model representing $p\left( {{s}_{t} \mid {s}_{t - 1},{a}_{t - 1}}\right)$ is
+
+$$
+{s}_{t} = {f}_{\theta }\left( {{s}_{t - 1},{a}_{t - 1}}\right) + {\Sigma }_{\theta }\left( {{s}_{t - 1},{a}_{t - 1}}\right) {w}_{t} \tag{2}
+$$
+
+where ${w}_{t}$ are independent and identically distributed (IID) standard Gaussian random variables in ${\mathbb{R}}^{n}$ .
+
+Generative filtering (as opposed to discriminative filtering [2, 3]) implies learning a generative world-model, able to fully simulate the system and generate observations via the equation
+
+$$
+{o}_{t} = {h}_{\theta }\left( {s}_{t}\right) + {\Gamma }_{\theta }\left( {s}_{t}\right) {v}_{t}. \tag{3}
+$$
+
+where ${v}_{t}$ are IID standard Gaussian random variables in ${\mathbb{R}}^{d}$ . While learning this model can be more challenging in the face of high-dimensional and complex observation spaces (e.g. images), it opens up new avenues for forward belief-space planning.
+
+Using an explicit-likelihood (Gaussian state-space model) setting, we train the model in a self-predictive manner. In (4)-(8), we present the derivation for the loss of the generative observation model. This derivation is adapted from [7], Chapter 12, with action variables integrated into it.
+
+$$
+p\left( {{o}_{1},\ldots ,{o}_{T} \mid \theta ,{a}_{0},\ldots ,{a}_{T - 1}}\right) \tag{4}
+$$
+
+$$
+= \mathop{\prod }\limits_{{t = 1}}^{T}p\left( {{o}_{t} \mid \theta ,{o}_{1},\ldots ,{o}_{t - 1},{a}_{0},\ldots ,{a}_{t - 1}}\right) \tag{5}
+$$
+
+$$
+= \mathop{\prod }\limits_{{t = 1}}^{T}{\int }_{{\mathbb{R}}^{n}}p\left( {{o}_{t} \mid \theta ,{s}_{t}}\right) p\left( {{s}_{t} \mid \theta ,{o}_{1},\ldots ,{o}_{t - 1},{a}_{0},\ldots ,{a}_{t - 1}}\right) d{s}_{t} \tag{6}
+$$
+
+$$
+\approx \mathop{\prod }\limits_{{t = 1}}^{T}{\int }_{{\mathbb{R}}^{n}}p\left( {{o}_{t} \mid \theta ,{s}_{t}}\right) {\bar{b}}_{t}\left( {{s}_{t} \mid \theta }\right) d{s}_{t} \tag{7}
+$$
+
+$$
+= \mathop{\prod }\limits_{{t = 1}}^{T}{\mathbf{E}}_{{s}_{t} \sim \bar{{b}_{t}}\left( {{s}_{t} \mid \theta }\right) }p\left( {{o}_{t} \mid \theta ,{s}_{t}}\right) \tag{8}
+$$
+
+Here ${\bar{b}}_{t}$ is the output of the predict step of our filter with inputs ${b}_{t - 1}$ and ${a}_{t - 1}$ . It is only an approximation of $p\left( {{s}_{t} \mid \theta ,{o}_{1},\ldots ,{o}_{t - 1},{a}_{0},\ldots ,{a}_{t - 1}}\right)$ . Taking the log, applying Jensen's inequality to obtain a lower bound, and computing the empirical mean, we get:
+
+$$
+\log p\left( {{o}_{1},\ldots ,{o}_{T} \mid \theta ,{a}_{0},\ldots ,{a}_{T - 1}}\right) \tag{9}
+$$
+
+$$
+\gtrapprox \mathop{\sum }\limits_{{t = 1}}^{T}\frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}\log p\left( {{o}_{t} \mid \theta ,{s}_{t}^{i}}\right) \;{s}_{t}^{i} \sim {\bar{b}}_{t}\left( {{s}_{t} \mid \theta }\right) \tag{10}
+$$
+
+$$
+\mathrel{\text{ := }} \text{ ELBO } \tag{11}
+$$
+
+Equations (9)-(11) give us a lower bound on the log-likelihood (similar to the ELBO loss in VAEs [8]) with which to train our model, leveraging the differentiable approximate inference used to compute ${\bar{b}}_{t}$ . Because ${\bar{b}}_{t} = \mathcal{N}\left( {{s}_{t} \mid {\bar{\mu }}_{t},{\bar{\Sigma }}_{t}}\right)$ , we can use the reparametrization trick to sample ${s}_{t}^{i}$ by sampling ${\xi }^{i}$ from an $n$ -dimensional standard Gaussian and letting
+
+$$
+{s}_{t}^{i} = {\bar{\mu }}_{t} + {\bar{\Sigma }}_{t}{\xi }^{i} \tag{12}
+$$
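The reparametrized sampling of (12) and the per-timestep Monte-Carlo term of (10) can be sketched numerically. This is a minimal numpy illustration, assuming an observation model with diagonal noise; `sample_belief` and `elbo_term` are illustrative names, not the paper's implementation.

```python
import numpy as np

def sample_belief(mu_bar, Sigma_bar, n_samples, rng):
    """Reparametrization trick (Eq. 12): s_t^i = mu_bar + Sigma_bar @ xi^i,
    with xi^i drawn from an n-dimensional standard Gaussian."""
    xi = rng.standard_normal((n_samples, mu_bar.shape[0]))
    return mu_bar + xi @ Sigma_bar.T

def elbo_term(o_t, mu_bar, Sigma_bar, h, Gamma, n_samples=64, seed=0):
    """One summand of Eq. 10: the Monte-Carlo average of log p(o_t | s_t)
    under the predicted belief, with p(o_t | s_t) Gaussian with mean h(s_t)
    and per-dimension standard deviation Gamma(s_t), as in Eq. 3."""
    rng = np.random.default_rng(seed)
    samples = sample_belief(mu_bar, Sigma_bar, n_samples, rng)
    logps = []
    for s in samples:
        mean, std = h(s), Gamma(s)
        logps.append(np.sum(-0.5 * ((o_t - mean) / std) ** 2
                            - np.log(std) - 0.5 * np.log(2.0 * np.pi)))
    return np.mean(logps)
```

Summing such terms over $t$ (and negating) gives the self-predictive part of the training loss.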
+
+$\theta$ represents the parameters of $f,\Sigma ,h,\Gamma$ , which are neural networks. We jointly perform state estimation and parameter optimization by estimating ${b}_{t} = \left( {{\mu }_{t},{\Sigma }_{t}}\right)$ using an extended Kalman filter (EKF), whose operations are all differentiable (as shown, for example, by Lee et al. [2]), and maximizing the likelihood of the ground-truth object property of interest. For example, if mass is of interest, the loss for an episode whose ground-truth mass is $m$ would be:
+
+$$
+{\mathcal{L}}_{m} = - \mathop{\sum }\limits_{{t = 1}}^{T}\log \mathcal{N}\left( {m \mid {\mu }_{t}^{1},{\Sigma }_{t}^{11}}\right) \tag{13}
+$$
+
+where $\mathcal{N}\left( {\cdot \mid \mu ,\sigma }\right)$ is a Gaussian pdf with mean $\mu$ and variance $\sigma$ . The first element of the state represents the mass, and we maximize its log-likelihood.
+
+The loss we minimize combines the (negated) self-predictive lower bound for the observations and the negative log-likelihood of the mass in the state representation:
+
+$$
+\mathcal{L} = -\mathrm{{ELBO}} + {\mathcal{L}}_{m} \tag{14}
+$$
+
+In practice, we sample sequences of length less than $T$ and initialize the filter using stored beliefs in the dataset, in a truncated backpropagation-through-time fashion.
+
+§ B. INFORMATION-GATHERING MODEL-PREDICTIVE CONTROLLER
+
+The goal is to control the belief-space process in a way that collects information about the property we are trying to perceive. The belief space of continuous systems is generally infinite-dimensional (the space of probability distributions over the state space) and thus intractable to work with using traditional control tools. However, by approximating the belief with a parametric family (a Gaussian in our case), the problem can be formulated as a standard finite-dimensional continuous control problem. This is what we tackle here.
+
+a) Belief dynamics: We can use the learned world model to simulate the belief space dynamics, as illustrated in Figure 1. The key is to be able to use the learned observation model to predict the future uncertainty about the state, rather than merely predict future states.
+
+
+Fig. 1. Illustration of the sampling process for belief-space planning using a generative model. First, states are sampled from the current belief. We can then use our dynamics model and candidate actions to sample future states. These future states are given to our generative observation model to generate observations. We can then feed the generated observations and candidate actions to the state estimator to simulate the belief-space dynamics.
+
+b) Cost function: We want our controller to minimize the entropy $H$ of the system:
+
+$$
+J = \mathop{\sum }\limits_{{t = 1}}^{T}H\left( {b}_{t}^{1}\right) \tag{15}
+$$
+
+to reduce the uncertainty about the property of the object as early as possible in the episode (compared to a final-cost formulation). Since the entropy of a Gaussian with variance ${\Sigma }_{t}^{11}$ is $\frac{1}{2}\log \left( {2\pi e{\Sigma }_{t}^{11}}\right)$ , minimizing this cost for a Gaussian belief ${b}_{t} = \left( {{\mu }_{t},{\Sigma }_{t}}\right)$ is equivalent to minimizing the cost
+
+$$
+J = \mathop{\sum }\limits_{{t = 1}}^{T}\log {\Sigma }_{t}^{11} \tag{16}
+$$
+
+c) Optimizer: In this work, we use a sampling-based optimizer that selects, among randomly generated action sequences, the one minimizing the cost. The sequences are generated using a Gaussian random walk in three dimensions with a standard deviation of ${10}\mathrm{\;{cm}}$ . Following the model-predictive control framework, we only execute the first action of the sequence and then re-optimize.
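The optimizer above can be sketched as random-shooting MPC. This is a minimal illustration, not the paper's code: `simulate_belief` is a stand-in for the learned generative belief rollout of Figure 1, and the belief is a `(mean, covariance)` pair.

```python
import numpy as np

def plan_action(belief, simulate_belief, horizon=10, n_candidates=100,
                step_std=0.10, seed=0):
    """Random-shooting sketch: sample Gaussian-random-walk action sequences,
    score each with the predicted cost J = sum_t log Sigma_t^{11} (Eq. 16),
    and return the first action of the best sequence (MPC style)."""
    rng = np.random.default_rng(seed)
    best_cost, best_first = np.inf, None
    for _ in range(n_candidates):
        # Gaussian random walk in 3D with 10 cm standard deviation per step.
        actions = np.cumsum(rng.normal(0.0, step_std, size=(horizon, 3)), axis=0)
        b, cost = belief, 0.0
        for a in actions:
            b = simulate_belief(b, a)   # stand-in for the learned belief rollout
            cost += np.log(b[1][0, 0])  # entropy surrogate: log-variance of the property
        if cost < best_cost:
            best_cost, best_first = cost, actions[0]
    return best_first
```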
+
+§ C. FULL TRAINING AND CONTROL LOOP
+
+During training, we follow the procedure:
+
+1) Collect data using current controller for one epoch (randomizing the object property of interest), saving the observations, actions and estimated beliefs as well as the ground truth object property for this epoch
+
+2) Train the state estimator using the dataset
+
+3) Update stored beliefs in the dataset (by replaying the actions and observations)
+
+Step 3) does not have to be performed every epoch and can become costly as the dataset grows, but it is important because the stored beliefs are used to initialize the state estimate when performing truncated backpropagation through time during training.
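The three-step procedure can be sketched as follows. This is an illustrative skeleton, not the paper's code: `Episode` and the three callables are hypothetical placeholders for data collection with the current controller, estimator training with truncated BPTT, and belief replay.

```python
from dataclasses import dataclass

@dataclass
class Episode:
    observations: list
    actions: list
    beliefs: list
    ground_truth: float  # e.g. the randomized mass for this episode

def training_loop(collect_episode, fit_estimator, replay_beliefs,
                  n_epochs=3, refresh_every=1):
    """Sketch of the loop above: (1) collect data with the current controller,
    (2) train the state estimator on the dataset, (3) periodically refresh the
    stored beliefs by replaying actions/observations through the updated estimator."""
    dataset = []
    for epoch in range(n_epochs):
        dataset.append(collect_episode())   # step 1: collect and store
        fit_estimator(dataset)              # step 2: train estimator
        if epoch % refresh_every == 0:      # step 3: need not run every epoch
            for ep in dataset:
                ep.beliefs = replay_beliefs(ep.actions, ep.observations)
    return dataset
```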
+
+
+Fig. 2. MAE for the property-estimation tasks at the end of the episode, averaged over 5 runs, as learning progresses. The hand-engineered policy gives an upper bound on what can be achieved when the behavior does not have to be discovered and we simply have to extract the mass from a sequence of sensor readings.
+
+§ IV. EXPERIMENTS
+
+We set up a custom robosuite [9] environment for our experiments. The robot is a Franka Emika arm with a palm-shaped end-effector (as shown in Figure 3) and a force-torque sensor at the wrist. In each episode, a cube of the same size and visual appearance is laid down at the same location, with only its mass changing. We use position control with translation only. The observations are low-level for now: joint pose and velocity, object pose, and force and torque at the wrist.
+
+§ A. MASS ESTIMATION
+
+The first task is to learn to estimate the mass of a cube. The cube has constant size and friction coefficient, but its mass changes randomly between $1\mathrm{\;{kg}}$ and $2\mathrm{\;{kg}}$ between episodes. Because the robot has no gripper, just a palm, it cannot pick up the object, but it should be able to push it and extract the mass from the force and torque readings generated by the push.
+
+§ B. HEIGHT ESTIMATION
+
+The second task is to learn to estimate the height of a block, randomized between $1\mathrm{\;{cm}}$ and ${15}\mathrm{\;{cm}}$ . The force-torque sensor, in this scenario, also acts as a contact detector. The expected behavior is to come down until contact is made, at which point the height can be extracted from forward kinematics (keep in mind that our method has no concept of forward kinematics embedded into it). One subtlety is that the arm must position itself above the box before moving down, as it can otherwise make contact with the table instead.
+
+
+Fig. 3. Demonstration of the learned controller for mass estimation. We can see that it learns to stably push the object to extract mass from force torque readings. Notice how the uncertainty goes down as the arm starts pushing the block.
+
+§ V. RESULTS
+
+Every 5000 environment steps, we run the evaluation procedure. It consists of running 5 episodes with a randomized object property and computing the MAE, where the absolute error is computed using the estimate at the last timestep of the episode. The training curves, showing the evolution of the MAE for the different tasks, are shown in Figure 2. The graph also shows a line for which an information-gathering policy was hand-coded by a human and only the state estimator was trained: straight pushing for mass, and coming down to touch the block for height. It is meant as an approximate upper bound for the information-gathering controller.
+
+We can see that as learning progresses, two things happen concurrently:
+
+1) the agent learns to perform informative actions. In the case of mass estimation, the policy pushes the block stably, as shown in Figure 3. In the case of height estimation, the policy goes down in a straight line until it touches the block.
+
+
+Fig. 4. Demonstration of the learned controller for height estimation. We can see that it learns to come down and adjust its estimate as it moves through free space, until touching the block.
+
+2) the state estimator learns to extract the property of interest from the raw observations generated by the informative actions. For example, during height estimation, the uncertainty remains high until the end-effector touches the block, at which point the estimate locks onto the correct height.
+
+It is important to note that the pushing strategy is in no way encoded in the agent; initial trajectories are simply random walks in the workspace.
+
+§ VI. CONCLUSION
+
+With the goal of discovering active tactile perception behaviors to measure object properties, we designed a learning-based state estimator and an information-gathering controller. Together, these two pieces allowed a simulated robot to discover a pushing strategy for mass estimation and a top-down patting strategy for height estimation, without any prior on what the trajectory should be. This opens the door to learning more complex information-gathering policies, such as those for estimating the center of mass, hardness, friction coefficient, and more.
\ No newline at end of file
diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/kMB2WAfisY/Initial_manuscript_md/Initial_manuscript.md b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/kMB2WAfisY/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..fd64728c5b9ab3f71f7dc8645beb5b8b4a4b7d2a
--- /dev/null
+++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/kMB2WAfisY/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,309 @@
+# Pathologies and Challenges of Using Differentiable Simulators in Policy Optimization for Contact-Rich Manipulation
+
+H.J. Terry Suh, Max Simchowitz, Kaiqing Zhang, Tao Pang, Russ Tedrake
+
+Abstract-Policy search methods in Reinforcement Learning (RL) have shown impressive results in contact-rich tasks such as dexterous manipulation. However, the high variance of zeroth-order Monte-Carlo gradient estimates results in slow convergence and a requirement for a large number of samples. By replacing these zeroth-order gradient estimates with first-order ones, differentiable simulators promise faster computation for policy gradient methods when the model is known. Contrary to this belief, we highlight some of the pathologies of using first-order gradients and show that, in many physical scenarios involving rich contact, using zeroth-order gradients results in better performance. Building on these pathologies and lessons, we propose guidelines for designing differentiable simulators, as well as policy optimization algorithms that use these simulators. By doing so, we hope to reap the benefits of first-order gradients while avoiding the potential pitfalls.
+
+## I. INTRODUCTION
+
+Reinforcement Learning (RL) is fundamentally concerned with the problem of minimizing a stochastic objective,
+
+$$
+\mathop{\min }\limits_{\mathbf{\theta }}F\left( \mathbf{\theta }\right) = \mathop{\min }\limits_{\mathbf{\theta }}{\mathbb{E}}_{\mathbf{w}}f\left( {\mathbf{\theta },\mathbf{w}}\right) .
+$$
+
+Many algorithms in RL rely heavily on zeroth-order Monte-Carlo estimation of the gradient $\nabla F$ [27, 22]. Yet, in contact-rich robotic manipulation, where we have model knowledge and structure of the dynamics, it is possible to differentiate through the physics and obtain exact gradients of $f$ , which can also be used to construct a first-order estimate of $\nabla F$ . The availability of both options raises the question: given access to gradients of $f$ , which estimator should we prefer?
+
+In stochastic optimization, the theoretical benefits of using first-order estimates of $\nabla F$ over zeroth-order ones have mainly been understood through the lens of variance and convergence rates [10, 16]: the first-order estimator often (though not always) has much lower variance than the zeroth-order one, which leads to faster convergence to a local minimum of nonconvex smooth objective functions. However, the landscape of RL objectives that involve long-horizon sequential decision making (e.g. policy optimization) is challenging to analyze, and convergence properties in these landscapes are relatively poorly understood. In particular, contact-rich systems can display complex characteristics including nonlinearities, non-smoothness, and discontinuities (Figure 1) [29, 17, 25].
+
+Nevertheless, lessons from convergence rate analysis tell us that there may be benefits to using the exact gradients even for these complex physical systems. Such ideas have been championed through the term "differentiable simulation", where forward simulation of physics is programmed in a manner that is consistent with automatic differentiation $\left\lbrack {8,{12},{28},{30},9}\right\rbrack$ , or computation of analytic derivatives [3]. These methods have shown promising results in decreasing computation time compared to zeroth-order methods [13, 8, 11, 6, 5, 19].
+
+
+
+Fig. 1. Examples of simple optimization problems on physical systems. The goal is to: A. maximize the $y$ position of the ball after dropping. B. maximize the distance thrown, with a wall that causes inelastic impact. C. maximize the angular momentum transferred to the pivoting bar through collision. Second row: the original objective and the stochastic objective after randomized smoothing.
+
+However, due to the complex characteristics of contact dynamics, we show that the belief that first-order gradients improve performance over zeroth-order ones is not always true for contact-rich manipulation. We illustrate this phenomenon through a couple of pathologies: first, even under sufficient regularity conditions of continuity, the choice of contact model can cause the first-order gradient estimate to have higher variance than the zeroth-order one. In particular, this may occur in approaches that utilize the penalty method [14], which requires stiff dynamics to realistically simulate contact [9].
+
+In addition, we show that many contact-rich systems display nearly or strictly discontinuous behavior in the underlying landscape. The presence of such discontinuities causes the first-order gradient estimator to be biased, while the zeroth-order one remains unbiased. Furthermore, we show that even when continuous approximations are made, such approximations are often stiff and high-Lipschitz. In these settings, the first-order estimator still suffers from what we call empirical bias in finite-sample settings. The compromised behavior of the first-order estimator in the face of more accurate descriptions of contact dynamics hints at a fundamental tension between the realism of the dynamics and the performance of first-order gradients.
+
+From these pathologies, we suggest simulation methods, as well as algorithms, that may improve the efficacy of first-order gradient estimates obtained from differentiable simulation. We advocate the use of implicit contact models that are less stiff and thus yield low-variance first-order gradients. In addition, we show that they can be analytically smoothed to mitigate discontinuities. Finally, we introduce a method to interpolate gradients that escapes these identified pitfalls.
+
+## II. Preliminaries
+
+## A. Policy Optimization Setting
+
+We study a discrete-time, finite-horizon, continuous-state control problem with states $\mathbf{x} \in {\mathbb{R}}^{n}$ , inputs $\mathbf{u} \in {\mathbb{R}}^{m}$ , transition function $\phi : {\mathbb{R}}^{n} \times {\mathbb{R}}^{m} \rightarrow {\mathbb{R}}^{n}$ , and horizon $H \in \mathbb{N}$ . Given a sequence of costs ${c}_{h} : {\mathbb{R}}^{n} \times {\mathbb{R}}^{m} \rightarrow \mathbb{R}$ , a family of policies ${\pi }_{h}\left( {\cdot , \cdot }\right) : {\mathbb{R}}^{n} \times {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{m}$ parameterized by $\mathbf{\theta } \in {\mathbb{R}}^{d}$ , and a sequence of injected noise terms ${\mathbf{w}}_{1 : H} \in {\left( {\mathbb{R}}^{m}\right) }^{H}$ , we define the cost-to-go functions
+
+$$
+{V}_{h}\left( {{\mathbf{x}}_{h},{\mathbf{w}}_{h : H},\mathbf{\theta }}\right) = \mathop{\sum }\limits_{{{h}^{\prime } = h}}^{H}{c}_{{h}^{\prime }}\left( {{\mathbf{x}}_{{h}^{\prime }},{\mathbf{u}}_{{h}^{\prime }}}\right) ,
+$$
+
+$$
+\text{s.t.}{\mathbf{x}}_{{h}^{\prime } + 1} = \phi \left( {{\mathbf{x}}_{{h}^{\prime }},{\mathbf{u}}_{{h}^{\prime }}}\right) ,{\mathbf{u}}_{{h}^{\prime }} = \pi \left( {{\mathbf{x}}_{{h}^{\prime }},\mathbf{\theta }}\right) + {\mathbf{w}}_{{h}^{\prime }},{h}^{\prime } \geq h\text{.}
+$$
+
+Our aim is to minimize the policy optimization objective
+
+$$
+F\left( \mathbf{\theta }\right) \mathrel{\text{:=}} {\mathbb{E}}_{{\mathbf{x}}_{1} \sim \rho }{\mathbb{E}}_{{\mathbf{w}}_{h}\overset{\text{ i.i.d. }}{ \sim }p}{V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{w}}_{1 : H},\mathbf{\theta }}\right) , \tag{1}
+$$
+
+where $\rho$ is a distribution over initial states ${\mathbf{x}}_{1}$ , and ${\mathbf{w}}_{1},\ldots ,{\mathbf{w}}_{H}$ are i.i.d. according to $p$ , which we assume to be a zero-mean Gaussian with covariance ${\sigma }^{2}{I}_{m}$ .
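The cost-to-go inside the objective (1) is simply a noisy rollout. A minimal sketch, with illustrative names and a time-invariant cost for simplicity:

```python
def cost_to_go(x1, w_seq, theta, phi, policy, cost):
    """V_1(x_1, w_{1:H}, theta): roll out x_{h+1} = phi(x_h, u_h) with noisy
    policy inputs u_h = pi(x_h, theta) + w_h, summing the per-step costs."""
    x, total = x1, 0.0
    for w_h in w_seq:
        u = policy(x, theta) + w_h
        total += cost(x, u)
        x = phi(x, u)
    return total
```

Averaging this quantity over sampled initial states and noise sequences gives a Monte-Carlo estimate of $F(\mathbf{\theta})$.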
+
+## B. Zeroth-Order Estimator
+
+The policy gradient can be estimated only using samples of the function values [31].
+
+Definition II.1. Given a single zeroth-order estimate of the policy gradient ${\widehat{\nabla }}^{\left\lbrack 0\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right)$ , we define the zeroth-order batched gradient (ZoBG) ${\bar{\nabla }}^{\left\lbrack 0\right\rbrack }F\left( \mathbf{\theta }\right)$ as the sample mean,
+
+$$
+{\widehat{\nabla }}^{\left\lbrack 0\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) \mathrel{\text{:=}} \frac{1}{{\sigma }^{2}}{V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{w}}_{1 : H}^{i},\mathbf{\theta }}\right) \left\lbrack {\mathop{\sum }\limits_{{h = 1}}^{H}{\mathrm{D}}_{\mathbf{\theta }}\pi {\left( {\mathbf{x}}_{h}^{i},\mathbf{\theta }\right) }^{\top }{\mathbf{w}}_{h}^{i}}\right\rbrack
+$$
+
+$$
+{\bar{\nabla }}^{\left\lbrack 0\right\rbrack }F\left( \mathbf{\theta }\right) \mathrel{\text{:=}} \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\widehat{\nabla }}^{\left\lbrack 0\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) ,
+$$
+
+where ${\mathbf{x}}_{h}^{i}$ is the state at time $h$ of the trajectory induced by the noise ${\mathbf{w}}_{1 : H}^{i}$ , $i$ is the index of the sample trajectory, and ${\mathrm{D}}_{\mathbf{\theta }}\pi$ is the Jacobian matrix $\partial \pi /\partial \mathbf{\theta } \in {\mathbb{R}}^{m \times d}$ .
+
+The hat notation denotes a per-sample Monte-Carlo estimate, and the bar notation a sample mean. The ZoBG is also referred to as the REINFORCE [31], score-function, or likelihood-ratio gradient. In practice, a baseline term $b$ is subtracted from ${V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{w}}_{1 : H}^{i},\mathbf{\theta }}\right)$ for variance reduction. One example is the zero-noise rollout baseline $b = {V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{0}}_{1 : H},\mathbf{\theta }}\right)$ .
+
+## C. First-Order Estimator
+
+In differentiable simulators, the gradients of the dynamics $\phi$ and costs ${c}_{h}$ are available almost surely (i.e., with probability one). Hence, one may compute the exact gradients ${\nabla }_{\mathbf{\theta }}{V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{w}}_{1 : H},\mathbf{\theta }}\right)$ by automatic differentiation and average them to estimate $\nabla F\left( \mathbf{\theta }\right)$ .
+
+Definition II.2. Given a single first-order gradient estimate ${\widehat{\nabla }}^{\left\lbrack 1\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right)$ , we define the first-order batched gradient (FoBG) as the sample mean:
+
+$$
+{\widehat{\nabla }}^{\left\lbrack 1\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) \mathrel{\text{:=}} {\nabla }_{\mathbf{\theta }}{V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{w}}_{1 : H}^{i},\mathbf{\theta }}\right)
+$$
+
+$$
+{\bar{\nabla }}^{\left\lbrack 1\right\rbrack }F\left( \mathbf{\theta }\right) \mathrel{\text{:=}} \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\widehat{\nabla }}^{\left\lbrack 1\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) .
+$$
+
+The FoBG is also referred to as the reparametrization gradient [15], the pathwise derivative [21], or Backpropagation Through Time (BPTT).
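On a smooth one-step toy problem the two estimators agree in expectation, which makes the later pathologies easy to isolate. A minimal numpy sketch (illustrative, not from the paper), using $f(\mathbf{\theta}, \mathbf{w}) = (\mathbf{\theta} + \mathbf{w})^2$ so that $\nabla F(\mathbf{\theta}) = 2\mathbf{\theta}$:

```python
import numpy as np

def compare_estimators(theta, sigma=0.1, n=10_000, seed=0):
    """ZoBG vs. FoBG for the one-step problem f(theta, w) = (theta + w)^2,
    where F(theta) = theta^2 + sigma^2 and grad F(theta) = 2 * theta."""
    w = np.random.default_rng(seed).normal(0.0, sigma, n)
    f = (theta + w) ** 2
    baseline = theta ** 2                            # zero-noise rollout baseline
    zobg = np.mean((f - baseline) * w) / sigma ** 2  # score-function estimate
    fobg = np.mean(2.0 * (theta + w))                # average of exact per-sample gradients
    return zobg, fobg
```

Both estimates concentrate around $2\theta$ here; the FoBG does so with far fewer samples, which is the usual argument in its favor.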
+
+## III. PITFALLS OF FIRST-ORDER GRADIENTS
+
+In this section, we show pathologies of contact-rich systems for which the FoBG can perform worse than the ZoBG.
+
+## A. Bias under discontinuities
+
+Under standard regularity conditions, it is well-known that both estimators are unbiased estimators of the true gradient $\nabla F\left( \mathbf{\theta }\right)$ . However, care must be taken to define these conditions precisely, as such conditions are broken for contact-rich systems. Fortunately, the ZoBG is still unbiased under mild assumptions,
+
+$$
+\mathbb{E}\left\lbrack {{\bar{\nabla }}^{\left\lbrack 0\right\rbrack }F\left( \mathbf{\theta }\right) }\right\rbrack = \nabla F\left( \mathbf{\theta }\right) .
+$$
+
+In contrast, the FoBG requires strong continuity conditions to be unbiased. Under Lipschitz continuity, however, it is indeed unbiased.
+
+Lemma III.1. If $\phi \left( {\cdot , \cdot }\right)$ is locally Lipschitz and ${c}_{h}\left( {\cdot , \cdot }\right) \in {C}^{\infty }$ , then ${\bar{\nabla }}^{\left\lbrack 1\right\rbrack }F\left( \mathbf{\theta }\right)$ is defined almost surely, and
+
+$$
+\mathbb{E}\left\lbrack {{\bar{\nabla }}^{\left\lbrack 1\right\rbrack }F\left( \mathbf{\theta }\right) }\right\rbrack = \nabla F\left( \mathbf{\theta }\right) .
+$$
+
+Lemma III.1 tells us that the FoBG can fail when applied to discontinuous landscapes. We illustrate a simple case of bias through a counterexample.
+
+Example III.2 (Heaviside) [2, 25]. Consider the Heaviside function,
+
+$$
+f\left( {\mathbf{\theta },\mathbf{w}}\right) = H\left( {\mathbf{\theta } + \mathbf{w}}\right) ,\;H\left( t\right) = {\mathbb{1}}_{t \geq 0}
+$$
+
+whose stochastic objective becomes the error function
+
+$$
+F\left( \mathbf{\theta }\right) = {\mathbb{E}}_{\mathbf{w}}\left\lbrack {H\left( {\mathbf{\theta } + \mathbf{w}}\right) }\right\rbrack = \operatorname{erf}\left( {-\mathbf{\theta };{\sigma }^{2}}\right) .
+$$
+
+However, since ${\nabla }_{\mathbf{\theta }}H\left( {\mathbf{\theta } + \mathbf{w}}\right) = 0$ for all $\mathbf{\theta } \neq - \mathbf{w}$ , every first-order sample is zero: the estimator never sees the Dirac delta at $\mathbf{\theta } = - \mathbf{w}$ , and the Law of Large Numbers does not rescue the estimate. Hence the FoBG is biased, as the gradient of the stochastic objective, a Gaussian density, is non-zero at every $\mathbf{\theta }$ . We further note that the empirical variance of the FoBG estimator in this example is zero. The ZoBG, on the other hand, escapes this problem and provides an unbiased estimate, since its function-value samples are aggregated over finite intervals that capture the mass of the delta.
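This failure is easy to reproduce numerically. A sketch (illustrative, not the paper's code): at $\theta = 0$ the true gradient equals the Gaussian density $1/(\sqrt{2\pi}\sigma)$, which the ZoBG recovers while every first-order sample is zero.

```python
import numpy as np

def heaviside_gradients(theta=0.0, sigma=0.1, n=10_000, seed=0):
    """Example III.2 numerically: the ZoBG recovers the slope of the smoothed
    objective F, while the per-sample gradient of the Heaviside is zero
    almost surely, so the FoBG is identically zero."""
    w = np.random.default_rng(seed).normal(0.0, sigma, n)
    f = (theta + w >= 0.0).astype(float)  # H(theta + w)
    zobg = np.mean(f * w) / sigma ** 2    # score-function estimate (no baseline)
    fobg = 0.0                            # grad_theta H(theta + w) = 0 for w != -theta
    return zobg, fobg
```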
+
+
+
+Fig. 2. From left: Heaviside objective $f\left( {\mathbf{\theta },\mathbf{w}}\right)$ and stochastic objective $F\left( \mathbf{\theta }\right)$ , empirical values of the gradient estimates, and their empirical variance.
+
+## B. The "Empirical Bias" Phenomenon
+
+One might argue that strict discontinuity is simply an artifact of modeling choice in simulators; indeed, many simulators approximate discontinuous dynamics as a limit of continuous ones with growing Lipschitz constant [9, 7]. In this section, we explain how this can lead to a phenomenon we call empirical bias, where the FoBG appears to have low empirical variance but is still highly inaccurate; i.e., it "looks" biased when a finite number of samples is used. Through this phenomenon, we claim that the performance degradation of first-order gradient estimates does not require strict discontinuity, but is also present in continuous yet stiff approximations of discontinuities.
+
+Definition III.3 (Empirical bias). Let $\mathbf{z}$ be a vector-valued random variable with $\mathbb{E}\left\lbrack {\parallel \mathbf{z}\parallel }\right\rbrack < \infty$ . We say $\mathbf{z}$ has $\left( {\beta ,\Delta , S}\right)$ -empirical bias if there is a random event $\mathcal{E}$ such that $\Pr \left\lbrack \mathcal{E}\right\rbrack \geq 1 - \beta$ , $\parallel \mathbb{E}\left\lbrack {\mathbf{z} \mid \mathcal{E}}\right\rbrack - \mathbb{E}\left\lbrack \mathbf{z}\right\rbrack \parallel \geq \Delta$ , and $\parallel \mathbf{z} - \mathbb{E}\left\lbrack {\mathbf{z} \mid \mathcal{E}}\right\rbrack \parallel \leq S$ almost surely on $\mathcal{E}$ .
+
+A paradigmatic example of empirical bias is a random scalar $\mathbf{z}$ which takes the value 0 with probability $1 - \beta$ , and $\frac{1}{\beta }$ with probability $\beta$ . Setting $\mathcal{E} = \{ \mathbf{z} = 0\}$ , we see $\mathbb{E}\left\lbrack \mathbf{z}\right\rbrack = 1$ , $\mathbb{E}\left\lbrack {\mathbf{z} \mid \mathcal{E}}\right\rbrack = 0$ , and so $\mathbf{z}$ satisfies $\left( {\beta ,1,0}\right)$ -empirical bias. Note that $\operatorname{Var}\left\lbrack \mathbf{z}\right\rbrack = 1/\beta - 1$ ; in fact, small- $\beta$ empirical bias implies large variance more generally.
+
+Lemma III.4. Suppose $\mathbf{z}$ has $\left( {\beta ,\Delta , S}\right)$ -empirical bias. Then $\operatorname{Var}\left\lbrack \mathbf{z}\right\rbrack \geq \frac{{\Delta }_{0}^{2}}{\beta }$ , where ${\Delta }_{0} \mathrel{\text{:=}} \max \{ 0,\left( {1 - \beta }\right) \Delta - \beta \parallel \mathbb{E}\left\lbrack \mathbf{z}\right\rbrack \parallel \}$ .
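The paradigmatic example above is easy to check numerically. A small sketch (illustrative names, not from the paper):

```python
import numpy as np

def empirical_bias_toy(beta, n, seed=0):
    """Sample mean of z, where z = 1/beta with probability beta and 0 otherwise,
    so E[z] = 1 and Var[z] = 1/beta - 1. With n << 1/beta samples, the sample
    mean is typically exactly 0: the estimate 'looks' converged at 0, far from E[z]."""
    u = np.random.default_rng(seed).random(n)
    z = np.where(u < beta, 1.0 / beta, 0.0)
    return z.mean()
```

With few samples the rare, huge outcome is never observed, so the estimate sits at 0 with zero empirical variance; with many samples the mean approaches 1.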
+
+Empirical bias naturally arises for discontinuities or stiff continuous approximations.
+
+Example III.5 (Coulomb friction). The Coulomb model of friction is discontinuous in the relative tangential velocity between two bodies. In many simulators [9, 4], it is common to consider a continuous approximation instead. We idealize such approximations through a continuous, piecewise-linear relaxation of the Heaviside, parametrized by the width $\nu$ of the middle linear region (which corresponds to the slip tolerance):
+
+$$
+{\bar{H}}_{\nu }\left( t\right) = \left\{ {\begin{array}{ll} {2t}/\nu & \text{ if }\left| t\right| \leq \nu /2 \\ {2H}\left( t\right) - 1 & \text{ else } \end{array}.}\right.
+$$
+
+In practice, lower values of $\nu$ lead to more realistic behavior in simulation [28], but this has adverse effects on empirical bias. Considering ${f}_{\nu }\left( {\mathbf{\theta },\mathbf{w}}\right) = {\bar{H}}_{\nu }\left( {\mathbf{\theta } + \mathbf{w}}\right)$ , we have ${F}_{\nu }\left( \mathbf{\theta }\right) = {\mathbb{E}}_{\mathbf{w}}\left\lbrack {{\bar{H}}_{\nu }\left( {\mathbf{\theta } + \mathbf{w}}\right) }\right\rbrack = \operatorname{erf}\left( {\nu /2 - \theta ;{\sigma }^{2}}\right)$ . In particular, setting ${c}_{\sigma } \mathrel{\text{:=}} \frac{1}{\sqrt{2\pi }\sigma }$ , at $\mathbf{\theta } = \nu /2$ we have $\nabla {F}_{\nu }\left( \mathbf{\theta }\right) = {c}_{\sigma }$ , whereas, with probability at least $1 - {c}_{\sigma }\nu$ , $\nabla {f}_{\nu }\left( {\mathbf{\theta },\mathbf{w}}\right) = 0$ . Hence, the FoBG has $\left( {{c}_{\sigma }\nu ,{c}_{\sigma },0}\right)$ -empirical bias, and its variance scales with $1/\nu$ as $\nu \rightarrow 0$ . The limiting $\nu = 0$ case, corresponding to the Coulomb model, recovers the Heaviside of Example III.2: the high empirical bias and variance become true bias in expectation (with, surprisingly, zero variance). We empirically illustrate this effect in Figure 3. We also note that more complicated models of friction (e.g., those incorporating the Stribeck effect [24]) would suffer similar problems.
+
+Example III.6 (Discontinuity in geometry). Another source of discontinuity in simulators is the discontinuity of surface normals. We show this in Figure 4, where balls that collide with a rectangular geometry create discontinuities. It is possible to make a continuous relaxation [7] by considering a smoother geometry, depicted by the addition of the dome in Figure 4. While this makes the FoBG no longer asymptotically biased, the stiffness of the relaxation still results in high empirical bias.
+
+
+
+Fig. 3. Top row: illustration of the physical system and the relaxation of Coulomb friction. Bottom row: the values of the estimators and their empirical variances as a function of the number of samples and the slip tolerance. Values of the FoBG are zero in low-sample regimes due to empirical bias. As $\nu \rightarrow 0$ , the empirical variance of the FoBG goes to zero, which shows as empty in the log scale. The expected variance, however, blows up as it scales with $1/\nu$ .
+
+
+
+Fig. 4. Left: example of ball hitting the wall. The green trajectories hit a rectangular wall, displaying discontinuities. Right: the pink trajectories collide with the dome on top, and show continuous but stiff behavior.
+
+## C. High Variance from Stiffness
+
+Even without the phenomenon of empirical bias, we show that certain choices of contact model can cause the FoBG to suffer from high variance. In particular, approximating rigid contact with high-stiffness spring models (i.e., the penalty method) can cause the gradient to have a high norm.
+
+Example III.7 (Pushing with stiff contact). We demonstrate this phenomenon through a simple 1D pushing example in Figure 5, where the ZoBG has lower variance than the FoBG as stiffness increases, until numerical semi-implicit integration becomes unstable under a fixed timestep.
+
+
+
+Fig. 5. The variance of the gradient of ${V}_{1}$ , with running cost ${c}_{h} = \parallel {\mathbf{x}}_{h} - {\mathbf{x}}^{g}{\parallel }^{2}$ , with respect to the input trajectory as the spring constant $k$ increases. The mass $m$ and damping coefficient $c$ are fixed.
+
+## IV. TACKLING THE PATHOLOGIES: A PATH FORWARD
+
+In this section, we comment on methods that can alleviate the pathologies that were found in the previous section.
+
+## A. Less Stiff Formulations of Contact Dynamics
+
+In order to avoid high variance of the FoBG, we must ensure that the norm of the gradient stays low. Yet, as illustrated by Example III.7, approximating contact using stiff springs, as done in works that model contact with the penalty method, inevitably results in a trade-off between stiffness and physical realism.
+
+Therefore, we advocate less stiff contact models based on implicit time-stepping [23], whose per-time-step computation relies on solving optimization problems such as the Linear Complementarity Problem (LCP), which can be further relaxed into convex Quadratic Programs (QPs) [1]. The derivatives of such systems can be obtained via the implicit function theorem by differentiating through the optimality conditions. We give one example of such a convex QP below. Correctly using gradients from implicit time-stepping can vastly improve the efficacy of the FoBG by ensuring that their norm stays reasonably bounded.
+
+Example IV.1. (Implicit Time-Stepping for Pushing). We illustrate implicit time-stepping with a 1-dimensional example consisting of a point mass and a wall. The state of the system is $\left( {x, v}\right) \in {\mathbb{R}}^{2}$ , where $x$ is the position and $v$ the velocity of the point mass. The non-penetrable wall occupies $x \leq 0$ .
+
+The equations of motion of the system are
+
+$$
+m\left( {{v}_{ + } - v}\right) = u + \lambda \tag{2a}
+$$
+
+$$
+{x}_{ + } = x + h{v}_{ + }, \tag{2b}
+$$
+
+$$
+0 \leq {x}_{ + } \bot \lambda \geq 0, \tag{2c}
+$$
+
+where $\left( {{x}_{ + },{v}_{ + }}\right)$ represents the system state at the next time step; $h$ is the step size; $m$ is the mass; $u$ is the impulse applied to the point mass by actuation; and $\lambda$ is the impulse due to contact with the wall. Equation (2a) is the momentum balance of the point mass. Constraint (2c) is the complementarity constraint that ensures the wall can only push on the point mass when the two are in contact. We can indeed see that the equations of motion (2) are the KKT conditions of the following QP:
+
+$$
+\mathop{\operatorname{minimize}}\limits_{{v}_{ + }}\;\frac{1}{2}m{\left( {v}_{ + } - v\right) }^{2} - u{v}_{ + } \tag{3a}
+$$
+
+$$
+\text{subject to}\;\frac{x}{h} + {v}_{ + } \geq 0 \tag{3b}
+$$
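Because the QP (3) is one-dimensional, its solution can be written in closed form, which makes the scheme easy to sketch (the helper below is our own illustration, not any particular simulator's API): the optimal ${v}_{+}$ is the unconstrained minimizer $v + u/m$ projected onto the constraint $x/h + {v}_{+} \geq 0$ , and the contact impulse $\lambda$ is recovered from (2a).

```python
def implicit_step(x, v, u, m=1.0, h=0.01):
    """One implicit time step for the point mass and wall of Example IV.1."""
    v_plus = max(v + u / m, -x / h)   # projection solves the 1D QP (3)
    x_plus = x + h * v_plus           # position update (2b)
    lam = m * (v_plus - v) - u        # contact impulse from momentum balance (2a)
    return x_plus, v_plus, lam

# Pushing into the wall: the mass sticks (v_plus = 0) and the wall pushes back.
print(implicit_step(0.0, 0.0, u=-1.0))
# Pulling away: free motion with zero contact impulse, per complementarity (2c).
print(implicit_step(0.0, 0.0, u=1.0))
```

Note that $\partial {x}_{+}/\partial u$ is either $0$ (in contact) or $h/m$ (free motion), so the gradient norm stays bounded with no stiffness parameter in sight.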
+
+## B. Smooth Analytic Approximations of Dynamics
+
+Although we showed that strict discontinuity is not required for the performance of the FoBG to degrade, soft relaxations of discontinuities still behave much better. To this end, we also advocate analytically providing a smooth surrogate of the discontinuous dynamics in simulation, and gradually tightening the relaxation during policy optimization. To overcome the pathologies of using FoBGs, we believe that providing such a feature should be a requirement for differentiable simulators to be useful in policy optimization.
+
+
+
+Fig. 6. Left: Visualization of the wall and block examples in Example IV.1 and Example IV.2. Note that neither scheme requires a spring constant $k$ , whereas the penalty method does; this alleviates problems associated with stiffness of the gradients. Right: Results of simulating the methods of Example IV.1 and Example IV.2 at $\left( {x, v}\right) = \left( {0,0}\right)$ . The resulting positions ${x}_{ + }$ are plotted as functions of the input impulse $u$ .
+
+Previous works have provided smooth surrogates to the penalty method of contact $\left\lbrack {9,{13},{32}}\right\rbrack$ , which reasonably address discontinuities yet still suffer from stiffness. Instead, we show that a smooth approximation can be made to implicit time-stepping methods by using common constraint relaxation techniques such as the log-barrier function of interior-point methods.
+
+Example IV.2. (Smooth Relaxation for Pushing). The optimization-based dynamics of Example IV.1 can be smoothed by replacing the non-penetration constraint (3b) with an additional log-barrier term in the objective (3a):
+
+$$
+\mathop{\operatorname{minimize}}\limits_{{v}_{ + }}\frac{1}{2}m{\left( {v}_{ + } - v\right) }^{2} - u{v}_{ + } - \frac{1}{\kappa }\log \left( {\frac{x}{h} + {v}_{ + }}\right) , \tag{4}
+$$
+
+which is an unconstrained convex optimization program, whose optimality condition can be obtained by setting the derivative of the objective (4) to zero:
+
+$$
+m\left( {{v}_{ + } - v}\right) = u + {\left\lbrack \kappa \left( x/h + {v}_{ + }\right) \right\rbrack }^{-1}. \tag{5}
+$$
+
+The optimality condition (5) can be interpreted as the momentum balance of the point mass, but the wall now acts as a force field, exerting on the object a force whose magnitude is inversely proportional to the distance to the wall. The strength of the force field is controlled by the log-barrier weight $\kappa$ . As $\kappa \rightarrow \infty$ , the solution of (4) converges to that of (3).
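A minimal sketch of this relaxation (with hypothetical parameter values of our choosing): multiplying both sides of the stationarity condition (5) by $\kappa \left( {x/h + {v}_{ + }}\right)$ yields a quadratic in ${v}_{ + }$ , and the root satisfying $x/h + {v}_{ + } > 0$ is the minimizer of (4). As $\kappa$ grows, the solution approaches that of the hard QP (3).

```python
import numpy as np

def barrier_step(x, v, u, kappa, m=1.0, h=0.01):
    """Minimize the log-barrier objective (4) of Example IV.2.

    Stationarity (5) gives m*kappa*(v_plus - b)*(a + v_plus) = 1 with
    a = x/h and b = v + u/m; the larger root always satisfies a + v_plus > 0."""
    a, b = x / h, v + u / m
    roots = np.roots([m * kappa, m * kappa * (a - b), -m * kappa * a * b - 1.0])
    v_plus = max(r.real for r in roots if a + r.real > 0)
    return x + h * v_plus, x / h + v_plus and v_plus

# Pushing into the wall at contact: the barrier keeps v_plus slightly positive,
# converging to the rigid solution v_plus = 0 as kappa grows.
for kappa in (1e1, 1e3, 1e8):
    print(kappa, barrier_step(0.0, 0.0, u=-1.0, kappa=kappa)[1])
```

The gradient of this step is smooth in $u$ everywhere, which is exactly the property the FoBG needs.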
+
+## C. Gradient Interpolation
+
+Finally, we mention some recent algorithmic advances. If we can compute both the FoBG and the ZoBG using uncorrelated samples, we can consider an interpolated gradient,
+
+$$
+{\widehat{\nabla }}^{\left\lbrack \alpha \right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) \mathrel{\text{:=}} \alpha {\widehat{\nabla }}^{\left\lbrack 0\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) + \left( {1 - \alpha }\right) {\widehat{\nabla }}^{\left\lbrack 1\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) \tag{6}
+$$
+
+where $\alpha \in \left\lbrack {0,1}\right\rbrack$ . Previous works on gradient interpolation $\left\lbrack {{20},{18}}\right\rbrack$ show that we can optimally interpolate the two gradients based on their empirical variance. However, as Example III.2 shows, the empirical variance can be an unreliable estimate if the FoBG is biased under discontinuities.
+
+To mitigate this problem, we can test the correctness of the FoBG against the unbiased ZoBG by constructing a confidence interval based on samples of the ZoBG, and choosing an optimal value of $\alpha$ subject to a chance constraint on the allowable value of the interpolated gradient [26].
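As a concrete sketch, the simple inverse-variance weighting below (our own illustrative rule, not the chance-constrained scheme of [26]) implements the interpolation (6) from batches of per-sample estimates:

```python
import numpy as np

def interpolated_gradient(fobg_samples, zobg_samples):
    """Interpolate batched gradients per Eq. (6).

    fobg_samples, zobg_samples: (N, d) arrays of per-sample first- and
    zeroth-order estimates. alpha weights the ZoBG; here it is chosen by a
    simple inverse-variance rule in the spirit of [20, 18], which is
    unreliable exactly when the FoBG is biased (Example III.2)."""
    var_f = fobg_samples.var(axis=0, ddof=1).sum()
    var_z = zobg_samples.var(axis=0, ddof=1).sum()
    alpha = var_f / (var_f + var_z + 1e-12)
    grad = (alpha * zobg_samples.mean(axis=0)
            + (1.0 - alpha) * fobg_samples.mean(axis=0))
    return alpha, grad
```

When the FoBG samples agree exactly (zero empirical variance), this rule puts all weight on them, which is precisely the failure mode of Example III.2; [26] instead bounds $\alpha$ with a confidence interval built from ZoBG samples.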
+
+## REFERENCES
+
+[1] Mihai Anitescu. Optimization-based simulation of nonsmooth rigid multibody dynamics. Mathematical Programming, 105(1):113-143, 2006.
+
+[2] Sai Praveen Bangaru, Jesse Michel, Kevin Mu, Gilbert Bernstein, Tzu-Mao Li, and Jonathan Ragan-Kelley. Systematically differentiating parametric discontinuities. ACM Trans. Graph., 40(4), July 2021. ISSN 0730-0301. doi: 10.1145/3450626.3459775.
+
+[3] Justin Carpentier, Guilhem Saurel, Gabriele Buondonno, Joseph Mirabel, Florent Lamiraux, Olivier Stasse, and Nicolas Mansard. The Pinocchio C++ library: A fast and flexible implementation of rigid body dynamics algorithms and their analytical derivatives. In 2019 IEEE/SICE International Symposium on System Integration (SII), pages 614-619, 2019. doi: 10.1109/SII.2019.8700380.
+
+[4] Alejandro M. Castro, Ante Qu, Naveen Kuppuswamy, Alex Alspach, and Michael Sherman. A transition-aware method for the simulation of compliant contact with regularized friction. IEEE Robotics and Automation Letters, 5(2):1859-1866, Apr 2020. ISSN 2377-3774. doi: 10.1109/lra.2020.2969933. URL http://dx.doi.org/10.1109/LRA.2020.2969933.
+
+[5] Filipe de Avila Belbute-Peres, Kevin Smith, Kelsey Allen, Josh Tenenbaum, and J. Zico Kolter. End-to-end differentiable physics for learning and control. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/842424a1d0595b76ec4fa03c46e8d755-Paper.pdf.
+
+[6] Tao Du, Yunfei Li, Jie Xu, Andrew Spielberg, Kui Wu, Daniela Rus, and Wojciech Matusik. D3PG: Deep differentiable deterministic policy gradients, 2020. URL https://openreview.net/forum?id=rkxZCJrtwS.
+
+[7] Ryan Elandt, Evan Drumwright, Michael Sherman, and A. Ruina. A pressure field model for fast, robust approximation of net contact force and moment between nominally rigid objects. IROS, pages 8238-8245, 2019.
+
+[8] C. Daniel Freeman, Erik Frey, Anton Raichuk, Sertan Girgin, Igor Mordatch, and Olivier Bachem. Brax - a differentiable physics engine for large scale rigid body simulation. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1), 2021. URL https://openreview.net/forum?id=VdvDlnnjzIN.
+
+[9] Moritz Geilinger, David Hahn, Jonas Zehnder, Moritz Bächer, Bernhard Thomaszewski, and Stelian Coros. Add: Analytically differentiable dynamics for multi-body systems with frictional contact, 2020.
+
+[10] Saeed Ghadimi and Guanghui Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341- 2368, 2013. doi: 10.1137/120880811. URL https://doi.org/10.1137/120880811.
+
+[11] Paula Gradu, John Hallman, Daniel Suo, Alex Yu, Naman Agarwal, Udaya Ghai, Karan Singh, Cyril Zhang, Anirudha Majumdar, and Elad Hazan. Deluca - a differentiable control library: Environments, methods, and benchmarking, 2021.
+
+[12] Yuanming Hu, Luke Anderson, Tzu-Mao Li, Qi Sun, Nathan Carr, Jonathan Ragan-Kelley, and Frédo Durand. Difftaichi: Differentiable programming for physical simulation. ICLR, 2020.
+
+[13] Zhiao Huang, Yuanming Hu, Tao Du, Siyuan Zhou, Hao Su, Joshua B. Tenenbaum, and Chuang Gan. Plasticinelab: A soft-body manipulation benchmark with differentiable physics. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=xCcdBRQEDW.
+
+[14] K. H. Hunt and F. R. E. Crossley. Coefficient of Restitution Interpreted as Damping in Vibroimpact. Journal of Applied Mechanics, 42(2):440-445, 06 1975. ISSN 0021-8936. doi: 10.1115/1.3423596. URL https://doi.org/10.1115/1.3423596.
+
+[15] Durk P Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015.
+
+[16] Shakir Mohamed, Mihaela Rosca, Michael Figurnov, and Andriy Mnih. Monte Carlo gradient estimation in machine learning. Journal of Machine Learning Research, 21(132):1-62, 2020.
+
+[17] Matthew T. Mason. Mechanics of Robotic Manipulation. The MIT Press, 06 2001. ISBN 9780262256629. doi: 10.7551/mitpress/4527.001.0001. URL https://doi.org/10.7551/mitpress/4527.001.0001.
+
+[18] Luke Metz, C. Daniel Freeman, Samuel S. Schoenholz, and Tal Kachman. Gradients are not all you need, 2021.
+
+[19] Miguel Angel Zamora Mora, Momchil Peychev, Sehoon Ha, Martin Vechev, and Stelian Coros. Pods: Policy optimization via differentiable simulation. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 7805-7817. PMLR, 18-24 Jul 2021. URL https://proceedings.mlr.press/v139/mora21a.html.
+
+[20] Paavo Parmas, Carl Edward Rasmussen, Jan Peters, and Kenji Doya. PIPPS: Flexible model-based policy search robust to the curse of chaos. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 4065-4074. PMLR, 10-15 Jul 2018.
+
+[21] John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation using stochastic computation graphs. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015.
+
+[22] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms, 2017.
+
+[23] David Stewart and J.C. (Jeff) Trinkle. An implicit time-stepping scheme for rigid body dynamics with Coulomb friction. volume 1, pages 162-169, 01 2000. doi: 10.1109/ROBOT.2000.844054.
+
+[24] R. Stribeck. Die wesentlichen Eigenschaften der Gleit- und Rollenlager. Mitteilungen über Forschungsarbeiten auf dem Gebiete des Ingenieurwesens, insbesondere aus den Laboratorien der technischen Hochschulen. Julius Springer, 1903.
+
+[25] H. J. Terry Suh, Tao Pang, and Russ Tedrake. Bundled gradients through contact via randomized smoothing. arXiv pre-print, 2021.
+
+[26] H. J. Terry Suh, Max Simchowitz, Kaiqing Zhang, and Russ Tedrake. Do differentiable simulators give better policy gradients?, 2022. URL https://arxiv.org/abs/2202.00817.
+
+[27] Richard Sutton, David Mcallester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. Adv. Neural Inf. Process. Syst, 12, 02 2000.
+
+[28] Russ Tedrake. Drake: A planning, control, and analysis toolbox for nonlinear dynamical systems, 2022. URL http://drake.mit.edu.
+
+[29] Arjan van der Schaft and Hans Schumacher. An Introduction to Hybrid Dynamical Systems. Springer Publishing Company, Incorporated, 1st edition, 2000. ISBN 978-1-4471-3916-4.
+
+[30] Keenon Werling, Dalton Omens, Jeongseok Lee, Ioannis Exarchos, and C. Karen Liu. Fast and feature-complete differentiable physics for articulated rigid bodies with contact, 2021.
+
+[31] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 3, 05 1992.
+
+[32] Jie Xu, Viktor Makoviychuk, Yashraj Narang, Fabio Ramos, Wojciech Matusik, Animesh Garg, and Miles Macklin. Accelerated policy learning with parallel differentiable simulation, 2022. URL https://arxiv.org/abs/2204.07137.
\ No newline at end of file
diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/kMB2WAfisY/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/kMB2WAfisY/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..82108fa5219b8cb534e782fdab051257c7f5b745
--- /dev/null
+++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/kMB2WAfisY/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,237 @@
+§ PATHOLOGIES AND CHALLENGES OF USING DIFFERENTIABLE SIMULATORS IN POLICY OPTIMIZATION FOR CONTACT-RICH MANIPULATION
+
+H.J. Terry Suh, Max Simchowitz, Kaiqing Zhang, Tao Pang, Russ Tedrake
+
+Abstract-Policy search methods in Reinforcement Learning (RL) have shown impressive results in contact-rich tasks such as dexterous manipulation. However, the high variance of zero-order Monte-Carlo gradient estimates results in slow convergence and a requirement for a high number of samples. By replacing these zero-order gradient estimates with first-order ones, differentiable simulators promise faster computation time for policy gradient methods when the model is known. Contrary to this belief, we highlight some of the pathologies of using first-order gradients and show that in many physical scenarios involving rich contact, using zero-order gradients results in better performance. Building on these pathologies and lessons, we propose guidelines for designing differentiable simulators, as well as policy optimization algorithms that use these simulators. By doing so, we hope to reap the benefits of first-order gradients while avoiding the potential pitfalls.
+
+§ I. INTRODUCTION
+
+Reinforcement Learning (RL) is fundamentally concerned with the problem of minimizing a stochastic objective,
+
+$$
+\mathop{\min }\limits_{\mathbf{\theta }}F\left( \mathbf{\theta }\right) = \mathop{\min }\limits_{\mathbf{\theta }}{\mathbb{E}}_{\mathbf{w}}f\left( {\mathbf{\theta },\mathbf{w}}\right) .
+$$
+
+Many algorithms in RL heavily rely on zeroth-order Monte-Carlo estimation of the gradient $\nabla F\left\lbrack {{27},{22}}\right\rbrack$ . Yet, in contact-rich robotic manipulation where we have model knowledge and structure of the dynamics, it is possible to differentiate through the physics and obtain exact gradients of $f$ , which can also be used to construct a first-order estimate of $\nabla F$ . The availability of both options raises the question: given access to gradients of $f$ , which estimator should we prefer?
+
+In stochastic optimization, the theoretical benefits of using first-order estimates of $\nabla F$ over zeroth-order ones have mainly been understood through the lens of variance and convergence rates $\left\lbrack {{10},{16}}\right\rbrack$ : the first-order estimator often (not always) results in much less variance compared to the zeroth-order one, which leads to faster convergence to a local minimum of nonconvex smooth objective functions. However, the landscape of RL objectives that involve long-horizon sequential decision making (e.g., policy optimization) is challenging to analyze, and convergence properties in these landscapes are relatively poorly understood. In particular, contact-rich systems can display complex characteristics including nonlinearities, non-smoothness, and discontinuities (Figure 1) [29, 17, 25].
+
+Nevertheless, lessons from convergence rate analysis tell us that there may be benefits to using the exact gradients even for these complex physical systems. Such ideas have been championed through the term "differentiable simulation", where forward simulation of physics is programmed in a manner that is consistent with automatic differentiation $\left\lbrack {8,{12},{28},{30},9}\right\rbrack$ , or computation of analytic derivatives [3]. These methods have shown promising results in decreasing computation time compared to zeroth-order methods [13, 8, 11, 6, 5, 19].
+
+
+Fig. 1. Examples of simple optimization problems on physical systems. The goal is to: A. maximize the $y$ position of the ball after dropping. B. maximize the distance thrown, with a wall that results in inelastic impact. C. maximize the angular momentum transferred to the pivoting bar through collision. Second row: the original objective and the stochastic objective after randomized smoothing.
+
+However, due to the complex characteristics of contact dynamics, we show that the belief that first-order gradients improve performance over zero-order ones is not always true for contact-rich manipulation. We illustrate this phenomenon through a couple of pathologies: first, even under sufficient regularity conditions of continuity, the choice of contact model can cause the first-order gradient estimate to have higher variance than the zeroth-order one. In particular, this may occur in approaches that utilize the penalty method [14], which requires stiff dynamics to realistically simulate contact [9].
+
+In addition, we show that many contact-rich systems display nearly or strictly discontinuous behavior in the underlying landscape. The presence of such discontinuities causes the first-order gradient estimator to be biased, while the zeroth-order one remains unbiased. Furthermore, we show that even when continuous approximations are made, such approximations are often stiff, with large Lipschitz constants. In these settings, the first-order estimator still suffers from what we call empirical bias in finite-sample settings. The compromise of the first-order estimator in the face of more accurate descriptions of contact dynamics hints at a fundamental tension between the realism of the dynamics and the performance of first-order gradients.
+
+From these pathologies, we suggest methods in simulation, as well as algorithms, that may improve the efficacy of first-order gradient estimates obtained using differentiable simulation. We advocate for the use of implicit contact models that are less stiff, and thus have low variance of the first-order gradient. In addition, we show they can be analytically smoothed out to mitigate discontinuities. Finally, we introduce a method to interpolate gradients that escapes these identified pitfalls.
+
+§ II. PRELIMINARIES
+
+§ A. POLICY OPTIMIZATION SETTING
+
+We study a discrete-time, finite-horizon, continuous-state control problem with states $\mathbf{x} \in {\mathbb{R}}^{n}$ , inputs $\mathbf{u} \in {\mathbb{R}}^{m}$ , transition function $\phi : {\mathbb{R}}^{n} \times {\mathbb{R}}^{m} \rightarrow {\mathbb{R}}^{n}$ , and horizon $H \in \mathbb{N}$ . Given a sequence of costs ${c}_{h} : {\mathbb{R}}^{n} \times {\mathbb{R}}^{m} \rightarrow \mathbb{R}$ , a family of policies ${\pi }_{h}\left( {\cdot , \cdot }\right) : {\mathbb{R}}^{n} \times {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{m}$ parameterized by $\mathbf{\theta } \in {\mathbb{R}}^{d}$ , and a sequence of injected noise terms ${\mathbf{w}}_{1 : H} \in {\left( {\mathbb{R}}^{m}\right) }^{H}$ , we define the cost-to-go functions
+
+$$
+{V}_{h}\left( {{\mathbf{x}}_{h},{\mathbf{w}}_{h : H},\mathbf{\theta }}\right) = \mathop{\sum }\limits_{{{h}^{\prime } = h}}^{H}{c}_{{h}^{\prime }}\left( {{\mathbf{x}}_{{h}^{\prime }},{\mathbf{u}}_{{h}^{\prime }}}\right) ,
+$$
+
+$$
+\text{ s.t. }{\mathbf{x}}_{{h}^{\prime } + 1} = \phi \left( {{\mathbf{x}}_{{h}^{\prime }},{\mathbf{u}}_{{h}^{\prime }}}\right) ,{\mathbf{u}}_{{h}^{\prime }} = \pi \left( {{\mathbf{x}}_{{h}^{\prime }},\mathbf{\theta }}\right) + {\mathbf{w}}_{{h}^{\prime }},{h}^{\prime } \geq h\text{ . }
+$$
+
+Our aim is to minimize the policy optimization objective
+
+$$
+F\left( \mathbf{\theta }\right) \mathrel{\text{ := }} {\mathbb{E}}_{{\mathbf{x}}_{1} \sim \rho }{\mathbb{E}}_{{\mathbf{w}}_{h}\overset{\text{ i.i.d. }}{ \sim }p}{V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{w}}_{1 : H},\mathbf{\theta }}\right) , \tag{1}
+$$
+
+where $\rho$ is a distribution over initial states ${\mathbf{x}}_{1}$ , and ${\mathbf{w}}_{1},\ldots ,{\mathbf{w}}_{H}$ are i.i.d. according to $p$ , which we assume to be a zero-mean Gaussian with covariance ${\sigma }^{2}{I}_{m}$ .
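In code, the cost-to-go ${V}_{1}$ is just a noise-injected policy rollout; a minimal sketch, where the callables `phi`, `pi`, and `costs` are hypothetical stand-ins for the dynamics, policy, and running costs:

```python
def cost_to_go(x1, w, theta, phi, pi, costs):
    """V_1(x_1, w_{1:H}, theta): roll out the noise-injected policy through
    the dynamics and accumulate the running costs."""
    x, total = x1, 0.0
    for h in range(len(w)):
        u = pi(x, theta) + w[h]      # u_h = pi(x_h, theta) + w_h
        total += costs[h](x, u)      # c_h(x_h, u_h)
        x = phi(x, u)                # x_{h+1} = phi(x_h, u_h)
    return total

# Example: scalar integrator phi(x, u) = x + u, constant policy pi = theta,
# quadratic state cost, and zero injected noise.
V = cost_to_go(0.0, [0.0, 0.0], 1.0,
               phi=lambda x, u: x + u,
               pi=lambda x, th: th,
               costs=[lambda x, u: x**2] * 2)
```

A Monte-Carlo estimate of the objective (1) then averages this quantity over sampled initial states and noise sequences.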
+
+§ B. ZEROTH-ORDER ESTIMATOR:
+
+The policy gradient can be estimated using only samples of the function values [31].
+
+Definition II.1. Given a single zeroth-order estimate of the policy gradient ${\widehat{\nabla }}^{\left\lbrack 0\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right)$ , we define the zeroth-order batched gradient (ZoBG) ${\bar{\nabla }}^{\left\lbrack 0\right\rbrack }F\left( \mathbf{\theta }\right)$ as the sample mean,
+
+$$
+{\widehat{\nabla }}^{\left\lbrack 0\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) \mathrel{\text{ := }} \frac{1}{{\sigma }^{2}}{V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{w}}_{1 : H}^{i},\mathbf{\theta }}\right) \left\lbrack {\mathop{\sum }\limits_{{h = 1}}^{H}{\mathrm{D}}_{\mathbf{\theta }}\pi {\left( {\mathbf{x}}_{h}^{i},\mathbf{\theta }\right) }^{\top }{\mathbf{w}}_{h}^{i}}\right\rbrack
+$$
+
+$$
+{\bar{\nabla }}^{\left\lbrack 0\right\rbrack }F\left( \mathbf{\theta }\right) \mathrel{\text{ := }} \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\widehat{\nabla }}^{\left\lbrack 0\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) ,
+$$
+
+where ${\mathbf{x}}_{h}^{i}$ is the state at time $h$ of the trajectory induced by the noise ${\mathbf{w}}_{1 : H}^{i}$ , $i$ is the index of the sample trajectory, and ${\mathrm{D}}_{\mathbf{\theta }}\pi$ is the Jacobian matrix $\partial \pi /\partial \mathbf{\theta } \in {\mathbb{R}}^{m \times d}$ .
+
+The hat notation denotes a per-sample Monte-Carlo estimate, and the bar notation a sample mean. The ZoBG is also referred to as the REINFORCE [31], score function, or likelihood-ratio gradient. In practice, a baseline term $b$ is subtracted from ${V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{w}}_{1 : H}^{i},\mathbf{\theta }}\right)$ for variance reduction; one example is the zero-noise rollout $b = {V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{0}}_{1 : H},\mathbf{\theta }}\right)$ .
+
+§ C. FIRST-ORDER ESTIMATOR.
+
+In differentiable simulators, the gradients of the dynamics $\phi$ and costs ${c}_{h}$ are available almost surely (i.e., with probability one). Hence, one may compute exact gradients ${\nabla }_{\mathbf{\theta }}{V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{w}}_{1 : H},\mathbf{\theta }}\right)$ by automatic differentiation and average them to estimate $\nabla F\left( \mathbf{\theta }\right)$ .
+
+Definition II.2. Given a single first-order gradient estimate ${\widehat{\nabla }}^{\left\lbrack 1\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right)$ , we define the first-order batched gradient (FoBG) as the sample mean:
+
+$$
+{\widehat{\nabla }}^{\left\lbrack 1\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) \mathrel{\text{ := }} {\nabla }_{\mathbf{\theta }}{V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{w}}_{1 : H}^{i},\mathbf{\theta }}\right)
+$$
+
+$$
+{\bar{\nabla }}^{\left\lbrack 1\right\rbrack }F\left( \mathbf{\theta }\right) \mathrel{\text{ := }} \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\widehat{\nabla }}^{\left\lbrack 1\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) .
+$$
+
+The FoBG is also referred to as the reparametrization gradient [15], the pathwise derivative [21], or Back Propagation through Time (BPTT).
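To make the two estimators concrete, consider a hypothetical one-step instance of our own with a smooth quadratic cost: $H = 1$ , $\pi \left( {x,\theta }\right) = \theta$ so ${\mathrm{D}}_{\theta }\pi = 1$ , and ${V}_{1} = {\left( \theta + w\right) }^{2}$ , giving the true gradient $\nabla F\left( \theta \right) = {2\theta }$ .

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, N = 1.5, 0.1, 200_000
w = rng.normal(0.0, sigma, N)

V = (theta + w) ** 2                 # per-sample cost-to-go V_1
zobg_samples = V * w / sigma**2      # score-function estimates (Def. II.1)
fobg_samples = 2.0 * (theta + w)     # pathwise derivatives (Def. II.2)

# Both sample means concentrate near the true gradient 2 * theta = 3.0, but
# on this smooth objective the ZoBG's per-sample variance is orders of
# magnitude larger than the FoBG's, so it needs far more samples.
print(zobg_samples.mean(), zobg_samples.var())
print(fobg_samples.mean(), fobg_samples.var())
```

This is the smooth regime in which the conventional wisdom favoring first-order estimates holds; the remainder of this section shows how contact breaks it.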
+
+§ III. PITFALLS OF FIRST-ORDER GRADIENTS
+
+In this section, we show pathologies in contact-rich systems for which the FoBG can perform worse than the ZoBG.
+
+§ A. BIAS UNDER DISCONTINUITIES
+
+Under standard regularity conditions, it is well-known that both estimators are unbiased estimates of the true gradient $\nabla F\left( \mathbf{\theta }\right)$ . However, care must be taken to define these conditions precisely, as they are broken for contact-rich systems. Fortunately, the ZoBG is still unbiased under mild assumptions,
+
+$$
+\mathbb{E}\left\lbrack {{\bar{\nabla }}^{\left\lbrack 0\right\rbrack }F\left( \mathbf{\theta }\right) }\right\rbrack = \nabla F\left( \mathbf{\theta }\right) .
+$$
+
+In contrast, the FoBG requires stronger continuity conditions for unbiasedness. Under Lipschitz continuity, however, it is indeed unbiased.
+
+Lemma III.1. If $\phi \left( {\cdot , \cdot }\right)$ is locally Lipschitz and ${c}_{h}\left( {\cdot , \cdot }\right) \in {C}^{\infty }$ , then ${\bar{\nabla }}^{\left\lbrack 1\right\rbrack }F\left( \mathbf{\theta }\right)$ is defined almost surely, and
+
+$$
+\mathbb{E}\left\lbrack {{\bar{\nabla }}^{\left\lbrack 1\right\rbrack }F\left( \mathbf{\theta }\right) }\right\rbrack = \nabla F\left( \mathbf{\theta }\right) .
+$$
+
+Lemma III.1 tells us that FoBG can fail when applied to discontinuous landscapes. We illustrate a simple case of biasedness through a counterexample.
+
+Example III.2 (Heaviside). $\left\lbrack {2,{25}}\right\rbrack$ Consider the Heaviside function,
+
+$$
+f\left( {\mathbf{\theta },\mathbf{w}}\right) = H\left( {\mathbf{\theta } + \mathbf{w}}\right) ,\;H\left( t\right) = {\mathbb{1}}_{t \geq 0}
+$$
+
+whose stochastic objective becomes the error function
+
+$$
+F\left( \mathbf{\theta }\right) = {\mathbb{E}}_{\mathbf{w}}\left\lbrack {H\left( {\mathbf{\theta } + \mathbf{w}}\right) }\right\rbrack = \operatorname{erf}\left( {-\mathbf{\theta };{\sigma }^{2}}\right) .
+$$
+
+However, since ${\nabla }_{\mathbf{\theta }}H\left( {\mathbf{\theta } + \mathbf{w}}\right) = 0$ for all $\mathbf{\theta } \neq - \mathbf{w}$ , the per-sample gradient is zero almost surely and ${\mathbb{E}}_{\mathbf{w}}\left\lbrack {{\nabla }_{\mathbf{\theta }}H\left( {\mathbf{\theta } + \mathbf{w}}\right) }\right\rbrack = 0$ : the expectation and the derivative do not commute across the discontinuity. Hence, the FoBG is biased, as the gradient of the stochastic objective, a Gaussian density, is non-zero at every $\mathbf{\theta }$ . We further note that the empirical variance of the FoBG estimator in this example is zero. The ZoBG, on the other hand, escapes this problem and provides an unbiased estimate, since it is built from finite differences of function values, which capture the jump across the discontinuity.
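This bias is easy to observe numerically; a small sketch of Example III.2 (our own illustration, evaluated at $\theta = 0$ ): the per-sample derivative of the Heaviside is zero almost surely, so the FoBG is identically zero, while the ZoBG recovers the true gradient, the Gaussian density at $-\theta$ .

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, N = 0.0, 0.5, 100_000
w = rng.normal(0.0, sigma, N)

f = (theta + w >= 0.0).astype(float)   # Heaviside samples H(theta + w)
zobg = np.mean(f * w / sigma**2)       # unbiased score-function estimate
fobg = 0.0                             # grad H(theta + w) = 0 almost surely

# True gradient: derivative of the Gaussian CDF, i.e. the density at -theta.
true_grad = np.exp(-theta**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
print(zobg, fobg, true_grad)
```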
+
+
+Fig. 2. From left: heaviside objective $f\left( {\mathbf{\theta },\mathbf{w}}\right)$ and stochastic objective $F\left( \mathbf{\theta }\right)$ , empirical values of the gradient estimates, and their empirical variance.
+
+§ B. THE "EMPIRICAL BIAS" PHENOMENON
+
+One might argue that strict discontinuity is simply an artifact of modeling choices in simulators; indeed, many simulators approximate discontinuous dynamics as a limit of continuous ones with growing Lipschitz constant $\left\lbrack {9,7}\right\rbrack$ . In this section, we explain how this can lead to a phenomenon we call empirical bias, where the FoBG appears to have low empirical variance but is still highly inaccurate; i.e., it "looks" biased when a finite number of samples is used. Through this phenomenon, we claim that the performance degradation of first-order gradient estimates does not require strict discontinuity, but is also present in continuous, yet stiff, approximations of discontinuities.
+
+Definition III.3 (Empirical bias). Let $\mathbf{z}$ be a vector-valued random variable with $\mathbb{E}\left\lbrack {\parallel \mathbf{z}\parallel }\right\rbrack < \infty$ . We say $\mathbf{z}$ has $\left( {\beta ,\Delta ,S}\right)$ -empirical bias if there is a random event $\mathcal{E}$ such that $\Pr \left\lbrack \mathcal{E}\right\rbrack \geq 1 - \beta$ , and $\parallel \mathbb{E}\left\lbrack {\mathbf{z} \mid \mathcal{E}}\right\rbrack - \mathbb{E}\left\lbrack \mathbf{z}\right\rbrack \parallel \geq \Delta$ , but $\parallel \mathbf{z} - \mathbb{E}\left\lbrack {\mathbf{z} \mid \mathcal{E}}\right\rbrack \parallel \leq S$ almost surely on $\mathcal{E}$ .
+
+A paradigmatic example of empirical bias is a random scalar $\mathbf{z}$ which takes the value 0 with probability $1 - \beta$ , and $\frac{1}{\beta }$ with probability $\beta$ . Setting $\mathcal{E} = \{ \mathbf{z} = 0\}$ , we see $\mathbb{E}\left\lbrack \mathbf{z}\right\rbrack = 1$ , $\mathbb{E}\left\lbrack {\mathbf{z} \mid \mathcal{E}}\right\rbrack = 0$ , and so $\mathbf{z}$ satisfies $\left( {\beta ,1,0}\right)$ -empirical bias. Note that $\operatorname{Var}\left\lbrack \mathbf{z}\right\rbrack = 1/\beta - 1$ ; in fact, small- $\beta$ empirical bias implies large variance more generally.
+
+Lemma III.4. Suppose $\mathbf{z}$ has $\left( {\beta ,\Delta ,S}\right)$ -empirical bias. Then $\operatorname{Var}\left\lbrack \mathbf{z}\right\rbrack \geq \frac{{\Delta }_{0}^{2}}{\beta }$ , where ${\Delta }_{0} \mathrel{\text{ := }} \max \{ 0,\left( {1 - \beta }\right) \Delta - \beta \parallel \mathbb{E}\left\lbrack \mathbf{z}\right\rbrack \parallel \}$ .
+
+Empirical bias naturally arises for discontinuities or stiff continuous approximations.
+
+Example III.5 (Coulomb friction). The Coulomb model of friction is discontinuous in the relative tangential velocity between two bodies. In many simulators $\left\lbrack {9,4}\right\rbrack$ , it is common to consider a continuous approximation instead. We idealize such approximations through a piecewise linear relaxation of the Heaviside that is continuous, parametrized by the width of the middle linear region $\nu$ (which corresponds to slip tolerance).
+
+$$
+{\bar{H}}_{\nu }\left( t\right) = \left\{ {\begin{array}{ll} {2t}/\nu & \text{ if }\left| t\right| \leq \nu /2 \\ {2H}\left( t\right) - 1 & \text{ else } \end{array}.}\right.
+$$
+
+In practice, lower values of $\nu$ lead to more realistic behavior in simulation [28], but this has adverse effects on empirical bias. Considering ${f}_{\nu }\left( {\mathbf{\theta },\mathbf{w}}\right) = {\bar{H}}_{\nu }\left( {\mathbf{\theta } + \mathbf{w}}\right)$ , we have ${F}_{\nu }\left( \mathbf{\theta }\right) \mathrel{\text{:=}} {\mathbb{E}}_{\mathbf{w}}\left\lbrack {{\bar{H}}_{\nu }\left( {\mathbf{\theta } + \mathbf{w}}\right) }\right\rbrack = \operatorname{erf}\left( {\nu /2 - \theta ;{\sigma }^{2}}\right)$ . In particular, setting ${c}_{\sigma } \mathrel{\text{:=}} \frac{1}{\sqrt{2\pi }\sigma }$ , at $\mathbf{\theta } = \nu /2$ we have $\nabla {F}_{\nu }\left( \mathbf{\theta }\right) = {c}_{\sigma }$ , whereas, with probability at least $1 - {c}_{\sigma }\nu$ , $\nabla {f}_{\nu }\left( {\mathbf{\theta },\mathbf{w}}\right) = 0$ . Hence, the FoBG has $\left( {{c}_{\sigma }\nu ,{c}_{\sigma },0}\right)$ -empirical bias, and its variance scales with $1/\nu$ as $\nu \rightarrow 0$ . The limiting $\nu = 0$ case, corresponding to the Coulomb model, recovers the Heaviside of Example III.2, where the high empirical bias and variance become true bias in expectation (with, surprisingly, zero empirical variance). We empirically illustrate this effect in Figure 3. We also note that more complicated models of friction (e.g., those incorporating the Stribeck effect [24]) suffer from similar problems.
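Empirical bias is also easy to exhibit numerically (a sketch with illustrative values of $\sigma$ and $\nu$ , evaluated at $\theta = 0$ ): as $\nu$ shrinks, almost every sampled derivative of ${\bar{H}}_{\nu }$ is exactly zero, while the rare nonzero samples, of size $2/\nu$ , make the true variance blow up.

```python
import numpy as np

def grad_relaxed_heaviside(t, nu):
    # Derivative of the piecewise-linear relaxation: 2/nu on the ramp
    # |t| <= nu/2 and zero elsewhere.
    return np.where(np.abs(t) <= nu / 2, 2.0 / nu, 0.0)

rng = np.random.default_rng(0)
sigma, N = 0.3, 200_000
w = rng.normal(0.0, sigma, N)

for nu in (1.0, 0.1, 0.01):
    g = grad_relaxed_heaviside(w, nu)   # per-sample first-order gradients
    frac_zero = np.mean(g == 0.0)       # mass of the "benign" event E
    print(nu, frac_zero, g.var())
```

For small $\nu$ , a modest batch will very likely contain only zero gradients, so the FoBG reports zero with zero empirical variance, exactly the empirical bias of Definition III.3.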
+
+Example III.6. (Discontinuity in geometry). Another source of discontinuity in simulators comes from the discontinuity of surface normals. We show this in Figure 4, where balls that collide with a rectangular geometry create discontinuities. It is possible to make a continuous relaxation [7] by considering a smoother geometry, depicted by the addition of the dome in Figure 4. While this makes FoBG no longer biased asymptotically, the stiffness of the relaxation still results in high empirical bias.
+
+
+Fig. 3. Top: illustration of the physical system and the relaxation of Coulomb friction. Bottom: the values of the estimators and their empirical variances as functions of the number of samples and the slip tolerance. Values of FoBG are zero in low-sample regimes due to empirical bias. As $\nu \rightarrow 0$ , the empirical variance of FoBG goes to zero, which appears as empty in the log scale. The expected variance, however, blows up, as it scales with $1/\nu$ .
+
+
+Fig. 4. Left: example of a ball hitting the wall. The green trajectories hit a rectangular wall, displaying discontinuities. Right: the pink trajectories collide with the dome on top and show continuous but stiff behavior.
+
+§ C. HIGH VARIANCE FROM STIFFNESS
+
+Even without the phenomenon of empirical bias, we show that certain choices of contact model can cause the FoBG to suffer from high variance. In particular, approximating rigid contact with high-stiffness spring models (i.e., the penalty method) can cause the gradient to have a high norm.
+
+Example III.7. (Pushing with stiff contact). We demonstrate this phenomenon through a simple 1D pushing example in Figure 5, where the ZoBG has lower variance than the FoBG as stiffness increases, until numerical semi-implicit integration becomes unstable under a fixed timestep.
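
A minimal sketch of this effect, assuming a semi-implicit Euler integrator and a penalty-method wall at $x = 0$ (our illustration; all constants are illustrative and not those of Figure 5):

```python
import numpy as np

def rollout_grad(v0, k, m=1.0, c=0.1, h=0.01, steps=200, x0=0.5, xg=0.2):
    """Semi-implicit Euler rollout of a point mass launched toward a
    penalty-method wall at x = 0 (spring constant k). Returns the pathwise
    (first-order) derivative of the terminal cost (x_H - xg)^2 with respect
    to the initial velocity v0, propagated by forward-mode tangents."""
    x, v = x0, v0
    dx, dv = 0.0, 1.0                      # tangents d(x)/d(v0), d(v)/d(v0)
    for _ in range(steps):
        f = -k * min(x, 0.0)               # penalty force, active in penetration
        df = -k * dx if x < 0.0 else 0.0   # its tangent
        v += h * (f - c * v) / m
        dv += h * (df - c * dv) / m
        x += h * v                         # semi-implicit: uses the updated v
        dx += h * dv
    return 2.0 * (x - xg) * dx

rng = np.random.default_rng(0)
for k in (1e2, 1e4, 1e6):
    grads = [rollout_grad(-1.0 + w, k) for w in rng.normal(0.0, 0.1, 256)]
    print(f"k={k:.0e}: FoBG sample variance = {np.var(grads):.3e}")
```

The sample variance of the FoBG grows with $k$ , and for very stiff springs the fixed-timestep integration itself goes unstable, so the gradients explode outright.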
+
+
+Fig. 5. The variance of the gradient of ${V}_{1}$ , with running cost ${c}_{h} = \parallel {\mathbf{x}}_{h} - {\mathbf{x}}^{g}{\parallel }^{2}$ , with respect to the input trajectory as the spring constant $k$ increases. The mass $m$ and damping coefficient $c$ are fixed.
+
+§ IV. TACKLING THE PATHOLOGIES: A PATH FORWARD
+
+In this section, we comment on methods that can alleviate the pathologies that were found in the previous section.
+
+§ A. LESS STIFF FORMULATIONS OF CONTACT DYNAMICS
+
+In order to avoid high variance of the FoBG, we must ensure that the norm of the gradient stays low. Yet, as illustrated by Example III.7, approximating contact using stiff springs, as done in works that model contact with the penalty method, inevitably forces a trade-off between stiffness and physical realism.
+
+Therefore, we advocate less stiff contact models based on implicit time-stepping [23], whose per-time-step computation relies on solving optimization problems such as a Linear Complementarity Problem (LCP), which can be further relaxed into convex Quadratic Programs (QPs) [1]. The derivatives of such systems can be obtained via the implicit function theorem by differentiating through the optimality conditions of these problems. We give one example of such a convex QP below. Correctly using gradients from implicit time-stepping can vastly improve the efficacy of FoBGs by ensuring that their norm stays reasonably bounded.
+
+Example IV.1. (Implicit Time-Stepping for Pushing). We illustrate implicit time-stepping with a 1-dimensional example consisting of a point mass and a wall. The state of the system is $\left( {x,v}\right) \in {\mathbb{R}}^{2}$ , where $x$ is the position and $v$ the velocity of the point mass. The non-penetrable wall occupies $x \leq 0$ .
+
+The equations of motion of the system are
+
+$$
+m\left( {{v}_{ + } - v}\right) = u + \lambda \tag{2a}
+$$
+
+$$
+{x}_{ + } = x + h{v}_{ + }, \tag{2b}
+$$
+
+$$
+0 \leq {x}_{ + } \bot \lambda \geq 0, \tag{2c}
+$$
+
+where $\left( {{x}_{ + },{v}_{ + }}\right)$ represents the system state at the next time step; $h$ is the step size; $m$ is the mass; $u$ is the impulse applied to the point mass by actuation; and $\lambda$ is the impulse due to contact with the wall. Equation (2a) is the momentum balance of the point mass. Constraint (2c) is the complementarity constraint that ensures the wall can only push on the point mass when they are in contact. We can indeed see that the equations of motion (2) are the KKT conditions of the following QP:
+
+$$
+\mathop{\operatorname{minimize}}\limits_{{v}_{ + }}\;\frac{1}{2}m{\left( {v}_{ + } - v\right) }^{2} - u{v}_{ + } \tag{3a}
+$$
+
+$$
+\text{ subject to }\frac{x}{h} + {v}_{ + } \geq 0 \tag{3b}
+$$
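
This one-dimensional QP has a closed-form solution: take the unconstrained minimizer of (3a) and, if it violates (3b), activate the constraint. A short Python sketch (ours; the function and variable names are illustrative):

```python
def implicit_step(x, v, u, m=1.0, h=0.01):
    """One implicit time step of the point-mass-vs-wall system, solving
    QP (3) in closed form: take the unconstrained minimizer of (3a) and,
    if it would penetrate, activate the constraint (3b)."""
    v_free = v + u / m                 # unconstrained minimizer of (3a)
    if x / h + v_free >= 0.0:
        v_plus, lam = v_free, 0.0      # no contact: constraint inactive
    else:
        v_plus = -x / h                # constraint active: x_plus = 0
        lam = m * (v_plus - v) - u     # contact impulse from (2a)
    x_plus = x + h * v_plus            # position update (2b)
    return x_plus, v_plus, lam

print(implicit_step(x=1.0, v=0.0, u=0.5))    # away from the wall: lam = 0
print(implicit_step(x=0.0, v=0.0, u=-0.5))   # pushing into the wall: lam > 0
```

In both branches the solution satisfies the complementarity condition (2c): either $\lambda = 0$ with ${x}_{+} \geq 0$ , or ${x}_{+} = 0$ with $\lambda \geq 0$ .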
+
+§ B. SMOOTH ANALYTIC APPROXIMATIONS OF DYNAMICS
+
+Although we showed that strict discontinuity is not required for the performance of the FoBG to degrade, soft relaxations of discontinuities still behave much better. To this end, we also advocate analytically providing a smooth surrogate of the discontinuous dynamics in simulation, and gradually tightening the relaxation during policy optimization. To overcome the pathologies of using FoBGs, we believe that providing such a feature should be a requirement for differentiable simulators to be useful in policy optimization.
+
+
+Fig. 6. Left: Visualization of the wall and block examples in Example IV.1 and Example IV.2. Note that neither scheme requires a spring constant $k$ , whereas the penalty method does. This alleviates problems associated with stiffness of the gradients. Right: Results of simulating the methods of Example IV.1 and Example IV.2 at $\left( {x,v}\right) = 0$ . The resulting positions ${x}_{ + }$ are plotted as functions of the input impulse $u$ .
+
+Previous works have provided smooth surrogates to the penalty method of contact $\left\lbrack {9,{13},{32}}\right\rbrack$ , which reasonably addresses discontinuities yet still suffers from stiffness. Instead, we show that a smooth approximation can be made to implicit time-stepping methods by using common constraint-relaxation techniques such as the log-barrier function of interior-point methods.
+
+Example IV.2. (Smooth Relaxation for Pushing). The optimization-based dynamics of Example IV.1 can be smoothed by replacing the non-penetration constraint (3b) with an additional log-barrier term in the objective (3a):
+
+$$
+\mathop{\operatorname{minimize}}\limits_{{v}_{ + }}\frac{1}{2}m{\left( {v}_{ + } - v\right) }^{2} - u{v}_{ + } - \frac{1}{\kappa }\log \left( {\frac{x}{h} + {v}_{ + }}\right) , \tag{4}
+$$
+
+which is an unconstrained convex optimization program, whose optimality condition is obtained by setting the derivative of the objective (4) to 0:
+
+$$
+m\left( {{v}_{ + } - v}\right) = u + {\left\lbrack \kappa \left( x/h + {v}_{ + }\right) \right\rbrack }^{-1}. \tag{5}
+$$
+
+The optimality condition (5) can be interpreted as the momentum balance of the point mass, but the wall now acts as a force field, exerting on the object a force whose magnitude is inversely proportional to the distance to the wall. The strength of the force field is controlled by the log-barrier weight $\kappa$ . As $\kappa \rightarrow \infty$ , the solution of (4) converges to that of (3).
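
Since the objective (4) is strictly convex and the barrier forces $x/h + {v}_{+} > 0$ , the optimality condition (5) has a unique root, which can be found by bisection. A small Python sketch (ours; constants illustrative) shows the convergence to the hard-constrained solution as $\kappa$ grows:

```python
def barrier_step(x, v, u, kappa, m=1.0, h=0.01):
    """Solve the smoothed optimality condition (5),
        m (v_plus - v) = u + 1 / (kappa (x/h + v_plus)),
    by bisection over v_plus > -x/h; the barrier objective (4) is strictly
    convex, so the root is unique."""
    g = lambda vp: m * (vp - v) - u - 1.0 / (kappa * (x / h + vp))
    lo = -x / h + 1e-12                     # barrier keeps x_plus > 0
    hi = max(v + abs(u) / m, lo) + 1.0
    while g(hi) < 0.0:                      # expand until the root is bracketed
        hi += 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    v_plus = 0.5 * (lo + hi)
    return x + h * v_plus, v_plus

# Pushing into the wall from contact: as kappa grows, the solution
# approaches the hard-constrained QP answer v_plus = -x/h = 0.
for kappa in (1e0, 1e2, 1e4, 1e6):
    x_plus, v_plus = barrier_step(x=0.0, v=0.0, u=-0.5, kappa=kappa)
    print(f"kappa={kappa:.0e}: v_plus={v_plus:.6f}")
```

Note that ${x}_{+} = h{v}_{+}$ stays strictly positive for every finite $\kappa$ : the force field never lets the point mass reach the wall.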
+
+§ C. GRADIENT INTERPOLATION
+
+Finally, we mention some recent advances on the algorithm side. If we can compute both the FoBG and the ZoBG using uncorrelated samples, we can consider an interpolated gradient,
+
+$$
+{\widehat{\nabla }}^{\left\lbrack \alpha \right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) \mathrel{\text{ := }} \alpha {\widehat{\nabla }}^{\left\lbrack 0\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) + \left( {1 - \alpha }\right) {\widehat{\nabla }}^{\left\lbrack 1\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) \tag{6}
+$$
+
+where $\alpha \in \left\lbrack {0,1}\right\rbrack$ . Previous works on gradient interpolation $\left\lbrack {{20},{18}}\right\rbrack$ show that we can optimally interpolate the two gradients based on their empirical variances. However, as Example III.2 shows, the empirical variance can be an unreliable estimate when the FoBG is biased by discontinuities.
+
+To mitigate this problem, we can test the correctness of the FoBG against the unbiased ZoBG by constructing a confidence interval based on samples of the ZoBG, and choosing an optimal value of $\alpha$ subject to a chance constraint on the allowable value of the interpolated gradient [26].
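
As a sketch of this idea (a simplified stand-in for the chance-constrained rule of [26], not its actual algorithm), one can blend the two estimators by inverse-variance weighting and fall back to the unbiased ZoBG whenever the FoBG mean falls outside a confidence interval built from ZoBG samples:

```python
import numpy as np

def interpolated_gradient(fobg_samples, zobg_samples, z=3.0):
    """Interpolated estimator (6). alpha is chosen by inverse-variance
    weighting, except that we fall back to the unbiased ZoBG (alpha = 1)
    whenever the FoBG mean falls outside a z-sigma confidence interval
    built from the ZoBG samples -- a simplified stand-in for the
    chance-constrained rule of [26]."""
    f = np.asarray(fobg_samples, dtype=float)
    zo = np.asarray(zobg_samples, dtype=float)
    zo_mean = zo.mean()
    zo_sem = zo.std(ddof=1) / np.sqrt(len(zo))
    if abs(f.mean() - zo_mean) > z * zo_sem:   # FoBG inconsistent: likely biased
        alpha = 1.0
    else:                                      # inverse-variance weighting
        vf = f.var(ddof=1) / len(f)
        vz = zo.var(ddof=1) / len(zo)
        alpha = vf / (vf + vz + 1e-12)
    return alpha * zo_mean + (1.0 - alpha) * f.mean(), alpha

# A low-variance FoBG consistent with the ZoBG typically keeps alpha near 0;
# an FoBG stuck at exactly zero (empirical bias) forces alpha = 1.
rng = np.random.default_rng(0)
print(interpolated_gradient(rng.normal(1.0, 0.1, 100), rng.normal(1.0, 1.0, 100)))
print(interpolated_gradient(np.zeros(100), rng.normal(1.0, 1.0, 100)))
```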
\ No newline at end of file
diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/srVrKQl8X7R/Initial_manuscript_md/Initial_manuscript.md b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/srVrKQl8X7R/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..8e444463a621da3b60490e1343568e4d4be6827f
--- /dev/null
+++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/srVrKQl8X7R/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,95 @@
+# Learning Slip with a Patterned Capacitive Tactile Sensor
+
+Yuri Gloumakov, Member, IEEE, Tae Myung Huh, Member, IEEE, Hannah Stuart, Member, IEEE
+
+Abstract- The task of dynamically manipulating objects within a robotic hand presents ongoing challenges. In particular, friction and slip often dictate task success yet remain difficult to measure directly, quickly, and accurately; this includes both the detection of slip events and slip speed. Complex solutions exist that involve training a control policy with neural networks, using image-based sensors or external cameras, or that apply when contact geometry can be inferred. Using only a capacitive sensor with a nib-patterned structure, we attempt to demonstrate the sensor's ability to detect slip speed during uninterrupted contact where geometry cannot be inferred, while benefitting from faster sensing, cheaper construction, and a smaller profile. We hope that by collecting vibration amplitude and frequency data and applying supervised learning techniques to directly measure slip speed, we can guide an implementation of manipulation controls without a priori assumptions about object properties, such as friction or geometry.
+
+Index Terms-Tactile Sensing, In-Hand Manipulation.
+
+## I. INTRODUCTION
+
+Robotic within-hand manipulation [1] allows robot systems to manipulate objects in tight spaces and avoid gross arm movements, a particularly useful ability in cluttered or constrained environments. However, due to uncertainties in object properties, like friction, successful reorientation can prove to be a challenging task. Some approaches have used inverse kinematics with a highly constrained rigid hand, taking advantage of overcoming friction during sliding to reorient an object [2], while others have taken advantage of compliant or under-actuated systems [3]. However, controlling for object slip directly, without such models, can enable much faster reorientations with unknown objects, an important feature in situations that necessitate a fast response time, such as assembly lines or active disaster zones.
+
+Thus far, aggressive dynamic manipulation has been accomplished using learned control policies, whether by exploring real-world object contacts [4] or in simulation [5]. However, using a nibbed capacitive tactile sensor developed by Huh et al. [6] (Fig. 1), we hope to demonstrate that dynamic manipulations can be performed using simple control policies by training only for object motion recognition, thus making the sensor more generalizable to different scenarios while reducing the need for complex computing.
+
+In this letter we explore the sensor's ability to detect the speed of a slipping object as it slides across the sensor. While incipient slip has been demonstrated in various systems [7], [8], slip detection and regrasping can be leveraged to quickly reposition an object within the hand with minimal arm or finger movement [9], [10]. Meanwhile, steady-state slipping speed has only been demonstrated when objects are either much smaller than the sensor or not making contact with its entire surface [11], [12], so that the geometry or forces of an edge contact can be tracked over time. However, objects in a factory or sorting setting often lie fully flush and flat against the sensor, and controlling the slip is necessary for dynamic manipulation. We hypothesize that the deflection of the sensor's nib interface undergoes a stick-slip interaction yielding characteristic frequencies and deflection amplitudes unique to each combination of material and slip speed.
+
+
+
+Figure 1. On the left, the sensor can be seen mounted on the tip of a robotic finger. The tactile sensor is made up of a grid of nibs according to the dimensions in (a), where the deflection of each nib is tracked in 4 directions. These deflections are used to track pressure (b), shear (c), and vibrations (d) that can be used to detect slipping. The conductive fabric embedded in the nibs changes the capacitive signal between itself and the electrodes as it deflects. Figure images were borrowed from [6].
+
+## II. METHODS
+
+To discover how the sensor detects slipping speed, we created a testbed that allowed us to test different slipping speeds and materials. The testbed was designed to maintain a constant distance between the sensor and a sliding object (Fig. 2); keeping the pressure constant was another consideration. Three rectangular objects made of different materials were tested: cherry, basswood, and acrylic, each with dimensions of ${200} \times {40} \times 3\;\mathrm{mm}$ . The objects were pulled ${134}\mathrm{\;{mm}}$ by a string attached to a UR-10 robotic arm. The objects were then pushed back to the starting point and pulled again while sensor data was recorded at ${600}\mathrm{\;{Hz}}$ . This push-pull cycle lasted for 2 minutes for each speed setting, and speeds were varied from 10 to ${100}\mathrm{\;{mm}}/\mathrm{s}$ in ${5}\mathrm{\;{mm}}/\mathrm{s}$ increments. Since only the steady-state speed regime was of interest, the data from the acceleration and deceleration phases were spliced out. The termination of acceleration and initiation of deceleration were estimated to occur within the first $1/8$ and the last $1/6$ of the slipping period, respectively, with a conservative margin.
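
The splicing step can be sketched as follows (a minimal Python illustration of ours; the 8040-frame figure assumes a 134 mm pull at 10 mm/s sampled at 600 Hz, consistent with the setup above but not taken from the actual recordings):

```python
import numpy as np

def steady_state_slice(pull_signal):
    """Keep only the steady-state portion of one pull cycle: discard the
    first 1/8 of the samples (acceleration) and the last 1/6
    (deceleration), as described above."""
    n = len(pull_signal)
    return pull_signal[n // 8 : n - n // 6]

# A 134 mm pull at 10 mm/s sampled at 600 Hz lasts 13.4 s, i.e. 8040 frames.
pull = np.zeros(8040)
print(len(pull), "->", len(steady_state_slice(pull)))   # 8040 -> 5695
```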
+
+---
+
+Y. Gloumakov, T. Huh, and H. Stuart are with the Mechanical Engineering Department, University of California, Berkeley, CA 06511 USA, (email: \{yurigloum, thuh, hstuart\} @berkeley.edu).
+
+---
+
+
+
+Figure 2. The left figure depicts the testbed that hosts the sensor and allows the object to slide through, rolling over a set of smooth bearings. On the right, the robot arm can be seen pulling the object by a string. The acrylic piece is placed on the end effector to push the object back into place.
+
+A feature of the nibbed sensor is its Programmable System on Chip (PSoC) infrastructure, which enables us to couple any desired set of electrodes, yielding a faster signal at the cost of resolution. Because we constrained the slip to a single linear direction, the nib deflection only needed to be tracked along a single axis (Fig. 3). Using a fast Fourier transform (FFT), the signal was converted into the frequency spectrum. Linear regressions were used to create models from the amplitude signal and the frequency spectrum separately, to discover a fit that could identify the speed and material properties from a new signal. To obtain the frequency spectrum, a 300-frame sliding window was used, with an overlap of 1 frame to maximize the amount of extracted data.
+
+Because the slipping was at steady state, the frequency responses were regarded as independent samples. Here, a frequency sample is a vector of length $n$ , which corresponds to 300 (half the sampling rate) divided by the bin size, varied from 1 to 300, where the vector values are the respective frequency amplitudes. Both the frequency response and the raw signal amplitude were averaged over each pull cycle; this meant that during the 2-minute data collection, the slower-speed trials yielded fewer cycles and therefore less data. The data was used to build a regression and to explore classification and clustering methods.
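
The windowing and regression pipeline can be sketched as below (our illustration, not the original analysis code; we interpret the sliding window as advancing 1 frame per step, and group adjacent FFT bins to form frequency bands):

```python
import numpy as np

FS = 600    # sampling rate, Hz
WIN = 300   # sliding-window length in frames

def frequency_samples(signal, bin_size=1):
    """Slide a WIN-frame window over the deflection signal (advancing one
    frame per step), take the magnitude FFT of each window, and average
    groups of `bin_size` adjacent positive-frequency bins into bands.
    Returns one feature vector per window position."""
    windows = np.lib.stride_tricks.sliding_window_view(signal, WIN)
    spectra = np.abs(np.fft.rfft(windows, axis=1))[:, 1 : WIN // 2 + 1]
    n_bands = (WIN // 2) // bin_size
    return spectra[:, : n_bands * bin_size].reshape(
        len(windows), n_bands, bin_size).mean(axis=2)

def fit_speed_regression(features, speeds):
    """Ordinary least squares speed ~ features, returning weights and R^2."""
    X = np.column_stack([features, np.ones(len(features))])
    w, *_ = np.linalg.lstsq(X, speeds, rcond=None)
    residuals = speeds - X @ w
    r2 = 1.0 - np.sum(residuals**2) / np.sum((speeds - speeds.mean())**2)
    return w, r2

sig = 5.0 * np.sin(2 * np.pi * 40 * np.arange(600) / FS)
print(frequency_samples(sig).shape)   # (301, 150): 301 windows, 150 bands
```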
+
+## III. RESULTS
+
+An example of amplitude data during one of the trials is shown in Figure 3. At the lowest speed, over the course of 2 minutes, only 7 pull cycles were collected, while at the fastest speed, up to 44 cycles were collected over the same period. The mean amplitude of each cycle is plotted in Figure 4. The linear fit ${\mathrm{R}}^{2}$ values were 0.475, 0.280, and 0.399 for the cherry, basswood, and acrylic objects, respectively. Although this corresponds to weak correlations, at speeds below ${50}\mathrm{\;{mm}}/\mathrm{s}$ the correlation appears stronger.
+
+
+
+Figure 4. The raw signal amplitude is plotted against the pull speed for each of the three materials. The average amplitude during each pull cycle is plotted as a single point. A linear regression fit is overlaid.
+
+In the frequency domain, linear fits have still weaker correlations when looking at individual frequency bins. In Figure 5, we explore the correlation between speed and frequency bands, each consisting of the signal across some number of frequency bins considered simultaneously; only the highest and lowest correlations are displayed in the figure. Only weak correlation persisted.
+
+## IV. DISCUSSION
+
+In this work we observed that neither the signal amplitude nor the frequency responses yielded a strong correlation with slip speed. Nevertheless, a negative correlation persisted, suggesting that there is an exploitable relationship that can be used to identify the speed at which an object is slipping. However, at speeds below ${50}\mathrm{\;{mm}}/\mathrm{s}$ , a stronger relationship can be seen; this would likely be the region to explore further in future data collections. This was not an unexpected result, as the difference between speeds was likely to plateau above a critical speed; nibs experience shorter stick times with increasing substrate speed, likely leading to saturation of the amplitude signal [13]. Additionally, there appear to be differences in the amplitude response between materials that we believe can be used to train a classifier.
+
+The raw amplitude results exhibit a sinusoidal feature across speeds. We suspect that this corresponds to a resonant frequency of the testbed. Alternatively, it could be due to nonlinearity of the robotic arm's motion along a straight line.
+
+Although the raw amplitude signals display a correlation with speed, they are highly susceptible to changes in grasp force, a factor we deliberately controlled by holding the distance constant. In an active controller, a sufficient grasp controller would need to be implemented. However, frequency responses are less susceptible to grasp force, and the observation that certain frequency bands show a correlation between the signal and sliding speed suggests that they would be a more reliable metric. Some short frequency bands appear to have very little correlation with speed, while others correlate. Of all the tested frequency bins, 99.73% and 92.58% exhibit a positive linear relationship between frequency-bin amplitude and speed for the basswood and acrylic materials, respectively, while for the cherry material 100% of the tested frequency bins have a negative relationship. This suggests that material can likewise be determined by analyzing the frequency response.
+
+
+
+Figure 3. An example 2-minute push-pull trial is shown. Initiation and termination of the pull correspond to the first green and the second red vertical line pairs. Accelerations and decelerations are spliced out; therefore only the region between the second green and the first red vertical lines is considered (the highlighted region is shown for the first two pull cycles). The filtered signal is displayed for reference only. A brief pause in motion can be seen immediately after the second vertical red line, followed by a brief high-amplitude signal generated by the object being pushed back to its starting point (the highest-amplitude signal), and finally a prolonged pause corresponding to re-tensioning of the string.
+
+
+
+Figure 5. The maximum and minimum ${\mathrm{R}}^{2}$ values of the linear fit for each frequency band are displayed; these correspond to the highest and lowest correlations between specific frequency bands and slip speed. The values converge when the whole frequency spectrum is considered simultaneously, since there is then only one frequency band.
+
+Follow-up work will include implementing classifiers capable of precisely distinguishing between materials and slipping speeds, most likely using the frequency signals. Ultimately, we hope to build a model capable of interpolating the data and identifying the speed with higher precision.
+
+## REFERENCES
+
+[1] A. Bicchi, "Hands for dexterous manipulation and robust grasping: A difficult road toward simplicity," IEEE Trans. Robot. Autom., vol. 16, no. 6, pp. 652-662, 2000, doi: 10.1109/70.897777.
+
+[2] A. A. Cole, P. Hsu, and S. S. Sastry, "Dynamic control of sliding by robot hands for regrasping," Trans. Robot., vol. 8, no. 1, 1992.
+
+[3] A. Sintov, A. S. Morgan, A. Kimmel, A. M. Dollar, K. E. Bekris, and A. Boularias, "Learning a State Transition Model of an Underactuated Adaptive Hand," IEEE Robot. Autom. Lett., vol. 4, no. 2, pp. 1287- 1294, 2019, doi: 10.1109/LRA.2019.2894875.
+
+[4] C. Wang, S. Wang, B. Romero, F. Veiga, and E. Adelson, "SwingBot: Learning physical features from in-hand tactile exploration for dynamic swing-up manipulation," IEEE Int. Conf. Intell. Robot. Syst., no. 2, pp. 5633-5640, 2020, doi: 10.1109/IROS45743.2020.9341006.
+
+[5] T. Bi and C. Sferrazza, "Zero-Shot Sim-to-Real Transfer of Tactile Control Policies for Aggressive Swing-Up Manipulation," IEEE Robot. Autom. Lett., vol. 6, no. 3, pp. 5761-5768, 2021, doi: 10.1109/LRA.2021.3084880.
+
+[6] T. M. Huh, H. Choi, S. Willcox, S. Moon, and M. R. Cutkosky, "Dynamically Reconfigurable Tactile Sensor for Robotic Manipulation," IEEE Robot. Autom. Lett., vol. 5, no. 2, pp. 2562-2569, 2020.
+
+[7] M. R. Tremblay and M. R. Cutkosky, "Estimating Friction Using Incipient Slip Sensing During a Manipulation Task," pp. 429-434, 1993.
+
+[8] W. Yuan, R. Li, M. A. Srinivasan, and E. H. Adelson, "Measurement of shear and slip with a GelSight tactile sensor," Proc. - IEEE Int. Conf. Robot. Autom., vol. 2015-June, no. June, pp. 304-311, 2015, doi: 10.1109/ICRA.2015.7139016.
+
+[9] F. Veiga, H. Van Hoof, J. Peters, and T. Hermans, "Stabilizing novel objects by learning to predict tactile slip," IEEE Int. Conf. Intell. Robot. Syst., pp. 5065-5072, 2015, doi: 10.1109/IROS.2015.7354090.
+
+[10] J. W. James and N. F. Lepora, "Slip detection for grasp stabilization with a multifingered tactile robot hand," IEEE Trans. Robot., vol. 37, no. 2, pp. 506-519, 2021, doi: 10.1109/TRO.2020.3031245.
+
+[11] D. D. Damian, T. H. Newton, R. Pfeifer, and A. M. Okamura, "Artificial tactile sensing of position and slip speed by exploiting geometrical features," IEEE/ASME Trans. Mechatronics, vol. 20, no. 1, pp. 263-274, 2015, doi: 10.1109/TMECH.2014.2321680.
+
+[12] H. Chen et al., "Hybrid porous micro structured finger skin inspired self-powered electronic skin system for pressure sensing and sliding detection," Nano Energy, vol. 51, no. July, pp. 496-503, 2018, doi: 10.1016/j.nanoen.2018.07.001.
+
+[13] C. Gao, D. Kuhlmann-Wilsdorf, and D. D. Makel, "Fundamentals of stick-slip," Wear, vol. 164, pp. 1139-1149, 1993.
\ No newline at end of file
diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/srVrKQl8X7R/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/srVrKQl8X7R/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..653e8adae872f09af152912b5b5de0e5591a3d4e
--- /dev/null
+++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/srVrKQl8X7R/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,61 @@
+§ LEARNING SLIP WITH A PATTERNED CAPACITIVE TACTILE SENSOR
+
+Yuri Gloumakov, Member, IEEE, Tae Myung Huh, Member, IEEE, Hannah Stuart, Member, IEEE
+
+Abstract- The task of dynamically manipulating objects within a robotic hand presents ongoing challenges. In particular, friction and slip often dictate task success yet remain difficult to measure directly, quickly, and accurately; this includes both the detection of slip events and slip speed. Complex solutions exist that involve training a control policy using neural networks, with image-based sensors or external cameras, or when contact geometry can be inferred. Using only a capacitive sensor with a `nib`-patterned structure, we attempt to demonstrate the sensor's ability to detect slip speed during uninterrupted contact where geometry cannot be inferred, while benefitting from faster sensing, cheaper construction, and smaller profile. We hope that by collecting vibration amplitude and frequency and applying supervised learning techniques to directly measure slip speed we can guide an implementation of manipulation controls without a priori assumptions about object properties, such as friction or geometry.
+
+Index Terms-Tactile Sensing, In-Hand Manipulation.
+
+§ I. INTRODUCTION
+
+Robotic within-hand manipulation [1] affords robot systems to manipulate objects in tight spaces and avoid gross arm movements, a particularly useful ability in cluttered or constrained environments. However, due to uncertainties in object properties, like friction, successful reorientations can prove to be a challenging task. Some approaches have used inverse kinematics with a highly constrained rigid hand and taking advantage of overcoming friction during sliding to reorient an object [2], while others have taken advantage of compliant or under-actuated systems [3]. However, controlling for object slip directly, without such models, can enable a much faster reorientations with unknown objects, an important feature in situations that necessitate faster response time such as in assembly lines or active disaster zones.
+
+Thus far, aggressive dynamic manipulation has been accomplished using learned control policies, whether exploring real-world object contacts [4] or in simulation [5]. However, using a nibbed capacitive tactile sensor developed by Huh et al. [6] (Fig. 1) we hope to demonstrate that dynamic manipulations can be performed using simple control policies by only training for object motion recognition, thus making the sensor more generalizable to different scenarios while reducing the need for complex computing.
+
+In this letter we explore the sensor's ability to detect speed of a slipping object as it slides across the sensor. While incipient slip has been demonstrated in various systems [7], [8], slip detection and regrasping can be leveraged to quickly reposition an object within the hand with minimal arm or finger movement [9], [10]. Meanwhile, steady-state slipping speed has only been demonstrated when objects are either much smaller than the sensor or not making contact with its entire surface [11], [12], so that the geometry or forces of an edge contact can be tracked over time. However, objects in a factory setting or during sorting are often fully flush and flat with the sensor and controlling the slip is necessary for dynamic manipulation. We hypothesize that the deflection of the sensor's nib interface would undergo a stick-slip interaction yielding characteristic frequencies and deflection amplitudes unique to each combination of material and slip speed.
+
+ < g r a p h i c s >
+
+Figure 1. On the left, the sensor can be seen mounted on the tip of a robotic finger. The tactile sensor is made up of a grid of nibs according to dimensions in (a), where the deflection of each nib is tracked in 4 directions. These deflections are used to track pressure (b), sheer (c), and vibrations (d) that can be used detect slipping. The conductive fabric that is embedded in the nibs and deflected changes the capacitive signal between itself and the electrodes. Figure images were borrowed from [6].
+
+§ II. METHODS
+
+To discover how the sensor detects slipping speed, we created a testbed that allowed us to test different slipping speeds and materials. The testbed was designed to maintain a constant distance between the sensor and a sliding object (Fig. 2); keeping the pressure constant was another consideration. Three rectangular objects made of different materials were tested: cherry, basswood, and acrylic with dimensions of ${200} \times {40} \times 3$ $\mathrm{{mm}}$ . The objects were pulled ${134}\mathrm{\;{mm}}$ by a string attached to a UR-10 robotic arm. The objects were then pushed back to the starting point and pulled again while sensor data was recorded at ${600}\mathrm{\;{Hz}}$ . This push-pull cycle lasted for 2 minutes for each speed setting, and speeds were varied from ${10} - {100}\mathrm{\;{mm}}/\mathrm{s}$ in 5 $\mathrm{{mm}}/\mathrm{s}$ increments. Since only the steady-state speed regime was of interest, the data from the acceleration and deceleration were spliced out. The termination of acceleration and initiation of deceleration were estimated to occur within the first $1/{8}^{\text{ th }}$ and the last $1/{6}^{\text{ th }}$ of the slipping period, respectively, with a conservative margin.
+
+Y. Gloumakov, T. Huh, and H. Stuart are with the Mechanical Engineering Department, University of California, Berkeley, CA 06511 USA, (email: {yurigloum, thuh, hstuart} @berkeley.edu).
+
+ < g r a p h i c s >
+
+Figure 2. The left figure depicts the testbed that hosts the sensor and allows the object to slide through, rolling over a set of smooth bearings. On the right the robot arm can be seen to pull on the object by a string. The acrylic piece is placed on the end effect to push the object back into place.
+
+A feature of the nibbed sensor is its Programmable System on Chip (PSoC) infrastructure that enables us to couple any desired set of electrodes that result in a faster signal at the cost of resolution. Because we constrained the slip to a single linear direction, the nib deflection only needed to be tracked along a single axis (Fig. 3). Using a fast Fourier transform (FFT) the signal was converted into the frequency spectrum. Linear regressions are used to create a model using both the amplitude signal and the frequency spectrum separately to discover a fit that could identify the speed and material properties from a new signal. To obtain the frequency spectrum, a 300-frame sliding window was used, with an overlap of 1 frame to maximize the amount of extracted data.
+
+Due to steady state slipping, the frequency responses were regarded as independent samples. Here, a frequency sample is a vector of length $n$ , which corresponds to 300 (half the sampling rate) divided by the bin size, varied from 1 to 300, and where vector values correspond to their respective frequency amplitudes. Both the frequency response, as well as the raw signal amplitude, were averaged during each pull cycle; this meant that during the 2-minute data collection, the slower speed trials yielded fewer cycles and therefore less data. The data was used in building a regression and exploring classification and clustering methods.
+
+§ III. RESULTS
+
+An example of amplitude data during one of the trials is shown in figure 3. At the lowest speed, over the course of 2 minutes, only 7 pull cycles were collected, while at the fastest speed, up to 44 cycles were collected over the course of the same period. The mean amplitude of each cycle is plotted in figure 4. The linear fit ${\mathrm{R}}^{2}$ values were 0.475,0.280, and 0.399 cherry, basswood, and acrylic objects, respectively. Although this corresponds to weak correlations, at speeds below ${50}\mathrm{\;{mm}}/\mathrm{s}$ the correlation appears stronger.
+
+ < g r a p h i c s >
+
+Figure 4. The raw signal amplitude is plotted against the pull speed for each of the three materials. The average amplitude during each pull cycle is plotted as a single point. A linear regression fit is overlayed.
+
+In the frequency domain, linear fits have weaker correlations still when looking at individual frequency bins. In figure 5, we explore the correlation between speed and frequency bands, which consisted of the signal across any number of frequency bins simultaneously; in the figure only the highest and lowest correlations are displayed. Only weak correlation persisted.
+
+§ IV. DISCUSSION
+
+In this work we observed that neither the signal amplitude nor the frequency response yielded a strong correlation with slip speed. Nevertheless, a negative correlation persisted, suggesting that there is an exploitable relationship that can be used to identify the speed at which an object is slipping. At speeds below 50 $\mathrm{{mm}}/\mathrm{s}$ , however, a stronger relationship can be seen, and this is likely the region that should be explored further in future data collections. This was not an unexpected result, as the difference between speeds was likely to plateau above a critical speed; nibs experience shorter stick times with increasing substrate speed, likely leading to saturation of the amplitude signal [13]. Additionally, there appear to be differences in the amplitude response between materials that we believe can be used to train a classifier.
+
+The raw amplitude results exhibit a sinusoidal feature across speeds. We suspect that this corresponds to a resonant frequency of the testbed. Alternatively, it could be due to the nonlinearity of the robotic arm's motion as it moves in a straight line.
+
+Although the raw amplitude signal displays a correlation with speed, it is highly susceptible to changes in grasp force, a factor that we deliberately controlled for by holding the distance constant. An active controller would therefore need to implement a sufficient grasp-force regulator. Frequency responses, however, are less susceptible to grasp force, and the observation that certain frequency bands correlate with sliding speed suggests that they would be a more reliable metric. Some narrow frequency bands show very little correlation with speed, while others correlate clearly. Of all the tested frequency bins, 99.73% and 92.58% exhibit a positive linear relationship between bin amplitude and speed for the basswood and acrylic materials, respectively, while for the cherry material 100% of the tested bins have a negative relationship. This suggests that material can likewise be determined by analyzing the frequency response.
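The per-bin sign statistics above can be reproduced by fitting a line to each bin's amplitude across speeds and counting positive slopes; a hedged sketch (the function name and data layout are illustrative):

```python
import numpy as np

def positive_slope_fraction(speeds, spectra):
    """For each frequency bin (one column of `spectra`), fit amplitude vs.
    pull speed with a line and return the fraction of bins whose slope is
    positive, as in the per-material percentages quoted above."""
    slopes = np.array([np.polyfit(speeds, spectra[:, k], 1)[0]
                       for k in range(spectra.shape[1])])
    return float(np.mean(slopes > 0))
```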
+
+
+Figure 3. An example of the push-pull cycles during a 2-minute trial is shown. Initiation and termination of the pull correspond to the first green and second red vertical line pairs. Accelerations and decelerations are spliced out, so only the region between the second green and the first red vertical line pairs is considered (the highlighted region is shown for the first two pull cycles). The filtered signal is displayed for reference only. A brief pause in motion can be seen immediately after the second vertical red line, followed by a brief high-amplitude signal generated by the object being pushed back to its starting point (the highest-amplitude signal), and finally a prolonged pause corresponding to re-tensioning of the string.
+
+
+Figure 5. The maximum and minimum ${\mathrm{R}}^{2}$ values of the linear fit for each frequency band are displayed; these correspond to the highest and lowest correlations between specific frequency bands and slip speed. The values converge when the whole frequency spectrum is considered simultaneously, since there is then only one frequency band.
+
+Follow-up work will include implementing classifiers capable of precisely distinguishing between materials and slipping speeds, most likely using the frequency signals. Ultimately, we hope to build a model capable of interpolating the data and identifying the speed with higher precision.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE 2022/IEEE 2022 Workshop/IEEE 2022 Workshop altVIS/XnsV9ZhsOVc/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE 2022/IEEE 2022 Workshop/IEEE 2022 Workshop altVIS/XnsV9ZhsOVc/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..ef1a1f1eeca877a06c32211445b452814b84c0df
--- /dev/null
+++ b/papers/IEEE/IEEE 2022/IEEE 2022 Workshop/IEEE 2022 Workshop altVIS/XnsV9ZhsOVc/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,26 @@
+## Other models for data visualisations
+
+Paul Heinicker
+
+Other models of visualising aim at a (re)formulation of contemporary expectations and narratives concerning data and their visualisations as a very specific model of thinking data visualisation. It is precisely how and with what intention we work on and discuss visualisations that defines the conceptual space we open to this cultural technique. The concept of "other models" first points to the consequences and limitations of these ways of thinking. My positioning of the "other" consists first of the description of what it wants to distinguish itself from. I understand the "other visualising" as a chance to make the normative mode of data visualisation visible and discussable. In the discourse of visualisation, there is not yet an established language for critiquing the expectations of data images. The "other visualising" therefore establishes a negative way of reading the cultural and image phenomenon. As a first concretisation of these models, I formulate in the following a differently directed definition: data visualisation as intended violence.
+
+## Data $=$ Intention
+
+Ideas and hopes around data visualisations are essentially oriented around two fundamental ideas of data visualisation: data and visualisation. With regard to data, I tend to describe contemporary data narratives using the figure of data exceptionalism as reproducers of a normative model of the imagination, practice, and reflection of data.
+
+The concept of data exceptionalism makes visible a data-positivist perspective, which is essentially defined by the rhetoric of the exception - the data phenomenon as a cultural turning point, a reductionist notion of data - solely numerical and technical, and a data forgetfulness in the sense of forgetting original - non-technical or mathematical - approaches. A potential counter-position aims at broadening this narrowed notion of data, and this broadening has also been done by returning to existing concepts of data. Thus, in my perspective, it is primarily intentionality that characterises data. Data are not natural phenomena, but cultural artefacts of ordering structures. Data are not simply there; rather, they are intentional. They are created from a particular perspective, in an artificial process, and for an application or reception. This data intention can be concretised in the reflection of the models that produce these data. Thus, at least two model applications are found in the intentional use of data. On the one hand, data - defined by me as abstractions - are not to be understood as images of reality, but as conscious projections of one or more models about this reality. On the other hand, I also understand the various modes of data practices as models applied with a purpose. Data exceptionalism is then understood as dealing with data in a particular model, namely in a positivist way. The ideas and intentions about what can be considered or produced as data, and how to work with data, are primarily shaped by models.
+
+Probably the most important insight that comes from considering data exceptionalism is the aspect of modelling. The added value of data does not lie in the longed-for automated analysis of patterns in them, but more tellingly in the reflection of the models they produce. Data are both mirrors and producers of social reality. From this perspective, data are not the cause of social asymmetries, but rather an effect of a particular conception of what to do with the data. Data exceptionalism then only describes a certain model of proceeding in a data-positivist way. The questions about this model, i.e. why and for what purpose data are used, then promise possibly even more epistemic value than the analysis of the data itself. What is needed, according to this line of reasoning, is not another algorithmic, computational, or digital turn, but a return to the ideas, notions, and concepts - in short, the modelling - of data. Data, by definition, are understood as abstractions: not images of reality, but always projections of a model about that reality. The deficiency of data is not that they are reduced in capacity, but that the confidence of completeness is ascribed to them by society.
+
+## Visualisation $=$ Violence
+
+In relation to the object of visualisation, I distinguish the practice of visualisation in two central forms. In a dichotomous arrangement, I differentiate affirmative and, opposite to that, critical approaches. "Affirmative" I interpret as an attitude toward the data to be visualised that takes them as given and their visualisation as unqualifiedly necessary. Instead of this efficiency- and optimization-driven idea of an image-driven visibility of data, more agile concepts or models should be found that can grasp the process of visualisation more profoundly in terms of its epistemic potential. What is problematised with this conceptual "immobility" is the tendency of the affirmative visualisation model to seem hopeless. Visualisation should rather be understood in its transformative processes, which independently of the object design their own reality and thus their own knowledge, which needs to be reflected accordingly. Therefore, alternative models are needed that attempt to describe the limits and possibilities of the cultural technique of visualisation.
+
+In this context, my ideal of the "other visualising" also concretises itself. The "other" means approaches to the idea of visualisation that, apart from the affirmative visualisation models, are based on the critical reflection of the underlying models of thought. In addition to the critique of established conventions, it is primarily a diagrammatic position that understands visualisations as a projection of models. In contrast to a passive understanding of visualised diagrams as a rigid and (re)clarifying order, the diagrammatic is thought of as an active process that designs new arrangements or models in the relation of structures. What unites all these diagrammatics is that they push a certain structure through the filter of a conceptual model or world order onto its object. It is the purposeful transformation of data into a particular order that can be described as violent. Thus, again, there are at least two types of models that shape the process of visualisations. First, there is the notion of how visualisations are conceived: as an affirmative form of legible visualisation, the structural reading as diagrammatic reordering, or even the cosmogrammatic projection. Second, there is the violent transformation of a data base, shaped via a particular model, that can result in any number of visualisations, depending on which model is chosen.
+
+## Data Visualisation $=$ Intended Violence
+
+As a consequence, I understand data visualisations in their intentional and enforced implementation as intended violence. Data is abstracted from an arbitrary object through a particular model, and then in turn made perceptible through the model of a transformation. In this double model arrangement, the relational aspect of visualisations becomes clear, inscribing itself as a process of projection. Data visualisations do not represent, but rather design their very own images in a cascading transformation of structures. The interpretive directions of this insight are, however, open. A designer or recipient of a visualisation can open up to this circumstance, but these phenomena function intrinsically without this awareness. The model perspective on visualisations is only one possible form of critical questioning. However, it enables diverse moments of insight.
+
+Other models are ultimately intended to give indications of how visualisations are to be conceived as a cultural technique. The goal is not the search for the one visualisation that is to be optimised ever further in its readability and mediation efficiency. Rather, of relevance is an inefficiency that can allow and open up the diversity and complexity of visualisation culture. Instead of the contemporary culture of exclusion by a dominant (and affirmative) model, ideas that deviate from it should also be involved in the creation of visualisations.
+
+
+
diff --git a/papers/IEEE/IEEE 2022/IEEE 2022 Workshop/IEEE 2022 Workshop altVIS/XnsV9ZhsOVc/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE 2022/IEEE 2022 Workshop/IEEE 2022 Workshop altVIS/XnsV9ZhsOVc/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..295e6f8510accb1ec6dbfac96b4a6a5691546c01
--- /dev/null
+++ b/papers/IEEE/IEEE 2022/IEEE 2022 Workshop/IEEE 2022 Workshop altVIS/XnsV9ZhsOVc/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,25 @@
+§ OTHER MODELS FOR DATA VISUALISATIONS
+
+Paul Heinicker
+
+Other models of visualising aim at a (re)formulation of contemporary expectations and narratives concerning data and their visualisations as a very specific model of thinking data visualisation. It is precisely how and with what intention we work on and discuss visualisations that defines the conceptual space we open to this cultural technique. The concept of "other models" first points to the consequences and limitations of these ways of thinking. My positioning of the "other" consists first of the description of what it wants to distinguish itself from. I understand the "other visualising" as a chance to make the normative mode of data visualisation visible and discussable. In the discourse of visualisation, there is not yet an established language for critiquing the expectations of data images. The "other visualising" therefore establishes a negative way of reading the cultural and image phenomenon. As a first concretisation of these models, I formulate in the following a differently directed definition: data visualisation as intended violence.
+
+§ DATA = INTENTION
+
+Ideas and hopes around data visualisations are essentially oriented around two fundamental ideas of data visualisation: data and visualisation. With regard to data, I tend to describe contemporary data narratives using the figure of data exceptionalism as reproducers of a normative model of the imagination, practice, and reflection of data.
+
+The concept of data exceptionalism makes visible a data-positivist perspective, which is essentially defined by the rhetoric of the exception - the data phenomenon as a cultural turning point, a reductionist notion of data - solely numerical and technical, and a data forgetfulness in the sense of forgetting original - non-technical or mathematical - approaches. A potential counter-position aims at broadening this narrowed notion of data, and this broadening has also been done by returning to existing concepts of data. Thus, in my perspective, it is primarily intentionality that characterises data. Data are not natural phenomena, but cultural artefacts of ordering structures. Data are not simply there; rather, they are intentional. They are created from a particular perspective, in an artificial process, and for an application or reception. This data intention can be concretised in the reflection of the models that produce these data. Thus, at least two model applications are found in the intentional use of data. On the one hand, data - defined by me as abstractions - are not to be understood as images of reality, but as conscious projections of one or more models about this reality. On the other hand, I also understand the various modes of data practices as models applied with a purpose. Data exceptionalism is then understood as dealing with data in a particular model, namely in a positivist way. The ideas and intentions about what can be considered or produced as data, and how to work with data, are primarily shaped by models.
+
+Probably the most important insight that comes from considering data exceptionalism is the aspect of modelling. The added value of data does not lie in the longed-for automated analysis of patterns in them, but more tellingly in the reflection of the models they produce. Data are both mirrors and producers of social reality. From this perspective, data are not the cause of social asymmetries, but rather an effect of a particular conception of what to do with the data. Data exceptionalism then only describes a certain model of proceeding in a data-positivist way. The questions about this model, i.e. why and for what purpose data are used, then promise possibly even more epistemic value than the analysis of the data itself. What is needed, according to this line of reasoning, is not another algorithmic, computational, or digital turn, but a return to the ideas, notions, and concepts - in short, the modelling - of data. Data, by definition, are understood as abstractions: not images of reality, but always projections of a model about that reality. The deficiency of data is not that they are reduced in capacity, but that the confidence of completeness is ascribed to them by society.
+
+§ VISUALISATION $=$ VIOLENCE
+
+In relation to the object of visualisation, I distinguish the practice of visualisation in two central forms. In a dichotomous arrangement, I differentiate affirmative and, opposite to that, critical approaches. "Affirmative" I interpret as an attitude toward the data to be visualised that takes them as given and their visualisation as unqualifiedly necessary. Instead of this efficiency- and optimization-driven idea of an image-driven visibility of data, more agile concepts or models should be found that can grasp the process of visualisation more profoundly in terms of its epistemic potential. What is problematised with this conceptual "immobility" is the tendency of the affirmative visualisation model to seem hopeless. Visualisation should rather be understood in its transformative processes, which independently of the object design their own reality and thus their own knowledge, which needs to be reflected accordingly. Therefore, alternative models are needed that attempt to describe the limits and possibilities of the cultural technique of visualisation.
+
+In this context, my ideal of the "other visualising" also concretises itself. The "other" means approaches to the idea of visualisation that, apart from the affirmative visualisation models, are based on the critical reflection of the underlying models of thought. In addition to the critique of established conventions, it is primarily a diagrammatic position that understands visualisations as a projection of models. In contrast to a passive understanding of visualised diagrams as a rigid and (re)clarifying order, the diagrammatic is thought of as an active process that designs new arrangements or models in the relation of structures. What unites all these diagrammatics is that they push a certain structure through the filter of a conceptual model or world order onto its object. It is the purposeful transformation of data into a particular order that can be described as violent. Thus, again, there are at least two types of models that shape the process of visualisations. First, there is the notion of how visualisations are conceived: as an affirmative form of legible visualisation, the structural reading as diagrammatic reordering, or even the cosmogrammatic projection. Second, there is the violent transformation of a data base, shaped via a particular model, that can result in any number of visualisations, depending on which model is chosen.
+
+§ DATA VISUALISATION $=$ INTENDED VIOLENCE
+
+As a consequence, I understand data visualisations in their intentional and enforced implementation as intended violence. Data is abstracted from an arbitrary object through a particular model, and then in turn made perceptible through the model of a transformation. In this double model arrangement, the relational aspect of visualisations becomes clear, inscribing itself as a process of projection. Data visualisations do not represent, but rather design their very own images in a cascading transformation of structures. The interpretive directions of this insight are, however, open. A designer or recipient of a visualisation can open up to this circumstance, but these phenomena function intrinsically without this awareness. The model perspective on visualisations is only one possible form of critical questioning. However, it enables diverse moments of insight.
+
+Other models are ultimately intended to give indications of how visualisations are to be conceived as a cultural technique. The goal is not the search for the one visualisation that is to be optimised ever further in its readability and mediation efficiency. Rather, of relevance is an inefficiency that can allow and open up the diversity and complexity of visualisation culture. Instead of the contemporary culture of exclusion by a dominant (and affirmative) model, ideas that deviate from it should also be involved in the creation of visualisations.
+
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/0a7OXKwmw9/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/0a7OXKwmw9/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..7c32f0fd7cd0cc5f0c97da40ff31398b7c730162
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/0a7OXKwmw9/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,351 @@
+# A Hybrid Approach to Network Intrusion Detection Based On Graph Neural Networks and Transformer Architectures
+
+${1}^{\text{st}}$ Hongrun Zhang, College of Computer Technology and Applications, Qinghai University, Qinghai, China (ys220854040277@qhu.edu.cn)
+
+${2}^{\text{nd}}$ Tengfei Cao, College of Computer Technology and Applications, Qinghai University, Qinghai, China (caotf@qhu.edu.cn)
+
+**Abstract** - In this paper, we propose a Network Intrusion Detection System (NIDS) model named E-T-GraphSAGE (ETG), which fuses Graph Neural Network (GNN) and Transformer techniques. With the widespread adoption of the Internet of Things (IoT) and cloud computing, network structures have become complex and vulnerable. The efficacy of traditional intrusion detection systems is limited in the context of novel and unconventional cyber-attacks. This paper proposes a novel approach to address this challenge. A GNN is used to capture the complex relationships between network nodes and edges, analyze network traffic graphs, and identify anomalous behaviors. By introducing the Transformer, the model enhances its ability to handle long-range dependencies in network flow data and to understand network dynamics at a macro level. The E-T-GraphSAGE (ETG) model optimizes edge features through the self-attention mechanism to exploit the potential of network flow data and improve the accuracy of intrusion detection. The experimental results show that the model outperforms existing techniques in key performance metrics. Tests on several standard datasets (BoT-IoT, NF-BoT-IoT, NF-ToN-IoT) validate the broad applicability and robustness of the ETG model, especially in complex network environments.
+
+Keywords—GNN, GraphSAGE, Transformer, NIDS
+
+## I. INTRODUCTION
+
+With the widespread adoption of the Internet of Things (IoT) and cloud computing, the structure of network systems is becoming more complex, and the types and numbers of devices are increasing dramatically. This environment provides more vulnerabilities and points of entry for cyber attackers, posing serious challenges to traditional network defense systems ${}^{\left\lbrack 1\right\rbrack }$ . Modern network attacks are not only varied - including distributed denial-of-service (DDoS) attacks, malware spread, and data breaches - but also more subtle and adaptable, frequently targeting multiple layers of the network and various nodes. In addition, with the rapid development of attack techniques, new and unknown zero-day vulnerability attacks frequently appear, and these attacks can easily bypass signature-based intrusion detection systems ${}^{\left\lbrack 2\right\rbrack }$ . Therefore, there is a need to develop new detection techniques that not only recognize known attack patterns but can also predict and adapt to unknown threats.
+
+To overcome these limitations, recent research has increasingly focused on leveraging machine learning and deep learning techniques. Among these, Transformer architectures have gained attention for their self-attention mechanism, which effectively captures long-range dependencies in sequential data. Originally developed for natural language processing, Transformers have been successfully adapted for cybersecurity applications, offering the ability to analyze complex interdependencies within network traffic.
+
+Graph neural networks (GNNs), known for their ability to handle graph-structured data, offer significant potential in cybersecurity applications. By capturing the complex relationships between nodes (e.g., IP addresses or devices) and edges (i.e., data transmissions or sessions) in a network, GNNs are able to efficiently map the overall pattern of network behavior. This capability makes GNN particularly suitable for identifying and analyzing complex network intrusions that are difficult to detect through conventional detection means ${}^{\left\lbrack 3\right\rbrack }$ . GNNs can analyze network traffic graphs by representing hosts or servers as nodes and their communications as edges. By learning the normal and abnormal characteristics of these communication patterns, the GNN is able to identify anomalous behavior in the network, such as unauthorized data access or abnormal data traffic. In addition, a key advantage of GNNs is their ability to integrate data from multiple sources and extract deep network characteristics, which is particularly important for detecting advanced persistent threats (APTs) and multi-stage attacks.
+
+GNN not only enhances the detection of known threats, but more importantly, it provides a mechanism to understand and predict new or variant attack behaviors that are difficult to identify with traditional methods. Therefore, the introduction of GNN into network security systems, especially network intrusion detection systems, will greatly enhance the system's ability to defend against complex network threats ${}^{\left\lbrack 4\right\rbrack }$ .
+
+This research aims to develop an enhanced Network Intrusion Detection System (NIDS) by integrating Graph Neural Networks (GNNs) with Transformer architectures. The goal is to improve the efficiency and accuracy of detecting complex and previously unknown attack patterns by leveraging the Transformer's ability to capture long-range dependencies in network traffic. This integration seeks to enhance the model's capability to analyze network flows on both local and global scales, improving overall performance in detecting sophisticated cyber threats.
+
+The proposed study will use a hybrid approach, combining GNNs and Transformers to analyze network traffic. GNNs will be employed to construct graph representations of network entities and interactions, while the Transformer's self-attention mechanism will capture long-range dependencies and global patterns ${}^{\left\lbrack 5\right\rbrack }$ . This integrated model aims to enhance understanding of network dynamics and improve detection and prediction of both known and emerging threats. The model's effectiveness will be evaluated through experiments on benchmark datasets, comparing its performance with existing intrusion detection systems.
+
+
+
+Fig. 1. Network flow data graph structuring.
+
+As shown in Fig. 1, we utilize both GNN and Transformer to encode the raw stream data successively to obtain the desired graph data structure, which is input to the model for training.
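A minimal sketch of this graph structuring step, assuming NetFlow-style records (the host addresses, feature values, and variable names here are invented for illustration; the actual datasets carry richer per-flow features):

```python
import numpy as np

# Toy flow records: (source host, destination host, edge-feature vector).
flows = [
    ("10.0.0.1", "10.0.0.2", [120.0, 3.0]),    # e.g. bytes, packets
    ("10.0.0.2", "10.0.0.3", [4500.0, 40.0]),
    ("10.0.0.1", "10.0.0.3", [60.0, 1.0]),
]

# Endpoints become nodes; each flow becomes a directed edge whose features
# (the per-flow statistics) are attached to the edge rather than the node.
nodes = sorted({h for s, d, _ in flows for h in (s, d)})
index = {h: i for i, h in enumerate(nodes)}
edge_index = np.array([[index[s], index[d]] for s, d, _ in flows]).T
edge_attr = np.array([feat for _, _, feat in flows])
```

The resulting `edge_index` / `edge_attr` pair is the standard input shape for edge-featured GNN layers.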
+
+(1) The core contribution of this research is the development of a NIDS model that combines GNN and Transformer. The edge features optimized by the self-attention mechanism fully exploit the potential of network streaming data and significantly improve the detection accuracy of network intrusion.
+
+(2) Tests on multiple standard datasets show that our model outperforms existing techniques in key performance metrics such as accuracy, recall, and F1 score.
+
+The remainder of the paper details the design and experimental evaluation of the E-T-GraphSAGE (ETG) model. Section II reviews the development of NIDS and research related to GNNs and Transformers. Section III details the model architecture and key technologies. Section IV presents experimental results on a variety of cyberattack datasets and compares them with other methods. The concluding section summarizes the research results and discusses future research directions.
+
+## II. RELATED WORK
+
+In recent years, various approaches have been proposed to enhance the performance of Intrusion Detection Systems (IDS). Alowaidi et al. ${}^{\left\lbrack 6\right\rbrack }$ proposed a hybrid IDS combining Machine Learning (ML) and Deep Learning (DL) techniques, which enhances IDS performance and prediction accuracy while lowering computational costs. However, the model's generalization relies on the diversity and representativeness of the training data; if the training data is biased, real-world performance suffers. Gupta et al. ${}^{\left\lbrack 7\right\rbrack }$ proposed an anomaly-based NIDS; this approach considers multiple performance metrics, along with training time and resource usage, but remains limited by dataset dependency and average generalization capability. Kumar et al. ${}^{\left\lbrack 8\right\rbrack }$ proposed a bi-directional long short-term memory (BiLSTM) based anomaly detection system for Internet of Things (IoT) networks. The BiLSTM model effectively improves accuracy through normalization-based preprocessing and gain-ratio feature selection.
+
+Suárez-Varela et al. ${}^{\left\lbrack 9\right\rbrack }$ introduced the use of GNNs in the modeling, control, and management of communication networks, demonstrated their advantages in terms of generalization capabilities and data-driven solutions, and discussed their potential in network modeling, control, and management. Hnamte et al. ${}^{\left\lbrack {10}\right\rbrack }$ proposed an approach using Deep Convolutional Neural Networks (DCNN) and validated its performance with the InSDN dataset. While the DCNN achieves high accuracy, it demands significant data and computational resources for training.
+
+Kisanga et al. ${}^{\left\lbrack {11}\right\rbrack }$ proposed a new Activity and Event Network (AEN) graph framework that focuses on capturing long-term stealthy threats that are difficult for traditional security tools to detect, and it shows great promise for long-term threat detection in cybersecurity. L et al. ${}^{\left\lbrack {12}\right\rbrack }$ proposed an end-to-end anomalous edge detection method based on unified graph embedding, which enhances the model's ability to learn task-relevant patterns by combining embedding learning and anomaly detection in the same objective function, and accurately estimates the probability distributions of edges from the local structure of the graph to identify anomalous edges. Superior accuracy and scalability are demonstrated on multiple publicly available datasets.
+
+Sun et al. ${}^{\left\lbrack {13}\right\rbrack }$ proposed a framework combining Graph Neural Network (GNN) and Transformer for self-supervised heterogeneous graph representation learning. The Metapath-aware Hop2Token method is designed to efficiently convert neighbors with different hop counts in heterogeneous graphs into Token sequences, reducing the computational complexity in Transformer processing. GTC enhances information fusion, improves learning efficiency, and reduces the demand for computational resources by contrasting learning tasks between graph pattern views and hop count views.
+
+Nguyen et al. ${}^{\left\lbrack {14}\right\rbrack }$ proposed UGformer, a Transformer-based GNN model for learning graph representations. Using unsupervised transductive learning, UGformer can cope with limited category labels; however, despite its sampling mechanism, it may still need further optimization to handle extremely large graph structures.
+
+Unlike previous studies, our method extracts edge features from network flows and develops an E-GraphSAGE model that incorporates Transformer modules. By combining local and global features, it makes full use of the structural and topological information inherent in network flow data to obtain more accurate feature representations and better intrusion detection performance. The E-Transformer-GraphSAGE method introduced in this paper addresses the shortcomings of traditional graph embedding techniques by capturing topological details and edge features in network flow data, leading to more precise detection, while retaining the ability to classify samples with unseen node features. We evaluate our model on three standard NIDS datasets, which verifies its broad applicability, accuracy, and robustness across different network scenarios; it compares favorably with traditional ML methods, especially in complex network environments. Through these improvements, the system's intrusion detection performance is significantly improved, and it can respond effectively to a variety of attacks in complex network environments.
+
+## III. THE PROPOSED METHOD
+
+## A. GraphSAGE
+
+Graph Neural Networks (GNNs) are becoming increasingly popular in machine learning. Their power stems from the effective use of graph-structured data, which is widely available in areas such as social media networks, biological research, and telecommunication systems ${}^{\left\lbrack {15}\right\rbrack }$ . The primary reason for using GNNs in NIDS is their ability to exploit the structural information present in network flows, which can be represented graphically. Although some conventional machine learning approaches also handle graph data, they usually involve intricate pipelines and depend heavily on manually crafted features, making them more cumbersome and less efficient.
+
+GraphSAGE ${}^{\left\lbrack {16}\right\rbrack }$ is an efficient graph neural network technique that generates embedded representations of nodes by sampling and aggregating the features of their neighbors. It is particularly suitable for processing large-scale graph data. The main steps include sampling neighboring nodes, aggregating features, and updating node features, which effectively solve the computation and storage bottlenecks of traditional graph neural networks. As a result, GraphSAGE has been widely used in many fields.
+
+GraphSAGE learns node representations through local aggregation; its core steps are neighbor node sampling, feature aggregation, and node feature update, as shown in Fig. 2.
+
+In neighbor node sampling, a fixed number of neighbors is randomly sampled for each node to reduce computation and storage requirements. Let $v$ be a node in the graph, $N\left( v\right)$ its set of neighbor nodes, and $\widetilde{N}\left( v\right)$ the sampled neighbor set. This process can be represented as:
+
+$$
+\widetilde{N}\left( v\right) = \operatorname{Sample}\left( {N\left( v\right) , K}\right) \tag{1}
+$$
+
+where $K$ denotes the number of neighbor nodes sampled. This phase seeks to manage computational complexity by limiting the number of adjacent nodes for each vertex in extensive graphs.
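As an illustration, the sampling step of Eq. (1) can be sketched in plain Python; `sample_neighbors` is a hypothetical helper (the paper's actual implementation relies on DGL):

```python
import random

def sample_neighbors(neighbors, k, seed=None):
    """Return a fixed-size sample of a node's neighbor set, as in Eq. (1).

    If the node has fewer than k neighbors, sampling is done with
    replacement so every node contributes exactly k messages.
    """
    rng = random.Random(seed)
    pool = list(neighbors)
    if len(pool) >= k:
        return rng.sample(pool, k)
    return [rng.choice(pool) for _ in range(k)]

# Example: node v with 5 neighbors, sampling K = 3 of them
sampled = sample_neighbors({1, 2, 3, 4, 5}, 3, seed=0)
```

Fixing the sample size caps the per-node cost of each layer regardless of how skewed the degree distribution is.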
+
+
+
+Fig. 2. GraphSAGE model diagram
+
+In feature aggregation, a feature aggregation operation is performed on the sampled set of neighbor nodes $\widetilde{N}\left( v\right)$ to generate neighbor feature representations. Common aggregation methods include mean value aggregation, pooling, and LSTM. The following are the formulas for several aggregation methods:
+
+1) Mean aggregation: Mean aggregation computes the average of neighboring node features. Its formula is:
+
+$$
+{h}_{\widetilde{N}\left( v\right) }^{\left( k\right) } = \operatorname{mean}\left( \left\{ {{h}_{u}^{\left( k - 1\right) },\forall u \in \widetilde{N}\left( v\right) }\right\} \right) \tag{2}
+$$
+
+where ${h}_{u}^{\left( k - 1\right) }$ denotes the feature representation of neighbor node $u$ at layer $k - 1$ , and ${h}_{\widetilde{N}\left( v\right) }^{\left( k\right) }$ denotes the representation of node $v$ after aggregating its neighbors' features at layer $k$ .
+
+2) Maximum pooling: Maximum pooling is used to take the maximum value in the features of neighboring nodes. The formula for this is:
+
+$$
+{h}_{\widetilde{N}\left( v\right) }^{\left( k\right) } = \max \left( \left\{ {{h}_{u}^{\left( k - 1\right) },\forall u \in \widetilde{N}\left( v\right) }\right\} \right) \tag{3}
+$$
+
+3) LSTM aggregation: LSTM aggregation applies an LSTM network to the sequence of neighbor node features, with the formula:
+
+$$
+{h}_{\widetilde{N}\left( v\right) }^{\left( k\right) } = \operatorname{LSTM}\left( \left\{ {{h}_{u}^{\left( k - 1\right) },\forall u \in \widetilde{N}\left( v\right) }\right\} \right) \tag{4}
+$$
+
+For the node feature update, the algorithm combines the node's own features with the aggregated neighbor features and updates the node representation through a neural network. A common combination is concatenation followed by a transformation through a fully connected layer. Its formula is:
+
+$$
+{h}_{v}^{\left( k\right) } = \sigma \left( {{W}^{\left( k\right) } \cdot \operatorname{concat}\left( {{h}_{v}^{\left( k - 1\right) },{h}_{\widetilde{N}\left( v\right) }^{\left( k\right) }}\right) }\right) \tag{5}
+$$
+
+where $\sigma$ denotes the activation function (e.g., ReLU), ${W}^{\left( k\right) }$ denotes the weight matrix of the $k$ -th layer, and ${h}_{v}^{\left( k\right) }$ denotes the feature representation of node $v$ in the $k$ -th layer.
+
+In the overall procedure, features are first initialized: each node's feature can be its attribute vector ${x}_{v}$ . Multilayer sampling and aggregation is then performed. At the $k$ -th layer, each node $v$ randomly samples a fixed number $K$ of neighbors to form the sampling set $\widetilde{N}\left( v\right)$ and aggregates their features with the chosen aggregation function (e.g., mean, max pooling, or LSTM) to obtain ${h}_{\widetilde{N}\left( v\right) }^{\left( k\right) }$ . The node's own features are then concatenated with the aggregated neighbor features and nonlinearly transformed through the fully connected layer to obtain the new node representation ${h}_{v}^{\left( k\right) }$ . Finally, after multiple (usually 2 to 3) layers of sampling and aggregation, the embedding ${h}_{v}$ of each node is generated. Through these steps, the GraphSAGE algorithm can efficiently handle large-scale graph data and produce high-quality node embeddings via sampling and aggregation.
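The per-layer update (Eqs. 2 and 5) can be sketched in plain Python with mean aggregation; the weight matrix below is illustrative, not a trained parameter:

```python
def mean_aggregate(neigh_feats):
    """Eq. (2): element-wise mean of the sampled neighbors' feature vectors."""
    n = len(neigh_feats)
    return [sum(f[i] for f in neigh_feats) / n for i in range(len(neigh_feats[0]))]

def relu(x):
    return [max(0.0, v) for v in x]

def matvec(W, x):
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def sage_update(h_v, h_neigh, W):
    """Eq. (5): concatenate self and aggregated features, project, apply ReLU."""
    return relu(matvec(W, h_v + h_neigh))  # list '+' is concatenation here

# Toy example: 2-dim features; W maps the 4-dim concatenation back to 2 dims
h_v = [1.0, 2.0]
agg = mean_aggregate([[2.0, 0.0], [4.0, 2.0]])   # -> [3.0, 1.0]
W = [[1, 0, 0, 0],
     [0, 0, 0, 1]]
h_new = sage_update(h_v, agg, W)                  # -> [1.0, 1.0]
```

Stacking this update two or three times, each time over freshly sampled neighborhoods, yields the final embeddings described above.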
+
+## B. E-Transformer-GraphSAGE Methods
+
+The traditional GraphSAGE method focuses on the analysis and use of node features for node classification but is deficient in handling edge features. The primary objective of a NIDS, however, is to detect and identify malicious traffic. Our study therefore focuses on edge features and improves the GraphSAGE model through an edge embedding method and the introduction of a Transformer layer.
+
+1) E-GraphSAGE: To handle graph-structured data efficiently, we designed and implemented a GraphSAGE layer (SAGELayer). This layer updates each node's representation by aggregating the features of the node's neighbors, capturing the relationships between nodes in the graph. GraphSAGE updates node representations through message passing and update steps, and employs the ReLU activation function to improve the model's nonlinear representation ${}^{\left\lbrack {17}\right\rbrack }$ . The main differences from the original GraphSAGE algorithm lie in the inputs, the message-passing aggregation functions, and the outputs. In the SAGE layer, edge embeddings are incorporated into the message passing to provide richer information. Unlike the traditional GraphSAGE module, the aggregated embedding of the sampled neighboring edges at the $k$ -th layer is generated from edge features, using a mean aggregation function as shown in the following equation.
+
+$$
+{h}_{\widetilde{N}\left( v\right) }^{\left( k\right) } = \operatorname{mean}\left( \left\{ {{e}_{uv}^{\left( k - 1\right) },\forall u \in \widetilde{N}\left( v\right) ,{uv} \in \varepsilon }\right\} \right) \tag{6}
+$$
+
+where ${e}_{uv}^{\left( k - 1\right) }$ is the feature of edge ${uv}$ at layer $k - 1$ within the sampled neighborhood $\widetilde{N}\left( v\right)$ of node $v$ , and the set $\{ \forall u \in \widetilde{N}\left( v\right) ,{uv} \in \varepsilon \}$ denotes the sampled edges within that neighborhood. The edge features of ${uv}$ at the $k$ -th layer are obtained by concatenation, as in the following equation, which gives the final result of the forward propagation phase.
+
+$$
+{h}_{uv}^{k} = \operatorname{CONCAT}\left( {{h}_{u}^{k},{h}_{v}^{k}}\right) ,{uv} \in \mathcal{E} \tag{7}
+$$
+
+In our study, we constructed a two-layer E-GraphSAGE model with each layer consisting of an E-SAGELayer.
+
+Neighboring node features are aggregated to generate each node's embedded representation using mean aggregation, where a node's aggregated feature is the mean of its neighbors' features. The first E-SAGELayer aggregates the input features to produce the first layer of node embeddings; the second layer takes these embeddings as input and aggregates again to generate the final node embeddings. Through this multi-layer aggregation we can capture more complex node characteristics and neighbor relationships, and a Dropout operation is used to avoid overfitting. The advantage of stacking multiple GraphSAGE layers is the ability to capture more complex node relationships and form richer node representations, improving the performance of the model.
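A minimal sketch of the edge-level operations in Eqs. (6) and (7), in plain Python with illustrative feature vectors (the paper's implementation uses DGL message passing):

```python
def aggregate_edge_feats(edge_feats):
    """Eq. (6): mean of the features of the sampled edges incident to node v."""
    n = len(edge_feats)
    return [sum(e[i] for e in edge_feats) / n for i in range(len(edge_feats[0]))]

def edge_embedding(h_u, h_v):
    """Eq. (7): the layer-k embedding of edge uv concatenates its endpoints."""
    return h_u + h_v

# Two sampled edges with 2-dim features feeding into node v
e1, e2 = [1.0, 3.0], [3.0, 5.0]
h_neigh = aggregate_edge_feats([e1, e2])          # -> [2.0, 4.0]

# Final edge embedding from the two endpoint embeddings
h_uv = edge_embedding([0.5, 0.5], [1.5, 1.5])     # -> [0.5, 0.5, 1.5, 1.5]
```

Classifying `h_uv` rather than a node embedding is what turns GraphSAGE into an edge (flow) classifier.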
+
+2) Transformer: As noted above, the primary aim of a NIDS is to detect and identify malicious traffic, which aligns with the edge classification problem in network flow data. Beyond the edge embedding method, we further enhance the GraphSAGE model by introducing a Transformer layer to capture global dependencies.
+
+The Transformer Encoder Layer (TEL) is the basic building block of the Transformer model. It consists mainly of a multi-head attention mechanism (MultiheadAttention), a feed-forward neural network (linear layers), and a normalization layer (LayerNorm), with Dropout applied between the layers to prevent overfitting. In the Transformer encoder layer, the inputs are node features (generated by the SAGE layer); the layer does not explicitly process edge features. Its main function is to capture the dependencies between node features and global information through the multi-head attention mechanism together with the feed-forward network.
+
+a) Multi-head attention: The self-attention mechanism allows the model to capture global dependencies by focusing on all other elements in a sequence while processing each element in the sequence. The multi-head self-attention mechanism improves the model's sensitivity to different features by performing multiple self-attention computations in parallel. The specific formula is as follows:
+
+$$
+\left\{ \begin{matrix} \operatorname{Attention}\left( {Q, K, V}\right) = \operatorname{softmax}\left( \frac{Q{K}^{T}}{\sqrt{{d}_{k}}}\right) V \\ \operatorname{MultiHead}\left( {Q, K, V}\right) = \operatorname{Concat}\left( {{\operatorname{head}}_{1},\cdots ,{\operatorname{head}}_{i},\cdots ,{\operatorname{head}}_{h}}\right) {W}_{O} \end{matrix}\right. \tag{8}
+$$
+
+
+where $\operatorname{Attention}\left( {Q, K, V}\right)$ is the single-head self-attention computation, $Q$ denotes the query matrix, $K$ the key matrix, $V$ the value matrix, and ${d}_{k}$ the key dimension. $\operatorname{MultiHead}\left( {Q, K, V}\right)$ splices the results of the $h$ heads together and obtains the final output through a linear transformation, where ${\text{head}}_{i} = \operatorname{Attention}\left( {{Q}_{i},{K}_{i},{V}_{i}}\right)$ , ${W}_{O} \in {\mathbb{R}}^{h{d}_{k} \times {d}_{\text{model }}}$ is the output weight matrix, and ${d}_{\text{model }}$ is the input feature dimension.
+
+Specifically, the MultiheadAttention mechanism captures the global dependencies of the input data by processing the input data in parallel through multiple Attention Heads. Each Attention Head performs self-attention computation independently, which is able to focus on different features in the input data and enhance the sensitivity of the model to multiple features. The multi-head attention mechanism's output is linked to the feed-forward neural network via a linear transformation.
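The single-head computation inside Eq. (8) can be sketched in plain Python (in practice this is handled by PyTorch's MultiheadAttention module; the toy matrices below are illustrative):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Eq. (8), one head: softmax(Q K^T / sqrt(d_k)) V, for row-major matrices."""
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        weights = softmax(scores)  # one convex weight per position
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Two positions with d_k = 2; each output row mixes the rows of V
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out = attention(Q, K, V)
```

Running $h$ such heads on learned projections of the input and concatenating their outputs gives the multi-head form.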
+
+b) Feed-forward neural network: Feed-forward neural networks (FFN) are fully connected neural networks applied independently at each position in each Transformer coding layer. The specific formula is as follows:
+
+$$
+\operatorname{FFN}\left( x\right) = \max \left( {0, x{W}_{1} + {b}_{1}}\right) {W}_{2} + {b}_{2} \tag{9}
+$$
+
+where ${W}_{1} \in {\mathbb{R}}^{{d}_{\text{model }} \times {d}_{ff}},{W}_{2} \in {\mathbb{R}}^{{d}_{ff} \times {d}_{\text{model }}},{b}_{1} \in {\mathbb{R}}^{{d}_{ff}},{b}_{2} \in {\mathbb{R}}^{{d}_{\text{model }}}$ are learnable parameters and ${d}_{ff}$ is the hidden-layer dimension of the FFN.
+
+The feedforward neural network used in this paper includes two fully connected layers with a ReLU activation function and Dropout applied in between. The first fully connected layer maps the input dimension from the embedded dimension (embed_dim) to a higher hidden dimension (ff_hidden_dim), the ReLU activation function introduces a nonlinear transformation, and the Dropout operation is used to prevent overfitting. The second fully connected layer maps the hidden dimension back to the embedded dimension, thus keeping the dimensionality of the inputs and outputs the same.
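A minimal sketch of Eq. (9) with the two fully connected layers described above (Dropout omitted; the weights are illustrative, not trained):

```python
def ffn(x, W1, b1, W2, b2):
    """Eq. (9): position-wise FFN, max(0, x W1 + b1) W2 + b2, x a row vector."""
    hidden = [max(0.0, sum(xi * W1[i][j] for i, xi in enumerate(x)) + b1[j])
              for j in range(len(b1))]  # expand to d_ff, apply ReLU
    return [sum(hi * W2[i][j] for i, hi in enumerate(hidden)) + b2[j]
            for j in range(len(b2))]    # project back to d_model

# d_model = 2 expanded to d_ff = 3 and projected back
x = [1.0, -1.0]
W1 = [[1.0, 0.0, 1.0],
      [0.0, 1.0, 1.0]]
b1 = [0.0, 0.0, 0.0]
W2 = [[1.0, 0.0],
      [0.0, 1.0],
      [1.0, 1.0]]
b2 = [0.0, 0.0]
y = ffn(x, W1, b1, W2, b2)  # -> [1.0, 0.0]
```

Because the same weights are applied at every position, the FFN transforms each node's feature vector independently of the others.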
+
+c) Normalization layer: The normalization layer is implemented following each sublayer, including both self-attention and the feed-forward neural network, to ensure regularization and stabilize the training process. The specific formulas are as follows:
+
+$$
+\text{ LayerNorm }\left( x\right) = \frac{x - \mu }{\sigma + \varepsilon } \cdot \gamma + \beta \tag{10}
+$$
+
+where $\mu$ and $\sigma$ are the mean and standard deviation of the inputs respectively, $\gamma$ and $\beta$ are the learnable scaling and offset parameters and $\varepsilon$ is a small constant.
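Eq. (10) can be sketched directly (here with scalar $\gamma$ and $\beta$ for simplicity):

```python
import math

def layer_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Eq. (10): normalize x by its mean and standard deviation, then scale/shift."""
    mu = sum(x) / len(x)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in x) / len(x))
    return [(v - mu) / (sigma + eps) * gamma + beta for v in x]

y = layer_norm([1.0, 2.0, 3.0, 4.0])
```

With the default $\gamma = 1$, $\beta = 0$, the output has (approximately) zero mean and unit variance, which is what stabilizes the activations between sublayers.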
+
+Each encoder layer applies Layer Normalization and a Residual Connection around both the multi-head self-attention mechanism and the feed-forward neural network. Layer Normalization helps stabilize and speed up the training process, while the Residual Connection mitigates the vanishing-gradient problem in deep networks.
+
+d) Dropout: Dropout randomly discards a certain percentage of neurons during training to prevent overfitting. By stacking multiple such coding layers, the Transformer model is able to effectively capture the global dependencies of the input data and enhance the model's sensitivity to different features. The multi-head self-attention mechanism in each layer enables the model to focus on different features in the input data, and the feed-forward neural network further processes these features. Through the layer-by-layer processing of the multilayer structure, the model is able to capture more complex and deeper feature relationships in the input data, which improves its performance in various tasks.
+
+## C. NIDS
+
+Fig. 3 shows how the network stream data is constructed as graph data and the propagation process from the source node to the destination node. Fig. 4 shows an overview of our E-Transformer-GraphSAGE NIDS. Initially, a graph is created using the network flow data. Next, the generated network graph is fed into the E-Transformer-GraphSAGE model for supervised training. Edge embeddings are designed to classify network streams into benign or malicious categories. The following subsections explain these three steps in detail.
+
+Netflow Data
+
+| IPV4_SRC_ADDR | L4_SRC_PORT | IPV4_DST_ADDR | L4_DST_PORT | PROTOCOL | L7_PROTO | IN_BYTES | OUT_BYTES | IN_PKTS | OUT_PKTS | TCP_FLAGS | FLOW_DURATION_MILLISECONDS | Label | Attack |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 192.168.1.70 | 46800 | 239.255.255.250 | 15600 | 17 | 0 | 63 | 0 | 1 | 0 | 0 | 0 | 0 | Benign |
| 192.168.1.79 | 41361 | 192.168.1.1 | 15600 | 17 | 0 | 63 | 0 | 1 | 0 | 0 | 0 | 0 | Benign |
| 192.168.1.1 | 60641 | 192.168.1.31 | 53 | 17 | 5 | 100 | 100 | 2 | 2 | 0 | 2 | 1 | Injection |
| 192.168.1.1 | 43803 | 192.168.1.152 | 53 | 17 | 5 | 100 | 100 | 2 | 2 | 0 | 7 | 1 | Scanning |
| 192.168.1.31 | 63898 | 192.168.1.36 | 5355 | 17 | 154 | 122 | 0 | 2 | 0 | 0 | 0 | 0 | Benign |
| 192.168.1.36 | 53153 | 192.168.1.07 | 5355 | 17 | 154 | 122 | 0 | 2 | 0 | 0 | 0 | 0 | Benign |
| 192.168.1.36 | 44248 | 192.168.1.152 | 80 | 6 | 7 | 526 | 2816 | 6 | 6 | 27 | 1021 | 1 | XSS |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
+
+
+
+Fig. 3. Network flow data conversion diagram
+
+
+
+Fig. 4. E-Transformer-graphsage-based Network Intrusion Detection System
+
+1) Graph data structure: NetFlow is a commonly used format for logging network communications in production environments and is the predominant format in Network Intrusion Detection System (NIDS) environments. A flow record typically includes fields identifying the communication's source and destination, along with additional information such as packet and byte counts and flow duration. Graph structures naturally model this type of data. In this study, we use the source IP address, source port, destination IP address, and destination port: the first two fields form a tuple identifying the source node, and the last two form the destination node. The remaining fields are used as features of the corresponding edge, so the graph nodes themselves are featureless; we assign an all-ones vector to every node in the algorithm.
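The construction just described can be sketched as follows; `flows_to_graph` is a hypothetical helper producing the edge lists and features one would hand to DGL, and the field names follow the NetFlow columns of Fig. 3:

```python
def flows_to_graph(flows, node_dim=4):
    """Map NetFlow records to (src, dst) edge lists with edge features.

    Nodes are (IP, port) tuples; they carry no intrinsic features, so each
    is assigned an all-ones vector as described in the text.
    """
    node_ids, src, dst, edge_feats = {}, [], [], []
    for f in flows:
        u = (f["IPV4_SRC_ADDR"], f["L4_SRC_PORT"])
        v = (f["IPV4_DST_ADDR"], f["L4_DST_PORT"])
        for n in (u, v):
            node_ids.setdefault(n, len(node_ids))  # assign ids on first sight
        src.append(node_ids[u])
        dst.append(node_ids[v])
        edge_feats.append([f["IN_BYTES"], f["OUT_BYTES"],
                           f["IN_PKTS"], f["OUT_PKTS"]])
    node_feats = [[1.0] * node_dim for _ in node_ids]
    return src, dst, edge_feats, node_feats

flows = [
    {"IPV4_SRC_ADDR": "192.168.1.1", "L4_SRC_PORT": 60641,
     "IPV4_DST_ADDR": "192.168.1.31", "L4_DST_PORT": 53,
     "IN_BYTES": 100, "OUT_BYTES": 100, "IN_PKTS": 2, "OUT_PKTS": 2},
]
src, dst, edge_feats, node_feats = flows_to_graph(flows)
```

The resulting `src`/`dst` lists and feature tensors are exactly the shape a `dgl.graph` construction expects.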
+
+2) E-Transformer-GraphSAGE: Our proposed model combines the sensitivity of GNNs to local structure with the Transformer's ability to capture global dependencies: the graph data is first processed by E-GraphSAGE to obtain node representations, and a Transformer is then used to further capture global dependencies. During training, we use a weighted cross-entropy loss function (CrossEntropyLoss) to address class imbalance and the Adam optimizer for parameter updates. The algorithm's output is compared with the labels from the NIDS dataset, and the model's trainable parameters are adjusted during backpropagation. After tuning the parameters, performance is evaluated by classifying unseen test samples: the test flow records are converted into graph data structures, edge embeddings are generated by the trained E-Transformer-GraphSAGE layers, and these embeddings are transformed into class probabilities via a Softmax layer. The predicted class probabilities are compared with the actual class labels to compute the classification performance metrics.
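The weighted cross-entropy used to counter class imbalance can be sketched for a single sample; the class weights below are illustrative (in practice this corresponds to PyTorch's CrossEntropyLoss with a `weight` tensor):

```python
import math

def weighted_cross_entropy(logits, label, class_weights):
    """Weighted CE for one sample: -w[label] * log softmax(logits)[label]."""
    m = max(logits)  # stabilize the log-sum-exp
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return -class_weights[label] * (logits[label] - log_z)

# Up-weighting the rare attack class (index 1) relative to benign (index 0):
# the same misprediction costs far more when the true label is the rare class.
loss_rare = weighted_cross_entropy([2.0, 0.5], 1, [1.0, 5.0])
loss_common = weighted_cross_entropy([2.0, 0.5], 0, [1.0, 5.0])
```

Scaling each class's loss term this way pushes the optimizer to pay attention to minority attack classes it would otherwise ignore.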
+
+## IV. EXPERIMENT
+
+In this section, we perform binary-classification and multi-classification comparisons to validate the effectiveness of our algorithm.
+
+## A. Experiment Setting
+
+We implemented the model in Python with PyTorch and DGL. Experiments ran on a server with an Intel(R) Xeon(R) Gold 6242 CPU @ 2.80 GHz (32 cores), a single A100 GPU, and 192 GB of RAM.
+
+## B. Datasets
+
+To evaluate our proposed GNN-based NIDS, we use three publicly available datasets that include various labeled attack flows and benign network flows. The first dataset is BoT-IoT, which is widely used for evaluating ML based network intrusion detection systems in the Internet of Things, with a proprietary format and feature set. The second and third datasets are NF-ToN-IoT and NF-BoT-IoT presented in Netflow format.
+
+1) BoT-IoT datasets: The BoT-IoT dataset ${}^{\left\lbrack {18}\right\rbrack }$ was generated by the Cyber Range Lab at the Australian Centre for Cyber Security (ACCS) to evaluate the performance of cyber security tools. It simulates realistic network environments containing normal traffic and multiple types of malicious traffic, such as DDoS, DoS, reconnaissance, and data theft, for Intrusion Detection System (IDS) training and testing.
+
+2) NF-BoT-IoT datasets: The NF-BoT-IoT dataset ${}^{\left\lbrack {19}\right\rbrack }$ is a NetFlow characterization dataset extracted from the BoT-IoT dataset to provide a more concise representation of network traffic by summarizing IP traffic flows. The dataset includes information such as source and destination IP addresses, ports, packet counts, byte counts, and timestamps, which helps in large-scale data analysis and real-time intrusion detection.
+
+3) NF-ToN-IoT datasets: The NF-ToN-IoT dataset is a NetFlow characterization dataset generated based on the ToN-IoT dataset and contains telemetry and operational network data from Internet of Things (IoT) devices. The dataset provides detailed traffic records that help detect network intrusions and understand traffic patterns in IoT environments and is suitable for IoT security research.
+
+## C. Results Of The Experiment
+
+To assess the effectiveness of the proposed neural network model, we employed the standard metrics outlined in Table I. Here, TP stands for true positives, TN for true negatives, FP for false positives, and FN for false negatives.
+
+TABLE I. EVALUATION INDICATORS
+
+| Metric | Formula |
| --- | --- |
| Accuracy | $\frac{TP + TN}{TP + FP + TN + FN} \times 100\%$ |
| Precision | $\frac{TP}{TP + FP} \times 100\%$ |
| FAR | $\frac{FP}{FP + TN} \times 100\%$ |
| Recall | $\frac{TP}{TP + FN} \times 100\%$ |
| F1-Score | $2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \times 100\%$ |
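For concreteness, the Table I indicators can be computed from confusion-matrix counts as follows (the counts are illustrative):

```python
def metrics(tp, tn, fp, fn):
    """Compute the Table I evaluation indicators from TP/TN/FP/FN counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    far = fp / (fp + tn)                       # false alarm rate
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "far": far, "f1": f1}

m = metrics(tp=90, tn=95, fp=5, fn=10)
```

Because F1 is the harmonic mean of precision and recall, it penalizes a model that buys accuracy on the majority class at the expense of the rare attack class.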
+
+1) Binary classification results: The datasets employed in our experiments carry two layers of labels for each data instance. The first layer indicates whether the network flow is benign or malicious, while the second specifies the attack type. For the binary classification task we use the first layer of labels, and for the multi-class classification task the second layer ${}^{\left\lbrack {20},{21}\right\rbrack }$ . We evaluate across three datasets: BoT-IoT, NF-BoT-IoT, and NF-ToN-IoT. The findings demonstrate that our method performs exceptionally well in binary classification, a key requirement for successful network intrusion detection.
+
+TABLE II. BINARY CLASSIFICATION RESULTS
+
+| Dataset | Accuracy | Precision | F1-Score | Recall | FAR |
| --- | --- | --- | --- | --- | --- |
| BoT-IoT | 99.99% | 1.00 | 1.00 | 99.99% | 0.00% |
| NF-BoT-IoT | 94.52% | 1.00 | 0.99 | 97.32% | 0.24% |
| NF-ToN-IoT | 99.93% | 1.00 | 1.00 | 99.84% | 0.03% |
+
+Table II summarizes our model's performance metrics (accuracy, precision, F1-Score, recall, and False Alarm Rate, FAR) across the three datasets.
+
+In cybersecurity, datasets frequently exhibit an imbalance, with fewer attack samples compared to normal traffic. The F1-Score is particularly important in such scenarios as it balances precision and recall, providing a more accurate assessment of the model's ability to differentiate between benign and malicious traffic than accuracy alone.
+
+Given the importance of precise intrusion detection, particularly in practical applications where the cost of missed detections is high, we prioritize the F1-Score as a more reliable indicator of our model's performance. In the following sections, we will compare our F1-Score with those from other studies to demonstrate how effectively our model handles the challenges of imbalanced datasets, ensuring dependable intrusion detection.
+
+TABLE III. COMPARISON OF BINARY-CLASSIFICATION ALGORITHMS F1
+
+| Method | Dataset | F1 |
| --- | --- | --- |
| Ours | BoT-IoT | 1.00 |
| CatBoost | BoT-IoT | 0.99 |
| Ours | NF-BoT-IoT | 0.99 |
| Extra Tree Classifier | NF-BoT-IoT | 0.97 |
| TS-IDS | NF-BoT-IoT | 0.95 |
| Ours | NF-ToN-IoT | 1.00 |
| Extra Tree Classifier | NF-ToN-IoT | 1.00 |
+
+Table III shows the F1 of our method compared with other algorithms ${}^{\left\lbrack {21},{22}\right\rbrack }$ . The results show that our method achieves F1-Scores that are either similar to or better than those of existing approaches. This indicates that our method performs effectively in both traffic classification and binary network intrusion detection.
+
+The comparable or superior F1-Scores demonstrate that our model is not only accurate in identifying malicious network traffic but also maintains a balanced performance across different datasets. This balance is crucial in practical applications, where high precision and recall are necessary to minimize false positives and ensure reliable intrusion detection.
+
+In summary, the data in Table III confirms that our method is competitive with, and in some cases superior to, other leading algorithms, highlighting its effectiveness in traffic classification and network intrusion detection tasks.
+
+2) Multiclass classification results: Table IV presents the multi-classification results of our method across three standard datasets, where the classifier is tasked with distinguishing between various attack types. The multi-classification problem is more complex than binary classification, as it requires the model to accurately identify not just whether an attack is present, but also to specify the type of attack. The results in Table IV indicate that our model demonstrates strong performance, particularly on the BoT-IoT dataset. This superior performance is indicative of the model's capability to effectively differentiate between the distinct attack types within this dataset.
+
+Table V provides further insight into the model's performance by showing the recall and F1-Score values for different attacks in the multi-classification task, specifically focusing on the ToN-IoT dataset. These metrics are crucial for understanding the model's ability to correctly identify each attack type. High recall values suggest that the model is effective in identifying the majority of true positive instances for most attack types, minimizing the risk of undetected threats. Similarly, strong F1-Score values indicate a good balance between precision and recall, reinforcing the model's robustness in handling diverse attack scenarios.
+
+TABLE IV. COMPARISON OF BOT-IOT AND NF-BOT-IOT MULTI-CLASSIFICATION ALGORITHMS F1
+
+| Class Name | BoT-IoT Recall | BoT-IoT F1-Score | NF-BoT-IoT Recall |
| --- | --- | --- | --- |
| Benign | 100.00% | 0.99 | 100.00% |
| DDoS | 99.99% | 1.00 | 99.99% |
| DoS | 99.99% | 1.00 | 99.99% |
| Reconnaissance | 99.99% | 1.00 | 99.99% |
| Theft | 94.52% | 0.98 | 94.52% |
| Weighted Average | 99.99% | 1.00 | 99.99% |
+
+TABLE V. COMPARISON OF NF-TON-IOT MULTI-CLASSIFICATION ALGORITHMS
+
+| Class Name | Recall | F1-Score |
| --- | --- | --- |
| Benign | 98.33% | 0.99 |
| Backdoor | 98.46% | 0.99 |
| DDoS | 57.47% | 0.73 |
| DoS | 99.72% | 0.46 |
| Injection | 30.59% | 0.46 |
| MITM | 55.02% | 0.25 |
| Ransomware | 80.28% | 0.42 |
| Password | 100.00% | 0.99 |
| Scanning | 25.92% | 0.15 |
| XSS | 40.70% | 0.28 |
| Weighted Average | 68.65% | 0.67 |
+
+However, the experimental plots of confusion matrices shown in Figures 5 and 6 for the NF-BoT-IoT and NF-ToN-IoT datasets reveal some nuances in the model's performance. While the recognition rate is extremely high for several attack types, the model struggles with accurately classifying DDoS attacks. This issue likely stems from the fact that during model training, DDoS and DoS attacks shared similar features, leading to a significant overlap in their learned representations. As a result, the model occasionally misclassifies DDoS attacks as DoS attacks, which suggests that the feature extraction process may need refinement to better distinguish between these two attack types.
+
+The observed difficulty in separating DDoS from DoS attacks highlights a potential area for improvement. One possible solution could involve enhancing the feature engineering process to capture more distinctive characteristics of these attack types. Additionally, adjusting the training process to emphasize the differences between DDoS and DoS attacks, perhaps through the use of more advanced techniques like adversarial training or ensemble learning, could further improve classification accuracy.
+
+In summary, while our model excels in the multi-classification of several attack types, especially within the BoT-IoT dataset, there remains room for improvement in the classification of closely related attacks such as DDoS and DoS. Addressing these challenges will be crucial for further enhancing the model's overall reliability and effectiveness in real-world network security applications.
+
+
+
+Fig. 5. NF-BoT-IoT multiclassification results
+
+
+
+Fig. 6. NF-ToN-IoT multiclassification results
+
+As with binary classification, we compared the performance of our model's Network Intrusion Detection System (NIDS) with other classifiers, as shown in studies ${}^{\left\lbrack {23},{24}\right\rbrack }$ . Table VI presents the results of this comparison, focusing on the multi-classification task.
+
+The findings reveal that our algorithm consistently achieves higher average F1-Score values compared to all existing methods. This is particularly important in multi-classification, where the ability to accurately distinguish between multiple types of network attacks is crucial. The superior F1-Score suggests that our model not only identifies attacks effectively but also excels in correctly classifying the different types of attacks, a challenge where other classifiers often fall short.
+
+These results underscore the effectiveness of our approach in handling the complexities of multi-class network intrusion detection, proving that our model outperforms current leading methods in this critical area.
+
+TABLE VI. COMPARISON OF MULTI-CLASSIFICATION ALGORITHMS F1
+
+| Method | Dataset | W-F1 |
| --- | --- | --- |
| Ours | BoT-IoT | 1.00 |
| CatBoost | BoT-IoT | 0.99 |
| Ours | NF-BoT-IoT | 0.88 |
| Extra Tree Classifier | NF-BoT-IoT | 0.77 |
| TS-IDS | NF-BoT-IoT | 0.83 |
| Ours | NF-ToN-IoT | 0.67 |
| Extra Tree Classifier | NF-ToN-IoT | 0.60 |
+
+Overall, our method demonstrates superior performance compared to other Network Intrusion Detection System (NIDS) approaches across both binary and multi-classification tasks, as evidenced by the results from the three datasets utilized in our study. Our model not only achieves higher accuracy and F1-Scores but also shows remarkable robustness and generalizability. This indicates that it is well-equipped to handle various types of network traffic and detect both known and emerging threats effectively.
+
+The model's ability to consistently outperform other methods highlights its advanced capabilities in accurately identifying and classifying different types of network attacks, whether it's simply distinguishing between benign and malicious traffic or correctly categorizing specific attack types. This robust performance across diverse datasets suggests that our method is adaptable to different network environments and can maintain its effectiveness even when faced with the complexities and variabilities of real-world data.
+
+## V. CONCLUSION AND FUTURE WORK
+
+In this paper, we have introduced a novel GNN-based network intrusion detection method called E-T-GraphSAGE, which enhances attack flow detection by capturing edge features and topology patterns within network flow graphs. Our focus has been on applying E-T-GraphSAGE to detect malicious network flows in the context of network intrusion detection. Experimental evaluations show that our model performs very well on the three NIDS benchmark datasets and generally outperforms currently available network intrusion detection methods. In the future, we plan to build unsupervised graph neural network intrusion detection models, as well as to make the E-T-GraphSAGE model lighter and deploy it on edge network servers, especially small and medium-sized network devices, for more timely network intrusion detection at the edge.
+
+## ACKNOWLEDGMENT
+
+This work is supported by the National Natural Science Foundation of China under Grant 62101299.
+
+## REFERENCES
+
+[1] Chaabouni N, Mosbah M, Zemmari A, et al. Network intrusion detection for IoT security based on learning techniques[J]. IEEE Communications Surveys & Tutorials, 2019, 21(3): 2671-2701.
+
+[2] Naeem H. Analysis of Network Security in IoT-based Cloud Computing Using Machine Learning[J]. International Journal for Electronic Crime Investigation, 2023, 7(2).
+
+[3] Deng X, Zhu J, Pei X, et al. Flow topology-based graph convolutional network for intrusion detection in label-limited IoT networks[J]. IEEE Transactions on Network and Service Management, 2022, 20(1): 684- 696.
+
+[4] Zhong X, Wan G. Six-GraphSecurity: Industrial Internet Intrusion Detection Based On Graph Neural Network[C]//2023 IEEE 7th Information Technology and Mechatronics Engineering Conference (ITOEC). IEEE, 2023, 7: 1340-1344.
+
+[5] Sukhbaatar S, Grave E, Bojanowski P, et al. Adaptive attention span in transformers[J]. arXiv preprint arXiv:1905.07799, 2019.
+
+[6] Alowaidi M. Modified Intrusion Detection Tree with Hybrid Deep Learning Framework based Cyber Security Intrusion Detection Model[J]. International Journal of Advanced Computer Science and Applications, 2022, 13(10).
+
+[7] Gupta N, Jindal V, Bedi P. LIO-IDS: Handling class imbalance using LSTM and improved one-vs-one technique in intrusion detection system[J]. Computer Networks, 2021, 192: 108076.
+
+[8] Kumar P J, Neduncheliyan S, Adnan M M, et al. Anomaly-Based Intrusion Detection System Using Bidirectional Long Short-Term Memory for Internet of Things[C]//2024 Third International Conference on Distributed Computing and Electrical Circuits and Electronics (ICDCECE). IEEE, 2024: 01-04.
+
+[9] Suárez-Varela J, Almasan P, Ferriol-Galmés M, et al. Graph neural networks for communication networks: Context, use cases and opportunities[J]. IEEE network, 2022, 37(3): 146-153.
+
+[10] Hnamte V, Hussain J. Network Intrusion Detection using Deep Convolution Neural Network[C]//2023 4th International Conference for Emerging Technology (INCET), Belgaum, India, 2023: 1-6, doi: 10.1109/INCET57972.2023.10170202.
+
+[11] Kisanga P, Woungang I, Traore I, et al. Network anomaly detection using a graph neural network[C]//2023 International Conference on Computing, Networking and Communications (ICNC). IEEE, 2023: 61-65.
+
+[12] Ouyang L, Zhang Y, Wang Y. Unified graph embedding-based anomalous edge detection[C]//2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020: 1-8.
+
+[13] Sun Y, Zhu D, Wang Y, et al. GTC: GNN-Transformer Co-contrastive Learning for Self-supervised Heterogeneous Graph Representation[J]. arXiv preprint arXiv:2403.15520, 2024.
+
+[14] Dai Quoc Nguyen, Tu Dinh Nguyen, and Dinh Phung. 2022. Universal Graph Transformer Self-Attention Networks. In Companion Proceedings of the Web Conference 2022 (WWW '22 Companion), April 25-29, 2022, Virtual Event, Lyon, France. ACM, New York, NY, USA,
+
+[15] Zhou J, Cui G, Hu S, et al. Graph neural networks: A review of methods and applications[J]. AI open, 2020, 1: 57-81.
+
+[16] Hamilton W, Ying Z, Leskovec J. Inductive representation learning on large graphs[J]. Advances in neural information processing systems, 2017, 30.
+
+[17] Lo W W, Layeghy S, Sarhan M, et al. E-graphsage: A graph neural network based intrusion detection system for iot[C]//NOMS 2022-2022 IEEE/IFIP Network Operations and Management Symposium. IEEE, 2022: 1-9.
+
+[18] Koroniotis N, Moustafa N, Sitnikova E, et al. Towards the development of realistic botnet dataset in the internet of things for network forensic analytics: Bot-iot dataset[J]. Future Generation Computer Systems, 2019, 100: 779-796.
+
+[19] Sarhan M, Layeghy S, Moustafa N, et al. Netflow datasets for machine learning-based network intrusion detection systems[C]//Big Data Technologies and Applications: 10th EAI International Conference, BDTA 2020, and 13th EAI International Conference on Wireless Internet, WiCON 2020, Virtual Event, December 11, 2020, Proceedings 10. Springer International Publishing, 2021: 117-135.
+
+[20] Sarhan M, Layeghy S, Portmann M. Evaluating standard feature sets towards increased generalisability and explainability of ML-based network intrusion detection[J]. Big Data Research, 2022, 30: 100359.
+
+[21] Tanha J, Abdi Y, Samadi N, et al. Boosting methods for multi-class imbalanced data classification: an experimental review[J]. Journal of Big data, 2020, 7: 1-47.
+
+[22] Lawal M A, Shaikh R A, Hassan S R. An anomaly mitigation framework for iot using fog computing[J]. Electronics, 2020, 9(10): 1565.
+
+[23] Churcher A, Ullah R, Ahmad J, et al. An experimental analysis of attack classification using machine learning in IoT networks[J]. Sensors, 2021, 21(2): 446.
+
+[24] Nguyen H, Kashef R. TS-IDS: Traffic-aware self-supervised learning for IoT Network Intrusion Detection[J]. Knowledge-Based Systems, 2023, 279: 110966.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/0a7OXKwmw9/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/0a7OXKwmw9/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..6e45869d9a2a4fdf7a9da5991400f583c8ef9760
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/0a7OXKwmw9/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,445 @@
+§ A HYBRID APPROACH TO NETWORK INTRUSION DETECTION BASED ON GRAPH NEURAL NETWORKS AND TRANSFORMER ARCHITECTURES
+
+1st Hongrun Zhang
+
+College of Computer Technology and Applications
+
+Qinghai University
+
+Qinghai, China
+
+ys220854040277@qhu.edu.cn
+
+2nd Tengfei Cao
+
+College of Computer Technology and Applications
+
+Qinghai University
+
+Qinghai, China
+
+caotf@qhu.edu.cn
+
+Abstract- In this paper, we propose a Network Intrusion Detection System (NIDS) model named E-T-GraphSAGE (ETG), which fuses Graph Neural Network (GNN) and Transformer techniques. With the widespread adoption of the Internet of Things (IoT) and cloud computing, network structures have become complex and vulnerable, and the efficacy of traditional intrusion detection systems is limited in the context of novel and unconventional cyber-attacks. This paper proposes a novel approach to address this challenge. A GNN is used to capture the complex relationships between network nodes and edges, analyze network traffic graphs, and identify anomalous behaviors. By introducing the Transformer, the model enhances its ability to handle long-range dependencies in network flow data and to understand network dynamics at a macro level. The ETG model optimizes edge features through the self-attention mechanism to exploit the potential of network flow data and improve the accuracy of intrusion detection. The experimental results show that the model outperforms existing techniques in key performance metrics. Tests on several standard datasets (BoT-IoT, NF-BoT-IoT, NF-ToN-IoT) validate the broad applicability and robustness of the ETG model, especially in complex network environments.
+
+Keywords—GNN, GraphSAGE, Transformer, NIDS
+
+§ I. INTRODUCTION
+
+With the widespread adoption of the Internet of Things (IoT) and cloud computing, the structure of network systems is becoming more complex, and the types and numbers of devices are increasing dramatically. This environment provides more vulnerabilities and points of entry for cyber attackers, posing serious challenges to traditional cyber defense systems ${}^{\left\lbrack 1\right\rbrack }$ . Modern network attacks are not only varied, ranging from distributed denial-of-service (DDoS) attacks and malware spread to data breaches, but also more subtle and adaptable, frequently targeting multiple layers of the network and various nodes. In addition, with the rapid development of attack techniques, new and unknown zero-day vulnerability attacks appear frequently, and these attacks can easily bypass signature-based intrusion detection systems ${}^{\left\lbrack 2\right\rbrack }$ . Therefore, there is a need to develop new detection techniques that not only recognize known attack patterns but can also predict and adapt to unknown threats.
+
+To overcome these limitations, recent research has increasingly focused on leveraging machine learning and deep learning techniques. Among these, Transformer architectures have gained attention for their self-attention mechanism, which effectively captures long-range dependencies in sequential data. Originally developed for natural language processing, Transformers have been successfully adapted for cybersecurity applications, offering the ability to analyze complex interdependencies within network traffic.
+
+Graph neural networks (GNNs), known for their ability to handle graph-structured data, offer significant potential in cybersecurity applications. By capturing the complex relationships between nodes (e.g., IP addresses or devices) and edges (e.g., data transmissions or sessions) in a network, GNNs are able to efficiently map the overall pattern of network behavior. This capability makes GNNs particularly suitable for identifying and analyzing complex network intrusions that are difficult to detect through conventional means ${}^{\left\lbrack 3\right\rbrack }$ . GNNs can analyze network traffic graphs by representing hosts or servers as nodes and their communications as edges. By learning the normal and abnormal characteristics of these communication patterns, a GNN is able to identify anomalous behavior in the network, such as unauthorized data access or abnormal data traffic. In addition, a key advantage of GNNs is their ability to integrate data from multiple sources and extract deep network characteristics, which is particularly important for detecting advanced persistent threats (APTs) and multi-stage attacks.
+
+GNNs not only enhance the detection of known threats but, more importantly, provide a mechanism to understand and predict new or variant attack behaviors that are difficult to identify with traditional methods. Therefore, introducing GNNs into network security systems, especially network intrusion detection systems, can greatly enhance a system's ability to defend against complex network threats ${}^{\left\lbrack 4\right\rbrack }$ .
+
+This research aims to develop an enhanced Network Intrusion Detection System (NIDS) by integrating Graph Neural Networks (GNNs) with Transformer architectures. The goal is to improve the efficiency and accuracy of detecting complex and previously unknown attack patterns by leveraging the Transformer's ability to capture long-range dependencies in network traffic. This integration seeks to enhance the model's capability to analyze network flows on both local and global scales, improving overall performance in detecting sophisticated cyber threats.
+
+The proposed study will use a hybrid approach, combining GNNs and Transformers to analyze network traffic. GNNs will be employed to construct graph representations of network entities and interactions, while the Transformer's self-attention mechanism will capture long-range dependencies and global patterns ${}^{\left\lbrack 5\right\rbrack }$ . This integrated model aims to enhance understanding of network dynamics and improve detection and prediction of both known and emerging threats. The model's effectiveness will be evaluated through experiments on benchmark datasets, comparing its performance with existing intrusion detection systems.
+
+
+Fig. 1. Network flow data graph structuring.
+
+As shown in Fig. 1, we utilize the GNN and the Transformer to successively encode the raw flow data into the desired graph data structure, which is input to the model for training.
+
+(1) The core contribution of this research is the development of a NIDS model that combines GNN and Transformer. The edge features optimized by the self-attention mechanism fully exploit the potential of network streaming data and significantly improve the detection accuracy of network intrusion.
+
+(2) Tests on multiple standard datasets show that our model outperforms existing techniques in key performance metrics such as accuracy, recall, and F1 score.
+
+The remainder of the paper details the design and experimental evaluation of the E-T-GraphSAGE (ETG) model. Section II reviews the development of NIDS and research related to GNNs and Transformers. Section III details the model architecture and key techniques. Section IV presents experimental results on a variety of cyberattack datasets and compares them with other methods. Section V summarizes the research results and discusses future research directions.
+
+§ II. RELATED WORK
+
+In recent years, various approaches have been proposed to enhance the performance of Intrusion Detection Systems (IDS). Alowaidi et al. ${}^{\left\lbrack 6\right\rbrack }$ proposed a hybrid IDS combining Machine Learning (ML) and Deep Learning (DL) techniques, which enhances IDS performance and prediction accuracy while lowering computational costs. However, the model's generalization relies on the diversity and representativeness of the training data; if the training data is biased, real-world performance suffers. Gupta et al. ${}^{\left\lbrack 7\right\rbrack }$ proposed an anomaly-based NIDS; this approach considers multiple performance metrics, along with training time and resource usage, but remains limited by dataset dependency and average generalization capability. Kumar et al. ${}^{\left\lbrack 8\right\rbrack }$ proposed a bi-directional long short-term memory (BiLSTM) based anomaly detection system for Internet of Things (IoT) networks; the BiLSTM model effectively improves accuracy through normalization-based preprocessing and gain-ratio feature selection.
+
+Suárez-Varela et al. ${}^{\left\lbrack 9\right\rbrack }$ introduced the use of GNNs in the modeling, control, and management of communication networks, demonstrated their advantages in generalization capability and data-driven solutions, and discussed their potential in these areas. Hnamte et al. ${}^{\left\lbrack {10}\right\rbrack }$ proposed an approach using Deep Convolutional Neural Networks (DCNN) and validated its performance with the InSDN dataset. While the DCNN achieves high accuracy, it demands significant data and computational resources for training.
+
+Kisanga et al. ${}^{\left\lbrack {11}\right\rbrack }$ proposed a new Activity and Event Network (AEN) graph framework that focuses on capturing long-term stealthy threats that are difficult to detect with traditional security tools, and is very promising for detecting long-term threats in cybersecurity. Ouyang et al. ${}^{\left\lbrack {12}\right\rbrack }$ proposed an end-to-end anomalous edge detection method based on unified graph embedding, which enhances the model's ability to learn task-relevant patterns by combining embedding learning and anomaly detection into the same objective function, and accurately estimates the probability distributions of edges through the local structure of the graph to identify anomalous edges. Superior accuracy and scalability are demonstrated on multiple publicly available datasets.
+
+Sun et al. ${}^{\left\lbrack {13}\right\rbrack }$ proposed a framework combining Graph Neural Network (GNN) and Transformer for self-supervised heterogeneous graph representation learning. The Metapath-aware Hop2Token method is designed to efficiently convert neighbors with different hop counts in heterogeneous graphs into Token sequences, reducing the computational complexity in Transformer processing. GTC enhances information fusion, improves learning efficiency, and reduces the demand for computational resources by contrasting learning tasks between graph pattern views and hop count views.
+
+Nguyen et al. ${}^{\left\lbrack {14}\right\rbrack }$ proposed a Transformer-based GNN model, UGformer, for learning graph representations. With an unsupervised transductive learning approach, UGformer is able to address the problem of limited category labels; however, despite the sampling mechanism it is designed with, UGformer may still need further optimization to construct graphs from large-scale datasets and handle extremely large graph structures.
+
+Unlike previous studies, our method focuses on extracting edge features from network flows and develops an E-GraphSAGE model that incorporates Transformer modules. By combining local and global features, it makes full use of the structural and topological information inherent in network flow data to achieve better feature representations and network intrusion detection performance. The E-T-GraphSAGE method introduced in this paper addresses the shortcomings of traditional graph embedding techniques by capturing topological details and edge features in network flow data, leading to more precise detection, while retaining the ability to effectively classify samples with unseen node features. Three NIDS standard datasets are used to evaluate our model, verifying its broad applicability, accuracy, and robustness across different types of network scenarios, and its effectiveness in comparison with traditional ML methods, especially in complex network environments. Through these improvements, the performance of our system in network intrusion detection is significantly improved, and it can effectively respond to various network attacks in complex network environments.
+
+§ III.THE PROPOSED METHOD
+
+§ A. GRAPHSAGE
+
+Graph Neural Networks (GNNs) are becoming increasingly popular in the field of machine learning. Their power stems from the effective utilization of graph-structured data, which is widely available in application areas such as social media networks, biological research, and telecommunication systems ${}^{\left\lbrack {15}\right\rbrack }$ . The primary reason for using GNNs in NIDS is their capability to leverage the structural information present in network flows, which can be represented graphically. Although some conventional machine learning approaches also handle graph data, they usually involve intricate processes and depend heavily on manually crafted features, leading to more cumbersome and less efficient applications.
+
+GraphSAGE ${}^{\left\lbrack {16}\right\rbrack }$ is an efficient graph neural network technique that generates embedded representations of nodes by sampling and aggregating the features of their neighbors. It is particularly suitable for processing large-scale graph data. The main steps include sampling neighboring nodes, aggregating features, and updating node features, which effectively solve the computation and storage bottlenecks of traditional graph neural networks. As a result, GraphSAGE has been widely used in many fields.
+
+GraphSAGE learns node representations through local aggregation; its core steps cover three aspects: neighbor node sampling, feature aggregation, and node feature update, as shown in Fig. 2.
+
+In neighbor node sampling, a fixed number of neighbor nodes are randomly sampled for each node to reduce computation and storage requirements. Suppose a node in the graph is $v$ , its set of neighbor nodes is $N\left( v\right)$ , and the set of sampled neighbor nodes is $\widetilde{N}\left( v\right)$ . This process can be represented as:
+
+$$
+\widetilde{N}\left( v\right) = \operatorname{Sample}\left( {N\left( v\right) ,K}\right) \tag{1}
+$$
+
+where $K$ denotes the number of neighbor nodes sampled. This phase seeks to manage computational complexity by limiting the number of adjacent nodes for each vertex in extensive graphs.
+
+
+Fig. 2. GraphSAGE model diagram.
+
+In feature aggregation, a feature aggregation operation is performed on the sampled set of neighbor nodes $\widetilde{N}\left( v\right)$ to generate neighbor feature representations. Common aggregation methods include mean value aggregation, pooling, and LSTM. The following are the formulas for several aggregation methods:
+
+1) Mean aggregation: Mean aggregation computes the average of neighboring node features. Its formula is:
+
+$$
+{h}_{\widetilde{N}\left( v\right) }^{\left( k\right) } = \operatorname{mean}\left( \left\{ {{h}_{u}^{\left( k - 1\right) },\forall u \in \widetilde{N}\left( v\right) }\right\} \right) \tag{2}
+$$
+
+where ${h}_{u}^{\left( k - 1\right) }$ denotes the feature representation of neighboring node $u$ at the $\left( {k - 1}\right)$ -th layer, and ${h}_{\widetilde{N}\left( v\right) }^{\left( k\right) }$ denotes the representation of node $v$ after aggregating the features of its neighboring nodes at the $k$ -th layer.
+
+2) Maximum pooling: Maximum pooling is used to take the maximum value in the features of neighboring nodes. The formula for this is:
+
+$$
+{h}_{\widetilde{N}\left( v\right) }^{\left( k\right) } = \max \left( \left\{ {{h}_{u}^{\left( k - 1\right) },\forall u \in \widetilde{N}\left( v\right) }\right\} \right) \tag{3}
+$$
+
+3) LSTM aggregation: LSTM aggregation applies an LSTM network to the neighbor node features, with the formula:
+
+$$
+{h}_{\widetilde{N}\left( v\right) }^{\left( k\right) } = \operatorname{LSTM}\left( \left\{ {{h}_{u}^{\left( k - 1\right) },\forall u \in \widetilde{N}\left( v\right) }\right\} \right) \tag{4}
+$$
+
+For node feature update, the algorithm combines the node's own features with the aggregated neighbor features and updates the node feature representation through a neural network. A common way of combining is a concatenation operation followed by a transformation through a fully connected layer. Its formula is:
+
+$$
+{h}_{v}^{\left( k\right) } = \sigma \left( {{W}^{\left( k\right) } \cdot \operatorname{concat}\left( {{h}_{v}^{\left( k - 1\right) },{h}_{\widetilde{N}\left( v\right) }^{\left( k\right) }}\right) }\right) \tag{5}
+$$
+
+where $\sigma$ denotes the activation function (e.g., ReLU), ${W}^{\left( k\right) }$ denotes the weight matrix of the $k$ -th layer, and ${h}_{v}^{\left( k\right) }$ denotes the feature representation of node $v$ in the $k$ -th layer.
+
+In the specific process, the features are first initialized; each node's feature can be its attribute vector ${x}_{v}$ . Then multilayer sampling and aggregation are performed: for the $k$ -th layer, each node $v$ randomly samples a fixed number $K$ of neighbors from its neighborhood to form the sampling set $\widetilde{N}\left( v\right)$ , and the features of the neighboring nodes are aggregated using the selected aggregation function (e.g., mean, maximum pooling, or LSTM) to obtain ${h}_{\widetilde{N}\left( v\right) }^{\left( k\right) }$ . Node $v$ 's own features are then concatenated with the aggregated neighbor features and nonlinearly transformed through the fully connected layer to obtain a new node feature representation ${h}_{v}^{\left( k\right) }$ . Finally, after multi-layer (usually 2 to 3 layers) sampling and aggregation operations, the embedding representation ${h}_{v}$ of each node is generated. Through the above steps, the GraphSAGE algorithm can efficiently process large-scale graph data and generate high-quality node embeddings through sampling and aggregation operations.
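The sampling, aggregation, and update steps above can be sketched in a few lines of Python. This is a toy illustration of eqs. (1), (2), and (5) with hypothetical features and placeholder weights, not the paper's implementation:

```python
import random

def relu(v):
    return [max(0.0, x) for x in v]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def mean_agg(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def sage_layer(graph, feats, W, K, rng):
    """One GraphSAGE layer: sample K neighbors (eq. 1), mean-aggregate
    their features (eq. 2), concatenate with the node's own features,
    and apply a linear map plus ReLU (eq. 5)."""
    new_feats = {}
    for v, neighbors in graph.items():
        sampled = rng.sample(neighbors, min(K, len(neighbors)))
        h_nv = mean_agg([feats[u] for u in sampled])
        new_feats[v] = relu(matvec(W, feats[v] + h_nv))
    return new_feats

# Toy 4-node graph with 2-dim features; the weights are placeholders.
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0], 3: [0.5, 0.5]}
W = [[0.5, 0.0, 0.5, 0.0],  # maps the 4-dim concatenation back to 2 dims
     [0.0, 0.5, 0.0, 0.5]]
rng = random.Random(0)
h1 = sage_layer(graph, feats, W, K=2, rng=rng)  # first layer
h2 = sage_layer(graph, h1, W, K=2, rng=rng)     # second layer
```

Stacking the layer twice, as here, gives each embedding a two-hop receptive field, matching the 2-to-3-layer depth described above.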
+
+§ B. E-TRANSFORMER-GRAPHSAGE METHODS
+
+The traditional GraphSAGE method mainly focuses on node features for node classification but is deficient in handling edge features. The primary objective of NIDS is to detect and identify malicious traffic. In our study, we focus on the application of edge features and improve the GraphSAGE model by using an edge embedding method and introducing a Transformer layer.
+
+1) E-GraphSAGE: In order to handle graph-structured data efficiently, we designed and implemented the GraphSAGE layer (SAGELayer). This layer updates the representation of each node by aggregating the features of the node's neighbors to capture the relationships between nodes in the graph. GraphSAGE updates node representations through message passing and update steps, and employs the ReLU activation function to improve the model's nonlinear representation ${}^{\left\lbrack {17}\right\rbrack }$ . The main differences from the original GraphSAGE algorithm lie in the algorithmic inputs, the message-passing aggregation functions, and the outputs. In the SAGE layer, edge embeddings are incorporated into the message passing to provide richer information. Unlike the traditional GraphSAGE module, the aggregated embedding of sampled neighboring edges is generated at the $k$ -th layer from edge features, using a mean aggregation function as shown in the following equation.
+
+$$
+{h}_{\widetilde{N}\left( v\right) }^{\left( k\right) } = \operatorname{mean}\left( \left\{ {{e}_{uv}^{\left( k - 1\right) },\forall u \in \widetilde{N}\left( v\right) ,{uv} \in \varepsilon }\right\} \right) \tag{6}
+$$
+
+where ${e}_{uv}^{\left( k - 1\right) }$ is the feature of the edge ${uv}$ at the $\left( {k - 1}\right)$ -th layer in the sampling neighborhood $\widetilde{N}\left( v\right)$ of node $v$ , and the set $\left\{ {\forall u \in \widetilde{N}\left( v\right) ,{uv} \in \varepsilon }\right\}$ represents the sampled edges within the neighborhood $\widetilde{N}\left( v\right)$ . The edge features of ${uv}$ at the $k$ -th layer are obtained by concatenating the endpoint embeddings via the following equation, which represents the final result of the forward propagation phase.
+
+$$
+{h}_{uv}^{k} = \operatorname{CONCAT}\left( {{h}_{u}^{k},{h}_{v}^{k}}\right) ,{uv} \in \mathcal{E} \tag{7}
+$$
+
+In our study, we constructed a two-layer E-GraphSAGE model with each layer consisting of an E-SAGELayer.
+
+Neighboring node features are aggregated to generate the embedded representation of each node, using a mean aggregation method in which a node's aggregated features are the mean of its neighbors' features. The first E-SAGELayer aggregates the input features to generate the first layer of node embeddings; the second layer takes these embeddings as input and performs aggregation again to generate the final node embeddings. Through this multi-layer aggregation, we are able to capture more complex node characteristics and neighbor relationships. A Dropout operation is used to avoid overfitting. The advantage of stacking multiple GraphSAGE layers is the ability to capture more complex node relationships and form richer node representations, improving the performance of the model.
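The edge-level aggregation of eqs. (6) and (7) can be sketched as follows, with learned weights and nonlinearities omitted for brevity (the edge feature values are hypothetical):

```python
def edge_mean_agg(node, edges, edge_feats):
    # Eq. 6: mean of the features of edges incident to `node`.
    incident = [edge_feats[e] for e in edges if node in e]
    return [sum(col) / len(incident) for col in zip(*incident)]

def e_graphsage_layer(nodes, edges, edge_feats):
    # Node embedding = aggregated incident-edge features (weight matrix
    # and activation omitted to keep the sketch minimal).
    h = {v: edge_mean_agg(v, edges, edge_feats) for v in nodes}
    # Eq. 7: the final edge embedding concatenates both endpoint embeddings.
    return {(u, v): h[u] + h[v] for (u, v) in edges}

nodes = [0, 1, 2]
edges = [(0, 1), (1, 2), (0, 2)]
edge_feats = {(0, 1): [1.0, 0.0], (1, 2): [0.0, 1.0], (0, 2): [1.0, 1.0]}
h_edges = e_graphsage_layer(nodes, edges, edge_feats)  # 4-dim edge embeddings
```

The resulting per-edge vectors are what the downstream classifier scores, turning flow classification into edge classification as described above.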
+
+2) Transformer: As noted above, the primary aim of NIDS is to detect and identify malicious traffic, which aligns with the edge classification problem in network flow classification. Our study therefore emphasizes edge features and enhances the GraphSAGE model by incorporating the edge embedding method and introducing the Transformer layer technique.
+
+The Transformer Encoder Layer (TEL) is the basic component of the Transformer model, which mainly consists of the MultiheadAttention mechanism, Feed-forward Neural Network (Linear Layer), and Normalization Layer (LayerNorm), and Dropout is applied between the layers to prevent overfitting. In the Transformer Encoder Layer, the inputs are node features (generated by the SAGE layer) and this layer does not explicitly process edge features. Its main function is to capture the dependencies between node features and global information through a multi-head attention mechanism along with a feed-forward neural network.
+
+a) Multi-head attention: The self-attention mechanism allows the model to capture global dependencies by focusing on all other elements in a sequence while processing each element in the sequence. The multi-head self-attention mechanism improves the model's sensitivity to different features by performing multiple self-attention computations in parallel. The specific formula is as follows:
+
+$$
+\left\{ \begin{matrix} \operatorname{Attention}\left( {Q,K,V}\right) = \operatorname{softmax}\left( \frac{Q{K}^{T}}{\sqrt{{d}_{k}}}\right) V \\ \operatorname{MultiHead}\left( {Q,K,V}\right) = \operatorname{Concat}\left( {{\operatorname{head}}_{1},\cdots ,{\operatorname{head}}_{i},\cdots ,{\operatorname{head}}_{h}}\right) {W}_{O} \end{matrix}\right. \tag{8}
+$$
+
+
+where $\operatorname{Attention}\left( {Q,K,V}\right)$ is the single-head self-attention computation, $Q$ denotes the query matrix, $K$ the key matrix, $V$ the value matrix, and ${d}_{k}$ the key dimension. $\operatorname{MultiHead}\left( {Q,K,V}\right)$ concatenates the results of the $h$ heads and obtains the final output by a linear transformation, where ${\operatorname{head}}_{i} = \operatorname{Attention}\left( {{Q}_{i},{K}_{i},{V}_{i}}\right)$ , ${W}_{O} \in {\mathbb{R}}^{h{d}_{k} \times {d}_{\text{ model }}}$ is the output weight matrix, and ${d}_{\text{ model }}$ is the input feature dimension.
+
+Specifically, the MultiheadAttention mechanism captures the global dependencies of the input data by processing the input data in parallel through multiple Attention Heads. Each Attention Head performs self-attention computation independently, which is able to focus on different features in the input data and enhance the sensitivity of the model to multiple features. The multi-head attention mechanism's output is linked to the feed-forward neural network via a linear transformation.
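Eq. (8) can be sketched as follows, assuming toy dimensions and random projection matrices; the per-head column-slicing layout is one common convention, not necessarily the paper's:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Single-head scaled dot-product attention (first line of eq. 8).
    d_k = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k)) @ V

def multi_head(X, Wq, Wk, Wv, Wo, h):
    # Project the inputs, split the projections into h heads, attend per
    # head, concatenate, and apply the output projection Wo (eq. 8).
    d_head = Wq.shape[1] // h
    heads = []
    for i in range(h):
        sl = slice(i * d_head, (i + 1) * d_head)
        heads.append(attention(X @ Wq[:, sl], X @ Wk[:, sl], X @ Wv[:, sl]))
    return np.concatenate(heads, axis=-1) @ Wo

rng = np.random.default_rng(0)
n, d_model, h = 5, 8, 2                 # toy sizes: 5 node embeddings, 2 heads
X = rng.normal(size=(n, d_model))       # node features from the SAGE layers
Wq, Wk, Wv, Wo = (rng.normal(size=(d_model, d_model)) for _ in range(4))
out = multi_head(X, Wq, Wk, Wv, Wo, h)  # shape (n, d_model)
```

Each row of `out` mixes information from all other positions, which is how the layer captures global dependencies across the node features.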
+
+b) Feed-forward neural network: Feed-forward neural networks (FFN) are fully connected neural networks applied independently at each position in each Transformer coding layer. The specific formula is as follows:
+
+$$
+\operatorname{FFN}\left( x\right) = \max \left( {0,x{W}_{1} + {b}_{1}}\right) {W}_{2} + {b}_{2} \tag{9}
+$$
+
+where ${W}_{1} \in {\mathbb{R}}^{{d}_{\text{ model }} \times {d}_{ff}}$ , ${W}_{2} \in {\mathbb{R}}^{{d}_{ff} \times {d}_{\text{ model }}}$ , ${b}_{1} \in {\mathbb{R}}^{{d}_{ff}}$ , and ${b}_{2} \in {\mathbb{R}}^{{d}_{\text{ model }}}$ are learnable parameters, and ${d}_{ff}$ is the hidden-layer dimension of the FFN.
+
+The feedforward neural network used in this paper includes two fully connected layers with a ReLU activation function and Dropout applied in between. The first fully connected layer maps the input dimension from the embedded dimension (embed_dim) to a higher hidden dimension (ff_hidden_dim), the ReLU activation function introduces a nonlinear transformation, and the Dropout operation is used to prevent overfitting. The second fully connected layer maps the hidden dimension back to the embedded dimension, thus keeping the dimensionality of the inputs and outputs the same.
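A minimal NumPy sketch of Eq. (9) with the dropout placement described above (the dimension names `embed_dim` and `ff_hidden_dim` follow the text; the random weights are placeholders):

```python
import numpy as np

def ffn(x, W1, b1, W2, b2, dropout_p=0.1, training=False, rng=None):
    # Eq. (9): FFN(x) = max(0, x W1 + b1) W2 + b2, with inverted dropout
    # applied between the two layers during training only.
    h = np.maximum(0.0, x @ W1 + b1)                       # ReLU
    if training:
        mask = (rng.random(h.shape) >= dropout_p) / (1 - dropout_p)
        h = h * mask
    return h @ W2 + b2

embed_dim, ff_hidden_dim = 8, 32
rng = np.random.default_rng(0)
W1 = rng.standard_normal((embed_dim, ff_hidden_dim))
b1 = np.zeros(ff_hidden_dim)
W2 = rng.standard_normal((ff_hidden_dim, embed_dim))
b2 = np.zeros(embed_dim)
x = rng.standard_normal((5, embed_dim))
y = ffn(x, W1, b1, W2, b2)
print(y.shape)  # (5, 8): input and output dimensions match
```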
+
+c) Normalization layer: The normalization layer is implemented following each sublayer, including both self-attention and the feed-forward neural network, to ensure regularization and stabilize the training process. The specific formulas are as follows:
+
+$$
+\text{ LayerNorm }\left( x\right) = \frac{x - \mu }{\sigma + \varepsilon } \cdot \gamma + \beta \tag{10}
+$$
+
+where $\mu$ and $\sigma$ are the mean and standard deviation of the inputs respectively, $\gamma$ and $\beta$ are the learnable scaling and offset parameters and $\varepsilon$ is a small constant.
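Eq. (10) in NumPy, normalizing over the feature dimension (initializing $\gamma$ to ones and $\beta$ to zeros is the conventional choice, assumed here):

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    # Eq. (10): normalize each position over the feature dimension,
    # then apply the learnable scale gamma and offset beta.
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sigma + eps) * gamma + beta

x = np.array([[1.0, 2.0, 3.0, 4.0]])
y = layer_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(y)  # zero-mean, unit-variance features at each position
```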
+
+Each coding layer applies Layer Normalization and a residual connection around both the multi-head self-attention mechanism and the feed-forward neural network. Layer Normalization helps to stabilize and speed up the training process, while the residual connection helps to mitigate vanishing gradients in deep networks.
+
+d) Dropout: Dropout randomly discards a certain percentage of neurons during training to prevent overfitting. By stacking multiple such coding layers, the Transformer model is able to effectively capture the global dependencies of the input data and enhance the model's sensitivity to different features. The multi-head self-attention mechanism in each layer enables the model to focus on different features in the input data, and the feed-forward neural network further processes these features. Through the layer-by-layer processing of the multilayer structure, the model is able to capture more complex and deeper feature relationships in the input data, which improves its performance in various tasks.
+
+§ C. NIDS
+
+Fig. 3 shows how the network stream data is constructed as graph data and the propagation process from the source node to the destination node. Fig. 4 shows an overview of our E-Transformer-GraphSAGE NIDS. Initially, a graph is created using the network flow data. Next, the generated network graph is fed into the E-Transformer-GraphSAGE model for supervised training. Edge embeddings are designed to classify network streams into benign or malicious categories. The following subsections explain these three steps in detail.
+
+Netflow Data (example records):
+
+| IPV4_SRC_ADDR | L4_SRC_PORT | IPV4_DST_ADDR | L4_DST_PORT | PROTOCOL | L7_PROTO | IN_BYTES | OUT_BYTES | IN_PKTS | OUT_PKTS | TCP_FLAGS | FLOW_DURATION_MILLISECONDS | Label | Attack |
+|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
+| 192.168.1.70 | 46800 | 239.255.255.250 | 15600 | 17 | 0 | 63 | 0 | 1 | 0 | 0 | 0 | 0 | Benign |
+| 192.168.1.79 | 41361 | 192.168.1.1 | 15600 | 17 | 0 | 63 | 0 | 1 | 0 | 0 | 0 | 0 | Benign |
+| 192.168.1.1 | 60641 | 192.168.1.31 | 53 | 17 | 5 | 100 | 100 | 2 | 2 | 0 | 2 | 1 | Injection |
+| 192.168.1.1 | 43803 | 192.168.1.152 | 53 | 17 | 5 | 100 | 100 | 2 | 2 | 0 | 7 | 1 | Scanning |
+| 192.168.1.31 | 63898 | 192.168.1.36 | 5355 | 17 | 154 | 122 | 0 | 2 | 0 | 0 | 0 | 0 | Benign |
+| 192.168.1.36 | 53153 | 192.168.1.7 | 5355 | 17 | 154 | 122 | 0 | 2 | 0 | 0 | 0 | 0 | Benign |
+| 192.168.1.36 | 44248 | 192.168.1.152 | 80 | 6 | 7 | 526 | 2816 | 6 | 6 | 27 | 1021 | 1 | XSS |
+| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
+<graphics>
+
+Fig. 3. Conversion of network flow data into graph data
+
+<graphics>
+
+Fig. 4. E-Transformer-GraphSAGE-based Network Intrusion Detection System
+
+1) Graph data structure: NetFlow is a commonly used format for logging network communications in production environments and is the predominant format in Network Intrusion Detection System (NIDS) environments. A flow record typically includes fields that identify the communication's source and destination, along with additional information like packet and byte counts, and flow duration. Graph structures naturally model this type of data. In this study, we use the source IP address, source port, destination IP address, and destination port. The first two fields form a 2-tuple identifying the source node, and the last two form the destination node. The remaining data are used as features for that edge, making the graph nodes featureless; we assign a vector of all 1's to every node in the algorithm.
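The node/edge construction above can be sketched without any graph library; the field names follow the NetFlow schema and the two records are illustrative:

```python
def build_flow_graph(records):
    # Endpoints (IP, port) become nodes; each flow record becomes a
    # directed edge whose remaining fields are the edge features.
    node_ids, src, dst, edge_feats = {}, [], [], []

    def nid(endpoint):
        return node_ids.setdefault(endpoint, len(node_ids))

    for r in records:
        src.append(nid((r["IPV4_SRC_ADDR"], r["L4_SRC_PORT"])))
        dst.append(nid((r["IPV4_DST_ADDR"], r["L4_DST_PORT"])))
        edge_feats.append([r["IN_BYTES"], r["OUT_BYTES"],
                           r["IN_PKTS"], r["OUT_PKTS"]])
    # Nodes are featureless: assign an all-ones vector to each node.
    node_feats = [[1.0] * 4 for _ in node_ids]
    return src, dst, edge_feats, node_feats

records = [
    {"IPV4_SRC_ADDR": "192.168.1.70", "L4_SRC_PORT": 46800,
     "IPV4_DST_ADDR": "239.255.255.250", "L4_DST_PORT": 15600,
     "IN_BYTES": 63, "OUT_BYTES": 0, "IN_PKTS": 1, "OUT_PKTS": 0},
    {"IPV4_SRC_ADDR": "192.168.1.1", "L4_SRC_PORT": 60641,
     "IPV4_DST_ADDR": "192.168.1.31", "L4_DST_PORT": 53,
     "IN_BYTES": 100, "OUT_BYTES": 100, "IN_PKTS": 2, "OUT_PKTS": 2},
]
src, dst, edge_feats, node_feats = build_flow_graph(records)
print(len(node_feats), len(edge_feats))  # 4 nodes, 2 edges
```

The resulting `(src, dst)` lists and feature tensors map directly onto a DGL-style graph object for the model.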
+
+2) E-Transformer-GraphSAGE: Our proposed model combines the sensitivity of GNNs to local structures with the ability of the Transformer to capture global dependencies by first processing the graph data through E-GraphSAGE to obtain node representations, then utilizing the Transformer to further capture global dependencies. During training, we utilize a weighted cross-entropy loss function to address class imbalance and the Adam optimizer for parameter updates. The algorithm's output is compared with the labels from the NIDS dataset, and the model's trainable parameters are adjusted in the backpropagation phase. After tuning the model parameters during training, the performance of the model can be evaluated by classifying unseen test samples. The process involves converting the test stream records into graph data structures. Edge embeddings are then generated using the trained E-Transformer-GraphSAGE layer and transformed into class probabilities via the Softmax layer. The predicted class probabilities are compared with the actual class labels to evaluate the classification performance metrics.
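The weighted cross-entropy loss used against class imbalance can be sketched as follows; inverse-frequency class weights and the weighted-mean reduction (as in PyTorch's weighted `CrossEntropyLoss`) are assumed choices:

```python
import numpy as np

def weighted_cross_entropy(logits, labels, class_weights):
    # Per-sample cross-entropy scaled by its class weight, averaged
    # by the sum of the applied weights (weighted-mean reduction).
    z = logits - logits.max(axis=1, keepdims=True)          # stable log-softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    w = class_weights[labels]
    return -(w * log_probs[np.arange(len(labels)), labels]).sum() / w.sum()

logits = np.array([[2.0, 0.5], [0.2, 1.5], [1.8, 0.1]])
labels = np.array([0, 1, 0])
counts = np.bincount(labels)
weights = counts.sum() / (len(counts) * counts)  # inverse-frequency weights
loss = weighted_cross_entropy(logits, labels, weights)
print(float(loss))
```

Upweighting the minority class makes its misclassifications cost more, which is the point of the weighting in imbalanced NIDS data.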
+
+§ IV. EXPERIMENT
+
+In this section, we perform binary classification and multi-classification task comparisons to validate the effectiveness of our algorithm.
+
+§ A. EXPERIMENT SETTING
+
+We implemented the model using Python, PyTorch, and DGL. Experiments were run on a server with an Intel(R) Xeon(R) Gold 6242 CPU @ 2.80GHz (32 cores), a single A100 GPU, and 192 GB of RAM.
+
+§ B. DATASETS
+
+To evaluate our proposed GNN-based NIDS, we use three publicly available datasets that include various labeled attack flows and benign network flows. The first dataset is BoT-IoT, which is widely used for evaluating ML-based network intrusion detection systems in the Internet of Things and has a proprietary format and feature set. The second and third datasets are NF-ToN-IoT and NF-BoT-IoT, presented in NetFlow format.
+
+1) BoT-IoT datasets: The BoT-IoT dataset ${}^{\left\lbrack {18}\right\rbrack }$ was generated by the Cyber Range Lab at the Australian Center for Cyber Security (ACCS) to evaluate the performance of cyber security tools. It simulates real network environments containing normal traffic and multiple types of malicious traffic such as DDoS, DoS, reconnaissance, and data theft for Intrusion Detection System (IDS) training and testing.
+
+2) NF-BoT-IoT datasets: The NF-BoT-IoT dataset ${}^{\left\lbrack {19}\right\rbrack }$ is a NetFlow characterization dataset extracted from the BoT-IoT dataset to provide a more concise representation of network traffic by summarizing IP traffic flows. The dataset includes information such as source and destination IP addresses, ports, packet counts, byte counts, and timestamps, which helps in large-scale data analysis and real-time intrusion detection.
+
+3) NF-ToN-IoT datasets: The NF-ToN-IoT dataset is a NetFlow characterization dataset generated based on the ToN-IoT dataset and contains telemetry and operational network data from Internet of Things (IoT) devices. The dataset provides detailed traffic records that help detect network intrusions and understand traffic patterns in IoT environments and is suitable for IoT security research.
+
+§ C. RESULTS OF THE EXPERIMENT
+
+To assess the effectiveness of the proposed neural network model, we employed the standard metrics outlined in Table I. Here, TP stands for true positives, TN for true negatives, FP for false positives, and FN for false negatives.
+
+TABLE I. EVALUATION INDICATORS
+
+| Metric | Formula |
+|---|---|
+| Accuracy | $\frac{TP + TN}{TP + FP + TN + FN} \times 100\%$ |
+| Precision | $\frac{TP}{TP + FP} \times 100\%$ |
+| FAR | $\frac{FP}{FP + TN} \times 100\%$ |
+| Recall | $\frac{TP}{TP + FN} \times 100\%$ |
+| F1-Score | $2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \times 100\%$ |
+
+1) Binary classification results: The datasets employed in our experiments contain dual-layer labels for each data instance. The first layer indicates whether the network flow is benign or non-benign, while the second layer specifies the attack type. For the binary classification task we use the first layer of labels, and for the multi-class classification task we use the second layer ${}^{\left\lbrack {20},{21}\right\rbrack }$ . The findings demonstrate that our method performs exceptionally well in binary classification across the three datasets (BoT-IoT, NF-BoT-IoT, and NF-ToN-IoT), a key factor for successful network intrusion detection.
+
+TABLE II. BINARY CLASSIFICATION RESULTS
+
+| Dataset | Accuracy | Precision | F1-Score | Recall | FAR |
+|---|---|---|---|---|---|
+| BoT-IoT | 99.99% | 1.00 | 1.00 | 99.99% | 0.00% |
+| NF-BoT-IoT | 94.52% | 1.00 | 0.99 | 97.32% | 0.24% |
+| NF-ToN-IoT | 99.93% | 1.00 | 1.00 | 99.84% | 0.03% |
+
+Table II summarizes our model's performance metrics (accuracy, precision, F1-Score, recall, and False Alarm Rate (FAR)) on the three datasets.
+
+In cybersecurity, datasets frequently exhibit an imbalance, with fewer attack samples compared to normal traffic. The F1-Score is particularly important in such scenarios as it balances precision and recall, providing a more accurate assessment of the model's ability to differentiate between benign and malicious traffic than accuracy alone.
+
+Given the importance of precise intrusion detection, particularly in practical applications where the cost of missed detections is high, we prioritize the F1-Score as a more reliable indicator of our model's performance. In the following sections, we will compare our F1-Score with those from other studies to demonstrate how effectively our model handles the challenges of imbalanced datasets, ensuring dependable intrusion detection.
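The Table I metrics can be computed directly from confusion counts; a minimal sketch (the counts are illustrative):

```python
def nids_metrics(tp, fp, tn, fn):
    # Formulas from Table I; F1 balances precision and recall, which
    # matters when attack samples are scarce relative to benign traffic.
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    far = fp / (fp + tn)                       # false alarm rate
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "far": far, "f1": f1}

m = nids_metrics(tp=50, fp=10, tn=30, fn=10)
print({k: round(v, 4) for k, v in m.items()})
# accuracy 0.8, precision ~0.8333, recall ~0.8333, far 0.25, f1 ~0.8333
```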
+
+TABLE III. COMPARISON OF BINARY-CLASSIFICATION ALGORITHMS F1
+
+| Method | Dataset | F1 |
+|---|---|---|
+| Ours | BoT-IoT | 1.00 |
+| CatBoost | BoT-IoT | 0.99 |
+| Ours | NF-BoT-IoT | 0.99 |
+| Extra Tree Classifier | NF-BoT-IoT | 0.97 |
+| TS-IDS | NF-BoT-IoT | 0.95 |
+| Ours | NF-ToN-IoT | 1.00 |
+| Extra Tree Classifier | NF-ToN-IoT | 1.00 |
+
+Table III shows the F1 of our method compared with other algorithms ${}^{\left\lbrack {21},{22}\right\rbrack }$ . The results show that our method achieves F1-Scores that are either similar to or better than those of existing approaches. This indicates that our method performs effectively in both traffic classification and binary network intrusion detection.
+
+The comparable or superior F1-Scores demonstrate that our model is not only accurate in identifying malicious network traffic but also maintains a balanced performance across different datasets. This balance is crucial in practical applications, where high precision and recall are necessary to minimize false positives and ensure reliable intrusion detection.
+
+In summary, the data in Table III confirms that our method is competitive with, and in some cases superior to, other leading algorithms, highlighting its effectiveness in traffic classification and network intrusion detection tasks.
+
+2) Multiclass classification results: Table IV presents the multi-classification results of our method across three standard datasets, where the classifier is tasked with distinguishing between various attack types. The multi-classification problem is more complex than binary classification, as it requires the model to accurately identify not just whether an attack is present, but also to specify the type of attack. The results in Table IV indicate that our model demonstrates strong performance, particularly on the BoT-IoT dataset. This superior performance is indicative of the model's capability to effectively differentiate between the distinct attack types within this dataset.
+
+Table V provides further insight into the model's performance by showing the recall and F1-Score values for different attacks in the multi-classification task, specifically focusing on the ToN-IoT dataset. These metrics are crucial for understanding the model's ability to correctly identify each attack type. High recall values suggest that the model is effective in identifying the majority of true positive instances for most attack types, minimizing the risk of undetected threats. Similarly, strong F1-Score values indicate a good balance between precision and recall, reinforcing the model's robustness in handling diverse attack scenarios.
+
+TABLE IV. COMPARISON OF BOT-IOT AND NF-BOT-IOT MULTI-CLASSIFICATION ALGORITHMS F1
+
+| BoT-IoT Class Name | Recall | F1-Score | NF-BoT-IoT Class Name | Recall |
+|---|---|---|---|---|
+| Benign | 100.00% | 0.99 | Benign | 100.00% |
+| DDoS | 99.99% | 1.00 | DDoS | 99.99% |
+| DoS | 99.99% | 1.00 | DoS | 99.99% |
+| Reconnaissance | 99.99% | 1.00 | Reconnaissance | 99.99% |
+| Theft | 94.52% | 0.98 | Theft | 94.52% |
+| Weighted Average | 99.99% | 1.00 | Weighted Average | 99.99% |
+
+TABLE V. COMPARISON OF NF-TON-IOT MULTI-CLASSIFICATION ALGORITHMS
+
+| Class Name | Recall | F1-Score |
+|---|---|---|
+| Benign | 98.33% | 0.99 |
+| Backdoor | 98.46% | 0.99 |
+| DDoS | 57.47% | 0.73 |
+| DoS | 99.72% | 0.46 |
+| Injection | 30.59% | 0.46 |
+| MITM | 55.02% | 0.25 |
+| Ransomware | 80.28% | 0.42 |
+| Password | 100.00% | 0.99 |
+| Scanning | 25.92% | 0.15 |
+| XSS | 40.70% | 0.28 |
+| Weighted Average | 68.65% | 0.67 |
+
+However, the experimental plots of confusion matrices shown in Figures 5 and 6 for the NF-BoT-IoT and NF-ToN-IoT datasets reveal some nuances in the model's performance. While the recognition rate is extremely high for several attack types, the model struggles with accurately classifying DDoS attacks. This issue likely stems from the fact that during model training, DDoS and DoS attacks shared similar features, leading to a significant overlap in their learned representations. As a result, the model occasionally misclassifies DDoS attacks as DoS attacks, which suggests that the feature extraction process may need refinement to better distinguish between these two attack types.
+
+The observed difficulty in separating DDoS from DoS attacks highlights a potential area for improvement. One possible solution could involve enhancing the feature engineering process to capture more distinctive characteristics of these attack types. Additionally, adjusting the training process to emphasize the differences between DDoS and DoS attacks, perhaps through the use of more advanced techniques like adversarial training or ensemble learning, could further improve classification accuracy.
+
+In summary, while our model excels in the multi-classification of several attack types, especially within the BoT-IoT dataset, there remains room for improvement in the classification of closely related attacks such as DDoS and DoS. Addressing these challenges will be crucial for further enhancing the model's overall reliability and effectiveness in real-world network security applications.
+
+<graphics>
+
+Fig. 5. NF-BoT-IoT multi-classification results
+
+<graphics>
+
+Fig. 6. NF-ToN-IoT multi-classification results
+
+As with binary classification, we compared the performance of our model's Network Intrusion Detection System (NIDS) with other classifiers, as shown in studies ${}^{\left\lbrack {23},{24}\right\rbrack }$ . Table VI presents the results of this comparison, focusing on the multi-classification task.
+
+The findings reveal that our algorithm consistently achieves higher average F1-Score values compared to all existing methods. This is particularly important in multi-classification, where the ability to accurately distinguish between multiple types of network attacks is crucial. The superior F1-Score suggests that our model not only identifies attacks effectively but also excels in correctly classifying the different types of attacks, a challenge where other classifiers often fall short.
+
+These results underscore the effectiveness of our approach in handling the complexities of multi-class network intrusion detection, proving that our model outperforms current leading methods in this critical area.
+
+TABLE VI. COMPARISON OF MULTI-CLASSIFICATION ALGORITHMS F1
+
+| Method | Dataset | W-F1 |
+|---|---|---|
+| Ours | BoT-IoT | 1.00 |
+| CatBoost | BoT-IoT | 0.99 |
+| Ours | NF-BoT-IoT | 0.88 |
+| Extra Tree Classifier | NF-BoT-IoT | 0.77 |
+| TS-IDS | NF-BoT-IoT | 0.83 |
+| Ours | NF-ToN-IoT | 0.67 |
+| Extra Tree Classifier | NF-ToN-IoT | 0.60 |
+
+Overall, our method demonstrates superior performance compared to other Network Intrusion Detection System (NIDS) approaches across both binary and multi-classification tasks, as evidenced by the results from the three datasets utilized in our study. Our model not only achieves higher accuracy and F1-Scores but also shows remarkable robustness and generalizability. This indicates that it is well-equipped to handle various types of network traffic and detect both known and emerging threats effectively.
+
+The model's ability to consistently outperform other methods highlights its advanced capabilities in accurately identifying and classifying different types of network attacks, whether it's simply distinguishing between benign and malicious traffic or correctly categorizing specific attack types. This robust performance across diverse datasets suggests that our method is adaptable to different network environments and can maintain its effectiveness even when faced with the complexities and variabilities of real-world data.
+
+§ V. CONCLUSION AND FUTURE WORK
+
+In this paper, we have introduced a novel GNN-based network intrusion detection method called E-T-GraphSAGE, which has enhanced attack flow detection by capturing edge features and topology patterns within network flow graphs. Our focus has been on applying E-T-GraphSAGE to detect malicious network flows in the context of network intrusion detection. Experimental evaluations have shown that our model performs very well on the three NIDS benchmark datasets and generally outperforms currently available network intrusion detection methods. In the future, we plan to build unsupervised graph neural network intrusion detection models, as well as lighten the E-T-GraphSAGE model and apply it to edge network servers, especially small and medium-sized network devices, for better timely network intrusion detection at the edge.
+
+§ ACKNOWLEDGMENT
+
+This work is supported by the National Natural Science Foundation of China under Grant 62101299.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/3KOwuI0B5z/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/3KOwuI0B5z/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..bf7bb838fcebc0da6d482cf3b1c7c9817b84e3e1
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/3KOwuI0B5z/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,577 @@
+# Distributed Unknown Input Observer-Based Global Fault-Tolerant Average Consensus Control for Linear Multi-Agent Systems
+
+Ximing Yang
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu 611731, China
+
+yxm961115123@163.com
+
+Tieshan Li
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu 611731, China
+
+tieshanli@126.com
+
+Yue Long
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu 611731, China
+
+longyue@uestc.edu.cn
+
+Hanqing Yang
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu 611731, China
+
+hqyang5517@uestc.edu.cn
+
+${Abstract}$ -The paper mainly investigates the distributed unknown input observer-based global fault-tolerant average consensus control problem for multi-agent systems (MASs). First, a distributed unknown input observer based on relative estimation error is proposed, which can effectively reduce the impact of external disturbances and achieve accurate estimation of the agent states and the faults they suffered. Then, based on the obtained accurate estimations and using the relative estimation error, a global fault-tolerant average consensus controller is proposed. The proposed controller can compensate for the effects of faults and enable the MASs to achieve global average consensus. Finally, simulations are given to verify the effectiveness of the proposed scheme.
+
+Index Terms-Multi-agent systems, fault-tolerant control, distributed unknown input observer, global average consensus.
+
+## I. INTRODUCTION
+
+In the past decades, the study of multi-agent systems (MASs) has been highly emphasized. Due to their extensive civilian and military applications, MASs are subject to stringent performance requirements, such as adaptability, flexibility, and robustness [1]. To meet these requirements, considerable attention has been given to coordination issues in MASs, such as consensus [2], containment control [3], and formation control [4]. These coordination mechanisms have been utilized in a wide range of applications such as intelligent transportation systems [5], drone formation [6], and smart grids [7]. However, the scalability and complexity of MASs render traditional centralized control schemes insufficient to meet these requirements. Therefore, the exploration of distributed control schemes for MASs is of significant importance.
+
+Compared with centralized control schemes, distributed control schemes are more suitable for the coordinated control of autonomous agents in MASs [8]. Currently, the existing control schemes can be categorized into two types based on the structure of MASs: leaderless and leader-follower. The goal of control in leaderless MASs is to reach the consensus among the agents [9]. In contrast, the control objective of leader-follower MASs is for the follower agents to track the state of the leader [10]. A formation control scheme based on dynamic output feedback was proposed for cases where velocity cannot be measured, ensuring that the agents converge to the desired formation pattern within a finite time [11]. In [12], an adaptive control strategy with a fully distributed neural network was proposed to ensure that all followers track the leader's state and that the synchronization error remains within a specified range. A formation control method based on constructing a direction alignment law and formation control law using the displacement between agents was proposed to address the direction misalignment issue caused by local reference frames [13]. Overall, distributed control has emerged as a popular research direction, attracting considerable research efforts and yielding abundant results. However, many research outcomes focus solely on the control methods design and consider relatively idealized cases, assuming precise knowledge of system states and the absence of system faults, which diminishes their engineering feasibility.
+
+In practical applications, MASs consist of numerous agents distributed across a spatial area, with each agent facing distinct environmental challenges. Agents may encounter uncertainties, such as actuator faults, which can incapacitate the entire control system [14]. To enhance the reliability and safety of the system, it is necessary to implement measures to compensate for the adverse influences of faults on the system. In this context, fault-tolerant consensus control has attracted widespread attention as an effective method to compensate for the impact of faults [15]. A virtual actuator framework-based adaptive fault-tolerant control method was proposed to achieve leader-follower consensus control under time-varying actuator faults [16]. Based on an observer framework, a reliable consensus control design method under stochastic actuator failures was proposed to achieve multi-agent consensus [17]. A distributed fault-tolerant consensus protocol based on a distributed intermediate observer was proposed to achieve finite-time fault-tolerant consensus control with an enhanced dissipation rate [18]. However, although [18] addressed the consensus problem of MASs under faults, it did not consider the impact of external disturbances present in practical environments on estimation performance. Fortunately, the unknown input observer, an effective method based on disturbance decoupling technology for handling external disturbances in estimation error systems, has been widely applied [19]-[20]. Building on [19], a decentralized unknown input observer-based distributed secure control scheme was proposed to address the problem of distributed secure control in MASs [21].
+
+---
+
+This work was supported in part by the National Natural Science Foundation of China under Grant 51939001, Grant 62273072, and Grant 62203088, in part by the Natural Science Foundation of Sichuan Province under Grant 2022NSFSC0903. (Corresponding author: Tieshan Li.)
+
+---
+
+Based on these observations, a distributed unknown input observer and a fault-tolerant average consensus controller based on relative estimation error are proposed in this paper. Major contributions of this work are summarized below:
+
+(1) Compared with reference [18], a control scheme utilizing disturbance decoupling technology to handle external disturbances is proposed. This scheme effectively reduces the adverse influence of disturbances on estimation performance and achieves global average consensus for MASs.
+
+(2) Distinguished from [21], a novel distributed unknown input observer utilizing relative estimation error is proposed to obtain the estimations of the state and the fault experienced by each agent. Specifically, it uses relative estimation error to determine fault estimation, incorporating output estimates rather than just the outputs themselves into the distributed algorithm.
+
+The structure is given as follows: Section II presents the problem formulation and gives some useful assumptions. In Section III, the main results, including the distributed unknown input observer-based global fault-tolerant average consensus control scheme and its stability analysis, are given. Simulations are presented in Section IV. Finally, the conclusion of this work is given in Section V.
+
+## II. Preparations
+
+## A. Graph Theory
+
+An undirected graph $\mathfrak{g}$ is defined as a pair $\left( {v,\epsilon ,\mathfrak{A}}\right)$ , where $v = \left\{ {{v}_{1},\ldots ,{v}_{N}}\right\}$ represents a nonempty finite set of nodes, and $\epsilon \subseteq v \times v$ represents a set of edges. An edge $\left( {{v}_{i},{v}_{j}}\right)$ denotes a pair of nodes ${v}_{i}$ and ${v}_{j}$ . The adjacency matrix, denoted as $\mathfrak{A} = \left\lbrack {a}_{ij}\right\rbrack \in {\mathbb{R}}^{N \times N}$ , has elements ${a}_{ij}$ representing the weight coefficient of the edge $\left( {{v}_{i},{v}_{j}}\right)$ , with ${a}_{ii} = 0$ and ${a}_{ij} = 1$ if $\left( {{v}_{i},{v}_{j}}\right) \in \epsilon$ . The Laplacian matrix, denoted as $\mathfrak{L} = \mathfrak{D} - \mathfrak{A}$ , is constructed where $\mathfrak{D} = \left\lbrack {d}_{ii}\right\rbrack$ is a diagonal matrix with ${d}_{ii} = \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}$ .
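The Laplacian construction above can be sketched in NumPy; the 4-node undirected path graph below is an illustrative choice:

```python
import numpy as np

# Adjacency matrix of an undirected 4-node path graph:
# a_ij = 1 iff (v_i, v_j) is an edge, and a_ii = 0.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))   # degree matrix, d_ii = sum_j a_ij
L = D - A                    # Laplacian matrix

print(L.sum(axis=1))         # every row sums to zero
print(np.allclose(L, L.T))   # True: symmetric for undirected graphs
```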
+
+## B. Problem Formulation
+
+Consider a MAS with $N$ agents $\left( {i \in \{ 1,\ldots, N\} }\right)$ ; the dynamics of the $i$ th agent with actuator faults are given as follows:
+
+$$
+{\dot{x}}_{i}\left( t\right) = A{x}_{i}\left( t\right) + B\left( {{u}_{i}\left( t\right) + {f}_{i}\left( t\right) }\right) + D{\omega }_{i}\left( t\right)
+$$
+
+$$
+{y}_{i}\left( t\right) = C{x}_{i}\left( t\right) \tag{1}
+$$
+
+where ${x}_{i}\left( t\right) \in {\mathbf{R}}^{n},{u}_{i}\left( t\right) \in {\mathbf{R}}^{m},{y}_{i}\left( t\right) \in {\mathbf{R}}^{p}$ represent the agent’s state, input, and output, respectively. The terms ${f}_{i}\left( t\right) \in$ ${\mathbf{R}}^{q}$ and ${\omega }_{i}\left( t\right) \in {\mathbf{R}}^{s}$ denote the actuator fault and external disturbance, respectively. The matrices $A, B, C$ , and $D$ are constants with appropriate dimensions.
+
+This paper aims to propose a global fault-tolerant average consensus controller so that the states of all agents achieve global average consensus, i.e., the global average consensus error ${\widetilde{x}}_{i}\left( t\right)$ satisfies:
+
+$$
+{\widetilde{x}}_{i}\left( t\right) = {x}_{i}\left( t\right) - \frac{1}{N}\mathop{\sum }\limits_{{j = 1}}^{N}{x}_{j}\left( t\right) \rightarrow 0. \tag{2}
+$$
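For intuition only (this is not the paper's fault-tolerant controller), the average consensus error (2) can be illustrated with single-integrator agents under the standard protocol $\dot{x} = -\mathfrak{L}x$, which conserves the average state and drives the error to zero on a connected graph:

```python
import numpy as np

# Laplacian of an undirected, connected 4-node path graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

x = np.array([4.0, -1.0, 2.0, 7.0])   # arbitrary initial states
avg0 = x.mean()
dt = 0.01
for _ in range(2000):                  # forward-Euler integration of x_dot = -L x
    x = x - dt * (L @ x)

err = x - x.mean()                     # global average consensus error, Eq. (2)
print(abs(x.mean() - avg0))            # ~0: the average is conserved (1^T L = 0)
print(np.abs(err).max())               # small: states converge to the initial average
```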
+
+To facilitate subsequent analysis, the following useful assumptions and lemma are given:
+
+Assumption 1.
+
+$$
+\operatorname{rank}\left\lbrack \begin{matrix} \mathbf{I} & D \\ C & \mathbf{0} \end{matrix}\right\rbrack = n + \operatorname{rank}\left( D\right) . \tag{3}
+$$
+
+Assumption 2. [22] The actuator fault ${f}_{i}\left( t\right)$ is differentiable with respect to time, and its time derivative ${\dot{f}}_{i}\left( t\right)$ belongs to ${L}_{2}\lbrack 0,\infty )$ . Similarly, the external disturbance ${\omega }_{i}\left( t\right)$ is bounded and also belongs to ${L}_{2}\lbrack 0,\infty )$ .
+
+Lemma 1. [21] For the undirected and connected graph $\mathfrak{g}$ , one has $\mathfrak{L}\mathcal{M} = \mathcal{M}\mathfrak{L} = \mathfrak{L}$ , where $\mathcal{M} = {\mathbf{I}}_{N} - \frac{1}{N}{\mathbf{1}}_{N}{\mathbf{1}}_{N}^{T}$ .
+
+## III. MAIN RESULTS
+
+A. Distributed unknown input observer-based global fault-tolerant average consensus control scheme
+
+To reconstruct the state and actuator fault of the agent, the relative estimation error-based distributed unknown input observer for agent $i$ is proposed:
+
+$$
+{\dot{m}}_{i}\left( t\right) = {\Upsilon A}{\widehat{x}}_{i}\left( t\right) + {\Upsilon B}\left( {{u}_{i}\left( t\right) + {\widehat{f}}_{i}\left( t\right) }\right)
+$$
+
+$$
++ {L}_{1}\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{\eta }_{i} - {\eta }_{j}}\right\rbrack }\right\}
+$$
+
+$$
+{\widehat{x}}_{i}\left( t\right) = {m}_{i}\left( t\right) + \Theta {y}_{i}\left( t\right)
+$$
+
+$$
+{\dot{\widehat{f}}}_{i}\left( t\right) = {L}_{2}\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{\eta }_{i} - {\eta }_{j}}\right\rbrack }\right\}
+$$
+
+$$
+{\widehat{y}}_{i} = C{\widehat{x}}_{i}\left( t\right) \tag{4}
+$$
+
+where ${m}_{i}\left( t\right) ,{\widehat{x}}_{i}\left( t\right) ,{\widehat{f}}_{i}\left( t\right)$ , and ${\widehat{y}}_{i}$ denote the state of unknown input observer, state estimation, actuator fault estimation, and output estimation for agent $i$ , respectively. And ${\eta }_{i} = {y}_{i}\left( t\right) -$ ${\widehat{y}}_{i}\left( t\right)$ denotes output estimation error, ${\eta }_{i} - {\eta }_{j}$ denotes the relative estimation error. In addition, the global fault-tolerant average consensus controller for agent $i$ is proposed:
+
+$$
+{u}_{i}\left( t\right) = E{\widehat{x}}_{i}\left( t\right) - {\widehat{f}}_{i}\left( t\right) + K\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{\eta }_{i} - {\eta }_{j}}\right\rbrack }\right\} . \tag{5}
+$$
+
+Then, for agent $i$ , the state estimation error system can be denoted as below:
+
+$$
+{\dot{e}}_{xi}\left( t\right) = {\dot{x}}_{i}\left( t\right) - {\dot{m}}_{i}\left( t\right) - {\Theta C}{\dot{x}}_{i}\left( t\right) . \tag{6}
+$$
+
+The following condition for the matrices $\Upsilon$ and $\Theta$ can be obtained based on Assumption 1:
+
+$$
+\left\lbrack \begin{array}{ll} \mathbf{\Upsilon } & \Theta \end{array}\right\rbrack \left\lbrack \begin{matrix} \mathbf{I} & D \\ C & \mathbf{0} \end{matrix}\right\rbrack = \left\lbrack \begin{array}{ll} \mathbf{I} & \mathbf{0} \end{array}\right\rbrack
+$$
+
which can be rewritten as
+
+$$
+{\Upsilon D} = \mathbf{0},\mathbf{I} - {\Theta C} = \Upsilon . \tag{7}
+$$
+
+Then, based on the above conditions, one has
+
+$$
+{\dot{e}}_{xi}\left( t\right) = {\Upsilon A}{x}_{i}\left( t\right) + {\Upsilon B}\left( {{u}_{i}\left( t\right) + {f}_{i}\left( t\right) }\right) + {\Upsilon D}{\omega }_{i}\left( t\right) - {\Upsilon A}{\widehat{x}}_{i}\left( t\right)
+$$
+
+$$
+- {\Upsilon B}\left( {{u}_{i}\left( t\right) + {\widehat{f}}_{i}\left( t\right) }\right) - {L}_{1}\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{\eta }_{i} - {\eta }_{j}}\right\rbrack }\right\}
+$$
+
+$$
+= {\Upsilon A}{e}_{xi}\left( t\right) + {\Upsilon B}{e}_{fi}\left( t\right)
+$$
+
+$$
+- {L}_{1}C\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{e}_{xi}\left( t\right) - {e}_{xj}\left( t\right) }\right\rbrack }\right\} , \tag{8}
+$$
+
and the fault estimation error ${e}_{fi}\left( t\right) = {f}_{i}\left( t\right) - {\widehat{f}}_{i}\left( t\right)$ satisfies:
+
+$$
+{\dot{e}}_{fi}\left( t\right) = - {L}_{2}C\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{e}_{xi}\left( t\right) - {e}_{xj}\left( t\right) }\right\rbrack }\right\} + {\dot{f}}_{i}\left( t\right) . \tag{9}
+$$
+
Denoting ${e}_{i}\left( t\right) = {\left\lbrack {{e}_{xi}^{T}\left( t\right) ,{e}_{fi}^{T}\left( t\right) }\right\rbrack }^{T}$ , the augmented estimation error system can be obtained:
+
+$$
+{\dot{e}}_{i}\left( t\right) = \widetilde{A}{e}_{i}\left( t\right) - L\bar{C}\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{e}_{i}\left( t\right) - {e}_{j}\left( t\right) }\right\rbrack }\right\} + \widehat{I}{\dot{f}}_{i}\left( t\right)
+$$
+
+(10)
+
+where
+
+$$
+\widetilde{A} = \left\lbrack \begin{matrix} {\Upsilon A} & {\Upsilon B} \\ \mathbf{0} & \mathbf{0} \end{matrix}\right\rbrack , L = \left\lbrack \begin{array}{l} {L}_{1} \\ {L}_{2} \end{array}\right\rbrack ,\bar{C} = \left\lbrack \begin{array}{ll} C & \mathbf{0} \end{array}\right\rbrack ,\widehat{I} = \left\lbrack \begin{array}{l} \mathbf{0} \\ \mathbf{I} \end{array}\right\rbrack .
+$$
+
Define the vectors
+
+$$
\dot{f}\left( t\right) = {\left\lbrack \begin{array}{lll} {\dot{f}}_{1}^{T}\left( t\right) & \ldots & {\dot{f}}_{N}^{T}\left( t\right) \end{array}\right\rbrack }^{T},
+$$
+
+$$
+e\left( t\right) = {\left\lbrack \begin{array}{lll} {e}_{1}^{T}\left( t\right) & \ldots & {e}_{N}^{T}\left( t\right) \end{array}\right\rbrack }^{T}.
+$$
+
+Then, the estimation error system can be rewritten as:
+
+$$
+\dot{e}\left( t\right) = \left( {{I}_{N} \otimes \widetilde{A} - \mathfrak{L} \otimes L\bar{C}}\right) e\left( t\right) + {I}_{N} \otimes \widehat{I}\dot{f}\left( t\right) . \tag{11}
+$$
+
In addition, for agent $i$ , the closed-loop system can be written as:
+
+$$
+{\dot{x}}_{i}\left( t\right) = A{x}_{i}\left( t\right) + B\left( {E{\widehat{x}}_{i}\left( t\right) - {\widehat{f}}_{i}\left( t\right) + K\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{\eta }_{i} - {\eta }_{j}}\right\rbrack }\right\} }\right.
+$$
+
+$$
+\left. {+{f}_{i}\left( t\right) }\right) + D{\omega }_{i}\left( t\right)
+$$
+
+$$
+= \left( {A + {BE}}\right) {x}_{i}\left( t\right) - {BE}{e}_{xi}\left( t\right) + B{e}_{fi}\left( t\right)
+$$
+
+$$
++ {BKC}\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{e}_{xi}\left( t\right) - {e}_{xj}\left( t\right) }\right\rbrack }\right\} + D{\omega }_{i}\left( t\right)
+$$
+
+$$
+= \left( {A + {BE}}\right) {x}_{i}\left( t\right) + \widetilde{B}{e}_{i}\left( t\right)
+$$
+
+$$
++ {BK}\bar{C}\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{e}_{i}\left( t\right) - {e}_{j}\left( t\right) }\right\rbrack }\right\} + D{\omega }_{i}\left( t\right) \tag{12}
+$$
+
+where $\widetilde{B} = \left\lbrack \begin{array}{ll} - {BE} & B \end{array}\right\rbrack$ .
+
To analyze global average consensus, recall the global average consensus error (2) for agent $i$ and define the vectors
+
+$$
+\widetilde{x}\left( t\right) = {\left\lbrack \begin{array}{lll} {\widetilde{x}}_{1}^{T}\left( t\right) & \ldots & {\widetilde{x}}_{N}^{T}\left( t\right) \end{array}\right\rbrack }^{T},
+$$
+
+$$
+x\left( t\right) = {\left\lbrack \begin{array}{lll} {x}_{1}^{T}\left( t\right) & \ldots & {x}_{N}^{T}\left( t\right) \end{array}\right\rbrack }^{T},
+$$
+
+$$
+\omega \left( t\right) = {\left\lbrack \begin{array}{lll} {\omega }_{1}^{T}\left( t\right) & \ldots & {\omega }_{N}^{T}\left( t\right) \end{array}\right\rbrack }^{T}.
+$$
+
+Then, the closed-loop system can be rewritten as:
+
+$$
+\dot{x}\left( t\right) = \left( {{I}_{N} \otimes \left( {A + {BE}}\right) }\right) x\left( t\right) + \left( {{I}_{N} \otimes \widetilde{B}}\right.
+$$
+
+$$
++ \mathfrak{L} \otimes {BK}\bar{C})e\left( t\right) + \left( {{I}_{N} \otimes D}\right) \omega \left( t\right) . \tag{13}
+$$
+
Consider the global average consensus error
+
+$$
+\widetilde{x}\left( t\right) = \left( {\mathcal{M} \otimes {I}_{n}}\right) x\left( t\right) \tag{14}
+$$
+
where $\mathcal{M} = {I}_{N} - \frac{{1}_{N}{1}_{N}^{T}}{N}$ . It then satisfies
+
+$$
\dot{\widetilde{x}}\left( t\right) = \left( {\mathcal{M} \otimes {I}_{n}}\right) \left( {{I}_{N} \otimes \left( {A + {BE}}\right) }\right) x\left( t\right)
+$$
+
+$$
++ \left( {\mathcal{M} \otimes {I}_{n}}\right) \left( {{I}_{N} \otimes \widetilde{B} + \mathfrak{L} \otimes {BK}\bar{C}}\right) e\left( t\right)
+$$
+
+$$
++ \left( {\mathcal{M} \otimes {I}_{n}}\right) \left( {{I}_{N} \otimes D}\right) \omega \left( t\right)
+$$
+
+$$
+= \left( {{I}_{N} \otimes \left( {A + {BE}}\right) }\right) \widetilde{x}\left( t\right) + \left( {\mathcal{M} \otimes \widetilde{B} + \mathfrak{L} \otimes {BK}\bar{C}}\right) e\left( t\right)
+$$
+
+$$
++ \left( {\mathcal{M} \otimes D}\right) \omega \left( t\right) \text{.} \tag{15}
+$$
+
+## B. Stability analysis
+
Theorem 1. For a given scalar $\alpha > 0$ , matrices $\Upsilon ,\Theta , L, K$ , controller feedback gain matrix $E$ , Laplacian matrix $\mathfrak{L}$ , and matrix $\mathcal{M}$ , if there exist matrices $Q = {Q}^{T} > 0$ and $P = {P}^{T} > 0$ with appropriate dimensions such that the following condition holds
+
+$$
+\Phi = \left\lbrack \begin{matrix} {\Phi }_{1} & {\Phi }_{2} & {\Phi }_{3} & \mathbf{0} \\ * & {\Phi }_{4} & \mathbf{0} & {\Phi }_{5} \\ * & * & {\Phi }_{6} & \mathbf{0} \\ * & * & * & {\Phi }_{7} \end{matrix}\right\rbrack < 0 \tag{16}
+$$
+
where ${\Phi }_{1} = \operatorname{He}\left\{ {{I}_{N} \otimes \left( {{QA} + {QBE}}\right) }\right\} + \alpha {I}_{N} \otimes Q$ , ${\Phi }_{2} = \mathcal{M} \otimes Q\widetilde{B} + \mathfrak{L} \otimes {QBK}\bar{C}$ , ${\Phi }_{3} = \mathcal{M} \otimes {QD}$ , ${\Phi }_{4} = \operatorname{He}\left\{ {{I}_{N} \otimes P\widetilde{A} - \mathfrak{L} \otimes {PL}\bar{C}}\right\} + \alpha {I}_{N} \otimes P$ , ${\Phi }_{5} = {I}_{N} \otimes P\widehat{I}$ , ${\Phi }_{6} = - {I}_{N} \otimes {I}_{{n}_{\omega }}$ , and ${\Phi }_{7} = - {I}_{N} \otimes {I}_{{n}_{f}}$ , then all the signals of the estimation error system (11) and the global average consensus error system (15) are bounded.
+
+Proof. The Lyapunov function can be chosen as below:
+
+$$
+V\left( t\right) = {V}_{1}\left( t\right) + {V}_{2}\left( t\right) \tag{17}
+$$
+
where ${V}_{1}\left( t\right) = {\widetilde{x}}^{T}\left( t\right) \widetilde{Q}\widetilde{x}\left( t\right)$ , ${V}_{2}\left( t\right) = {e}^{T}\left( t\right) \widetilde{P}e\left( t\right)$ , $\widetilde{P} = {I}_{N} \otimes P$ , and $\widetilde{Q} = {I}_{N} \otimes Q$ . Taking the derivative of $V\left( t\right)$ along the trajectories of (11) and (15) yields:
+
+$$
+\dot{V}\left( t\right) \leq 2{e}^{T}\left( t\right) \widetilde{P}\dot{e}\left( t\right) + 2{\widetilde{x}}^{T}\left( t\right) \widetilde{Q}\dot{\widetilde{x}}\left( t\right)
+$$
+
+$$
+\leq 2{e}^{T}\left( t\right) \widetilde{P}\left( {\left( {{I}_{N} \otimes \widetilde{A} - \mathfrak{L} \otimes L\bar{C}}\right) e\left( t\right) + {I}_{N} \otimes \widehat{I}\dot{f}\left( t\right) }\right)
+$$
+
+$$
++ 2{\widetilde{x}}^{T}\left( t\right) \widetilde{Q}\left( {\left( {{I}_{N} \otimes \left( {A + {BE}}\right) }\right) \widetilde{x}\left( t\right) }\right.
+$$
+
+$$
+\left. {+\left( {\mathcal{M} \otimes \widetilde{B} + \mathfrak{L} \otimes {BK}\bar{C}}\right) e\left( t\right) + \left( {\mathcal{M} \otimes D}\right) \omega \left( t\right) }\right)
+$$
+
+$$
+\leq {e}^{T}\left( t\right) \operatorname{He}\left\{ {\left( {{I}_{N} \otimes P}\right) \left( {{I}_{N} \otimes \widetilde{A} - \mathfrak{L} \otimes L\bar{C}}\right) }\right\} e\left( t\right)
+$$
+
+$$
++ 2{e}^{T}\left( t\right) \left( {{I}_{N} \otimes P}\right) \left( {{I}_{N} \otimes \widehat{I}}\right) \dot{f}\left( t\right)
+$$
+
+$$
++ {\widetilde{x}}^{T}\left( t\right) {He}\left\{ {\left( {{I}_{N} \otimes Q}\right) \left( {{I}_{N} \otimes \left( {A + {BE}}\right) }\right) }\right\} \widetilde{x}\left( t\right)
+$$
+
+$$
++ 2{\widetilde{x}}^{T}\left( t\right) \left( {{I}_{N} \otimes Q}\right) \left( {\mathcal{M} \otimes \widetilde{B} + \mathfrak{L} \otimes {BK}\bar{C}}\right) e\left( t\right)
+$$
+
+$$
++ 2{\widetilde{x}}^{T}\left( t\right) \left( {{I}_{N} \otimes Q}\right) \left( {\mathcal{M} \otimes D}\right) \omega \left( t\right) . \tag{18}
+$$
+
According to the mixed-product property of the Kronecker product, we obtain:
+
+$$
+\dot{V}\left( t\right) \leq {e}^{T}\left( t\right) \operatorname{He}\left\{ {{I}_{N} \otimes P\widetilde{A} - \mathfrak{L} \otimes {PL}\bar{C}}\right\} e\left( t\right)
+$$
+
+$$
++ {\widetilde{x}}^{T}\left( t\right) {He}\left\{ {{I}_{N} \otimes \left( {{QA} + {QBE}}\right) }\right\} \widetilde{x}\left( t\right)
+$$
+
+$$
++ 2{\widetilde{x}}^{T}\left( t\right) \left( {\mathcal{M} \otimes Q\widetilde{B} + \mathfrak{L} \otimes {QBK}\bar{C}}\right) e\left( t\right)
+$$
+
+$$
++ 2{\widetilde{x}}^{T}\left( t\right) \left( {\mathcal{M} \otimes {QD}}\right) \omega \left( t\right) + 2{e}^{T}\left( t\right) \left( {{I}_{N} \otimes P\widehat{I}}\right) \dot{f}\left( t\right) .
+$$
+
Define $\xi \left( t\right) = {\left\lbrack {{\widetilde{x}}^{T}\left( t\right) ,{e}^{T}\left( t\right) ,{\omega }^{T}\left( t\right) ,{\dot{f}}^{T}\left( t\right) }\right\rbrack }^{T}$ . If the following linear matrix inequality holds
+
+$$
+\Phi = \left\lbrack \begin{matrix} {\Phi }_{1} & {\Phi }_{2} & {\Phi }_{3} & \mathbf{0} \\ * & {\Phi }_{4} & \mathbf{0} & {\Phi }_{5} \\ * & * & {\Phi }_{6} & \mathbf{0} \\ * & * & * & {\Phi }_{7} \end{matrix}\right\rbrack < 0 \tag{19}
+$$
+
+where
+
+$$
+{\Phi }_{1} = {He}\left\{ {{I}_{N} \otimes \left( {{QA} + {QBE}}\right) }\right\} + \alpha {I}_{N} \otimes Q,
+$$
+
+$$
+{\Phi }_{2} = \mathcal{M} \otimes Q\widetilde{B} + \mathfrak{L} \otimes {QBK}\bar{C},
+$$
+
+$$
+{\Phi }_{3} = \mathcal{M} \otimes {QD},
+$$
+
+$$
+{\Phi }_{4} = {He}\left\{ {{I}_{N} \otimes P\widetilde{A} - \mathfrak{L} \otimes {PL}\bar{C}}\right\} + \alpha {I}_{N} \otimes P,
+$$
+
+$$
+{\Phi }_{5} = {I}_{N} \otimes P\widehat{I},
+$$
+
+$$
+{\Phi }_{6} = - {I}_{N} \otimes {I}_{{n}_{\omega }},
+$$
+
+$$
+{\Phi }_{7} = - {I}_{N} \otimes {I}_{{n}_{f}},
+$$
+
+we have
+
+$$
+\dot{V}\left( t\right) \leq - \alpha {e}^{T}\left( t\right) \widetilde{P}e\left( t\right) - \alpha {\widetilde{x}}^{T}\left( t\right) \widetilde{Q}\widetilde{x}\left( t\right) + \parallel \omega \left( t\right) {\parallel }^{2} + \parallel \dot{f}\left( t\right) {\parallel }^{2}
+$$
+
+$$
+\leq - {\alpha V}\left( t\right) + \Delta \left( t\right) \text{.} \tag{20}
+$$
+
where $\Delta \left( t\right) = \parallel \omega \left( t\right) {\parallel }^{2} + \parallel \dot{f}\left( t\right) {\parallel }^{2}$ . Consequently, the global average consensus of MASs (1) and the boundedness of the estimation error system (11) can be guaranteed. The proof is completed.
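For completeness, the boundedness step can be made explicit via the comparison lemma. Assuming $\omega \left( t\right)$ and $\dot{f}\left( t\right)$ are bounded so that $\Delta \left( t\right) \leq \bar{\Delta }$ for some $\bar{\Delta } > 0$ (a standard assumption, stated here for clarity), (20) gives

$$
V\left( t\right) \leq {e}^{-{\alpha t}}V\left( 0\right) + {\int }_{0}^{t}{e}^{-\alpha \left( {t - s}\right) }\Delta \left( s\right) {ds} \leq {e}^{-{\alpha t}}V\left( 0\right) + \frac{\bar{\Delta }}{\alpha },
$$

so $V\left( t\right)$ , and hence $e\left( t\right)$ and $\widetilde{x}\left( t\right)$ , converge to a residual set whose size is proportional to $\bar{\Delta }/\alpha$ .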
+
The gain matrices $L$ and $K$ can then be obtained through standard algebraic manipulations, as stated in the following theorem.
+
Theorem 2. For a given scalar $\alpha > 0$ , matrices $\Upsilon ,\Theta$ , controller feedback gain matrix $E$ , Laplacian matrix $\mathfrak{L}$ , and matrix $\mathcal{M}$ , if there exist symmetric positive definite matrices $S, P$ and matrices $K,{P}_{L}$ with appropriate dimensions such that the following condition holds
+
+$$
+\Psi = \left\lbrack \begin{matrix} {\Psi }_{1} & {\Psi }_{2} & {\Psi }_{3} & \mathbf{0} \\ * & {\Psi }_{4} & \mathbf{0} & {\Psi }_{5} \\ * & * & {\Psi }_{6} & \mathbf{0} \\ * & * & * & {\Psi }_{7} \end{matrix}\right\rbrack < 0 \tag{21}
+$$
+
where ${\Psi }_{1} = \operatorname{He}\left\{ {{I}_{N} \otimes \left( {{AS} + {BES}}\right) }\right\} + \alpha {I}_{N} \otimes S$ , ${\Psi }_{2} = \mathcal{M} \otimes \widetilde{B} + \mathfrak{L} \otimes {BK}\bar{C}$ , ${\Psi }_{3} = \mathcal{M} \otimes D$ , ${\Psi }_{4} = \operatorname{He}\left\{ {{I}_{N} \otimes P\widetilde{A} - \mathfrak{L} \otimes {P}_{L}\bar{C}}\right\} + \alpha {I}_{N} \otimes P$ , ${\Psi }_{5} = {I}_{N} \otimes P\widehat{I}$ , ${\Psi }_{6} = - {I}_{N} \otimes {I}_{{n}_{\omega }}$ , ${\Psi }_{7} = - {I}_{N} \otimes {I}_{{n}_{f}}$ , and $S = {Q}^{-1}$ , then all the signals of the estimation error system (11) and the global average consensus error system (15) are bounded, and the gain matrix is recovered as $L = {P}^{-1}{P}_{L}$ .
+
Proof. Pre- and post-multiplying (19) by $\operatorname{diag}\left\{ {{I}_{N} \otimes {Q}^{-1},{I}_{N} \otimes {I}_{{n}_{x} + {n}_{f}},{I}_{N} \otimes {I}_{{n}_{\omega }},{I}_{N} \otimes {I}_{{n}_{f}}}\right\}$ , the linear matrix inequality (21) can be deduced. The proof is completed.
+
+## IV. EXAMPLE
+
In this example, a group of five agents is considered, and the dynamics of the agents take the form
+
+$$
+{\dot{x}}_{i}\left( t\right) = A{x}_{i}\left( t\right) + B\left( {{u}_{i}\left( t\right) + {f}_{i}\left( t\right) }\right) + D{\omega }_{i}\left( t\right)
+$$
+
+$$
+{y}_{i}\left( t\right) = C{x}_{i}\left( t\right) \tag{22}
+$$
+
which are borrowed from [23], with the parameter matrices given below:
+
+$$
+A = \left\lbrack \begin{matrix} 0 & 1 \\ {0.2} & - 2 \end{matrix}\right\rbrack , B = \left\lbrack \begin{array}{l} 0 \\ 1 \end{array}\right\rbrack , C = \left\lbrack \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right\rbrack , D = \left\lbrack \begin{array}{l} {0.1} \\ {0.1} \end{array}\right\rbrack .
+$$
+
+The communication graph considered in this paper is shown below:
+
+
+
+Fig. 1: Communication graph.
+
+From Fig. 1, one has
+
+$$
+\mathfrak{L} = \left\lbrack \begin{matrix} 2 & 0 & - 1 & - 1 & 0 \\ 0 & 2 & 0 & - 1 & - 1 \\ - 1 & 0 & 2 & - 1 & 0 \\ - 1 & - 1 & - 1 & 3 & 0 \\ 0 & - 1 & 0 & 0 & 1 \end{matrix}\right\rbrack
+$$
+
To obtain the pre-designed unknown input observer gain matrices, the matrix ${M}_{\varkappa }$ can be selected as follows:
+
+$$
+{M}_{\varkappa } = \left\lbrack \begin{array}{llll} - {6.7245} & - {9.1869} & - {9.4050} & - {7.5082} \\ - {5.2013} & - {8.2981} & - {7.0737} & - {8.8809} \end{array}\right\rbrack ,
+$$
+
+according to the following condition
+
+$$
+\left\lbrack \begin{array}{ll} \mathbf{\Upsilon } & \Theta \end{array}\right\rbrack = \left\lbrack \begin{array}{ll} \mathbf{I} & \mathbf{0} \end{array}\right\rbrack \times {\left\lbrack \begin{matrix} \mathbf{I} & D \\ C & \mathbf{0} \end{matrix}\right\rbrack }^{ \dagger }
+$$
+
+$$
+- {M}_{\varkappa }\left( {\mathbf{I} - \left\lbrack \begin{matrix} \mathbf{I} & D \\ C & \mathbf{0} \end{matrix}\right\rbrack \times {\left\lbrack \begin{matrix} \mathbf{I} & D \\ C & \mathbf{0} \end{matrix}\right\rbrack }^{ \dagger }}\right) ,
+$$
+
the pre-designed unknown input observer gain matrices can be obtained:
+
+$$
+\Upsilon = \left\lbrack \begin{matrix} {0.1086} & - {0.1086} \\ - {1.4760} & {1.4760} \end{matrix}\right\rbrack ,\Theta = \left\lbrack \begin{matrix} {0.1086} & {0.8914} \\ - {0.4760} & {1.4760} \end{matrix}\right\rbrack .
+$$
+
Then, the parameters required to solve Theorem 2 are selected as $E = \left\lbrack \begin{array}{ll} - {18.7279} & - {7.9363} \end{array}\right\rbrack$ and $\alpha = {0.4}$ . The following matrices exist that make inequality (21) negative definite:
+
+$$
+P = \left\lbrack \begin{matrix} {22.2529} & {0.8245} & {0.2564} \\ {0.8245} & {7.9547} & - {2.6069} \\ {0.2564} & - {2.6069} & {1.0677} \end{matrix}\right\rbrack ,
+$$
+
+$$
+S = \left\lbrack \begin{matrix} {14.8878} & - {23.6762} \\ - {23.6762} & {47.0985} \end{matrix}\right\rbrack ,
+$$
+
+$$
+K = \left\lbrack \begin{array}{ll} - {2.7207} & - {6.7659} \end{array}\right\rbrack ,
+$$
+
+$$
+{P}_{L} = \left\lbrack \begin{matrix} {0.3396} & {12.6409} \\ - {0.9358} & {1.4581} \\ {6.4466} & - {0.6400} \end{matrix}\right\rbrack
+$$
+
+where gain matrix
+
+$$
+L = {P}^{-1}{P}_{L} = \left\lbrack \begin{matrix} - {0.7049} & {0.6177} \\ {9.9518} & - {0.6290} \\ {30.5037} & - {2.2834} \end{matrix}\right\rbrack .
+$$
+
Next, simulation results are presented to verify the effectiveness of the proposed scheme. The initial states of the agents are selected as ${x}_{1}\left( 0\right) = \left\lbrack {8;8}\right\rbrack ,{x}_{2}\left( 0\right) = \left\lbrack {8; - 8}\right\rbrack$ , ${x}_{3}\left( 0\right) = \left\lbrack {-8;8}\right\rbrack ,{x}_{4}\left( 0\right) = \left\lbrack {-8; - 8}\right\rbrack ,{x}_{5}\left( 0\right) = \left\lbrack {7;{12}}\right\rbrack$ . The external disturbance is ${\omega }_{i}\left( t\right) = {30}\sin \left( {2t}\right)$ , and agents 1 and 2 are considered to be faulty; the faults they encounter are given as follows:
+
+$$
+{f}_{1}\left( t\right) = \left\{ {\begin{array}{ll} 2{e}^{-{0.1}\left( {t - 5}\right) }\sin \left( {{1.2}\left( {t - 5}\right) }\right) , & t \in \left\lbrack {5,{10}}\right\rbrack \\ 0, & \text{ otherwise } \end{array},}\right.
+$$
+
+$$
+{f}_{2}\left( t\right) = \left\{ {\begin{array}{ll} 2\sin \left( {{1.2}\left( {t - {15}}\right) }\right) , & t \in \left\lbrack {{15},{20}}\right\rbrack \\ 0, & \text{ otherwise } \end{array}.}\right.
+$$
+
+
+
+Fig. 2: Curves of state/fault and their estimations (agent 1).
+
+
+
+Fig. 3: Curves of state/fault and their estimations (agent 2).
+
+
+
+Fig. 4: Curves of state/fault and their estimations (agent 3).
+
+
+
+Fig. 5: Curves of state/fault and their estimations (agent 4).
+
+
+
+Fig. 6: Curves of state/fault and their estimations (agent 5).
+
+
+
+Fig. 7: Curves of global average consensus error ${\widetilde{x}}_{i}\left( t\right)$ .
+
+As can be seen from Figs. 2-6, the proposed scheme (4) can effectively reduce the influence of external disturbance ${\omega }_{i}\left( t\right)$ on the estimation performance and realize accurate estimations of the agent state and fault. Based on the accurate estimations obtained by scheme (4) and the relative estimation error ${\eta }_{i} - {\eta }_{j}$ , the proposed global fault-tolerant average consensus controller (5) can make the global average consensus errors ${\widetilde{x}}_{i}\left( t\right)$ approach zero, as shown in Fig. 7.
+
+## V. CONCLUSION
+
+In this paper, the distributed unknown input observer-based global fault-tolerant average consensus control problem for linear MASs has been investigated. First, a distributed unknown input observer based on relative estimation error has been proposed, which can mitigate the impact of external disturbances on estimation performance, thereby achieving accurate estimations of state and fault. Then, based on the obtained estimations and the relative estimation error, a global fault-tolerant average consensus controller has been developed. The proposed scheme can compensate for fault impacts while ensuring global average consensus of the MASs. Finally, simulation experiments have been given to validate the effectiveness of the proposed control scheme.
+
+## REFERENCES
+
+[1] L. Ding, Q.-L. Han, X. Ge, and X.-M. Zhang, "An overview of recent advances in event-triggered consensus of multiagent systems," IEEE Transactions on Cybernetics, vol. 48, no. 4, pp. 1110-1123, 2018.
+
[2] J. Long, W. Wang, C. Wen, J. Huang, and Y. Guo, "Output-feedback-based adaptive leaderless consensus for heterogeneous nonlinear multiagent systems with switching topologies," IEEE Transactions on Cybernetics, 2024, doi:10.1109/TCYB.2024.3418825.
+
[3] H. Zhang, W. Zhao, X. Xie, and D. Yue, "Dynamic leader-follower output containment control of heterogeneous multiagent systems using reinforcement learning," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2024, doi:10.1109/TSMC.2024.3406777.
+
[4] H. Zhou and S. Tong, "Fuzzy adaptive event-triggered resilient formation control for nonlinear multiagent systems under DoS attacks and input saturation," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 54, no. 6, pp. 3665-3674, 2024.
+
+[5] B. Wang, S. Sun, and W. Ren, "Distributed time-varying quadratic optimal resource allocation subject to nonidentical time-varying hessians with application to multiquadrotor hose transportation," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 52, no. 10, pp. 6109-6119, 2022.
+
+[6] B. Ning, Q.-L. Han, and Z. Zuo, "Distributed optimization for multi-agent systems: An edge-based fixed-time consensus approach," IEEE Transactions on Cybernetics, vol. 49, no. 1, pp. 122-132, 2019.
+
[7] S. Z. Tajalli, A. Kavousi-Fard, M. Mardaneh, A. Khosravi, and R. Razavi-Far, "Uncertainty-aware management of smart grids using cloud-based LSTM-prediction interval," IEEE Transactions on Cybernetics, vol. 52, no. 10, pp. 9964-9977, 2022.
+
[8] R. Nie, W. Du, Z. Li, and S. He, "Finite-time consensus control for MASs under hidden Markov model mechanism," IEEE Transactions on Automatic Control, vol. 69, no. 7, pp. 4726-4733, 2024.
+
[9] C. Chen, F. L. Lewis, and X. Li, "Event-triggered coordination of multi-agent systems via a Lyapunov-based approach for leaderless consensus," Automatica, vol. 136, p. 109936, 2022.
+
+[10] Z. Hu and B. Chen, "Sliding mode control for multi-agent systems under event-triggering hybrid scheduling strategy," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 71, no. 4, pp. 2184- 2188, 2024.
+
+[11] H. Du, S. Li, and X. Lin, "Finite-time formation control of multiagent systems via dynamic output feedback," International Journal of Robust and Nonlinear Control, vol. 23, no. 14, pp. 1609-1628, 2013.
+
+[12] Q. Shen, P. Shi, J. Zhu, S. Wang, and Y. Shi, "Neural networks-based distributed adaptive control of nonlinear multiagent systems," IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 3, pp. 1010-1021, 2020.
+
+[13] K.-K. Oh and H.-S. Ahn, "Formation control and network localization via orientation alignment," IEEE Transactions on Automatic Control, vol. 59, no. 2, pp. 540-545, 2014.
+
+[14] S. Tong, H. Zhou, and Y. Li, "Neural network event-triggered formation fault-tolerant control for nonlinear multiagent systems with actuator faults," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 53, no. 12, pp. 7571-7582, 2023.
+
+[15] X. Guo, G. Wei, and D. Ding, "Fault-tolerant consensus control for discrete-time multi-agent systems: A distributed adaptive sliding-mode scheme," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 70, no. 7, pp. 2515-2519, 2023.
+
+[16] M. Yadegar and N. Meskin, "Fault-tolerant control of nonlinear heterogeneous multi-agent systems," Automatica, vol. 127, p. 109514, 2021.
+
[17] R. Sakthivel, B. Kaviarasan, C. K. Ahn, and H. R. Karimi, "Observer and stochastic faulty actuator-based reliable consensus protocol for multiagent system," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 48, no. 12, pp. 2383-2393, 2018.
+
+[18] X. Zhu, Y. Xia, J. Han, X. Hu, and H. Yang, "Extended dissipative finite-time distributed time-varying delay active fault-tolerant consensus control for semi-markov jump nonlinear multi-agent systems," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 71, no. 4, pp. 2269-2273, 2024.
+
[19] Q. Jia, W. Chen, Y. Zhang, and H. Li, "Fault reconstruction and fault-tolerant control via learning observers in Takagi-Sugeno fuzzy descriptor systems with time delays," IEEE Transactions on Industrial Electronics, vol. 62, no. 6, pp. 3885-3895, 2015.
+
+[20] Y. Mu, H. Zhang, Z. Gao, and J. Zhang, "A fuzzy lyapunov function approach for fault estimation of T-S fuzzy fractional-order systems based on unknown input observer," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 53, no. 2, pp. 1246-1255, 2023.
+
+[21] C. Liu, B. Jiang, X. Wang, Y. Zhang, and S. Xie, "Event-based distributed secure control of unmanned surface vehicles with DoS attacks," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 54, no. 4, pp. 2159-2170, 2024.
+
[22] J. C. L. Chan, T. H. Lee, C. P. Tan, H. Trinh, and J. H. Park, "A nonlinear observer for robust fault reconstruction in one-sided Lipschitz and quadratically inner-bounded nonlinear descriptor systems," IEEE Access, vol. 9, pp. 22455-22469, 2021.
+
+[23] A.-Y. Lu and G.-H. Yang, "Distributed consensus control for multi-agent systems under denial-of-service," Information Sciences, vol. 439, pp. 95-107, 2018.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/3KOwuI0B5z/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/3KOwuI0B5z/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..d9e9f37b739bbff10900c3a566f3b7a38eea0860
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/3KOwuI0B5z/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,525 @@
+§ DISTRIBUTED UNKNOWN INPUT OBSERVER-BASED GLOBAL FAULT-TOLERANT AVERAGE CONSENSUS CONTROL FOR LINEAR MULTI-AGENT SYSTEMS
+
+Ximing Yang
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu 611731, China
+
+yxm961115123@163.com
+
+Tieshan Li
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu 611731, China
+
+tieshanli@126.com
+
+Yue Long
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu 611731, China
+
longyue@uestc.edu.cn

Hanqing Yang

School of Automation Engineering

University of Electronic Science and Technology of China

Chengdu 611731, China
+
+hqyang5517@uestc.edu.cn
+
Abstract-This paper investigates the distributed unknown input observer-based global fault-tolerant average consensus control problem for multi-agent systems (MASs). First, a distributed unknown input observer based on the relative estimation error is proposed, which can effectively reduce the impact of external disturbances and achieve accurate estimation of the agent states and the faults they suffer. Then, based on the obtained estimations and the relative estimation error, a global fault-tolerant average consensus controller is proposed. The proposed controller can compensate for the effects of faults and enable the MASs to achieve global average consensus. Finally, simulations are given to verify the effectiveness of the proposed scheme.
+
+Index Terms-Multi-agent systems, fault-tolerant control, distributed unknown input observer, global average consensus.
+
+§ I. INTRODUCTION
+
Over the past decades, the study of multi-agent systems (MASs) has received considerable attention. Due to their extensive civilian and military applications, MASs are subject to stringent performance requirements, such as adaptability, flexibility, and robustness [1]. To meet these requirements, considerable attention has been given to coordination issues in MASs, such as consensus [2], containment control [3], and formation control [4]. These coordination mechanisms have been utilized in a wide range of applications, such as intelligent transportation systems [5], drone formation [6], and smart grids [7]. However, the scale and complexity of MASs render traditional centralized control schemes insufficient to meet these requirements. Therefore, the exploration of distributed control schemes for MASs is of significant importance.
+
Compared with centralized control schemes, distributed control schemes are more suitable for the coordinated control of autonomous agents in MASs [8]. The existing control schemes can be categorized into two types based on the structure of MASs: leaderless and leader-follower. The control goal in leaderless MASs is to reach consensus among the agents [9], whereas the control objective of leader-follower MASs is for the follower agents to track the state of the leader [10]. A formation control scheme based on dynamic output feedback was proposed for cases where velocity cannot be measured, ensuring that the agents converge to the desired formation pattern within a finite time [11]. In [12], a fully distributed neural network-based adaptive control strategy was proposed to ensure that all followers track the leader's state and that the synchronization error remains within a specified range. A formation control method that constructs a direction alignment law and a formation control law from the displacements between agents was proposed to address the direction misalignment issue caused by local reference frames [13]. Overall, distributed control has emerged as a popular research direction, attracting considerable research effort and yielding abundant results. However, many of these works focus solely on the control design and consider relatively idealized cases, assuming precise knowledge of the system states and the absence of faults, which diminishes their engineering feasibility.
+
In practical applications, MASs consist of numerous agents distributed across a spatial area, with each agent facing distinct environmental challenges. Agents may encounter uncertainties, such as actuator faults, which can incapacitate the entire control system [14]. To enhance the reliability and safety of the system, it is necessary to implement measures that compensate for the adverse influence of faults. In this context, fault-tolerant consensus control has attracted widespread attention as an effective method to compensate for the impact of faults [15]. A virtual actuator framework-based adaptive fault-tolerant control method was proposed to achieve leader-follower consensus control under time-varying actuator faults [16]. Based on an observer framework, a reliable consensus control design method under stochastic actuator failures was proposed to achieve multi-agent consensus [17]. A distributed fault-tolerant consensus protocol based on a distributed intermediate observer was proposed to achieve finite-time fault-tolerant consensus control with an enhanced dissipation rate [18]. Although [18] addressed the consensus problem of MASs under faults, it did not consider the impact of external disturbances present in practical environments on estimation performance. Fortunately, the unknown input observer, as an effective method based on disturbance decoupling technology for handling external disturbances in estimation error systems, has been widely applied [19]-[20]. Building on [19], a decentralized unknown input observer-based distributed secure control scheme was proposed to address the distributed secure control problem in MASs [21].
+
+This work was supported in part by the National Natural Science Foundation of China under Grant 51939001, Grant 62273072, and Grant 62203088, in part by the Natural Science Foundation of Sichuan Province under Grant 2022NSFSC0903. (Corresponding author: Tieshan Li.)
+
+Based on these observations, a distributed unknown input observer and a fault-tolerant average consensus controller based on relative estimation error are proposed in this paper. Major contributions of this work are summarized below:
+
+(1) Compared with reference [18], a control scheme utilizing disturbance decoupling technology to handle external disturbances is proposed. This scheme effectively reduces the adverse influence of disturbances on estimation performance and achieves global average consensus for MASs.
+
+(2) In contrast to [21], a novel distributed unknown input observer utilizing relative estimation error is proposed to obtain estimations of the state and the fault experienced by each agent. Specifically, it uses the relative estimation error to determine the fault estimation, incorporating output estimates rather than just the outputs themselves into the distributed algorithm.
+
+The structure of this paper is as follows: Section II presents the problem formulation and gives some useful assumptions. Section III presents the main results, including the distributed unknown input observer-based global fault-tolerant average consensus control scheme and its stability analysis. Simulations are given in Section IV. Finally, the conclusion of this work is presented in Section V.
+
+§ II. PREPARATIONS
+
+§ A. GRAPH THEORY
+
+An undirected graph $\mathfrak{g}$ is defined as a pair $\left( {v,\epsilon ,\mathfrak{A}}\right)$ , where $v = \left\{ {{v}_{1},\ldots ,{v}_{N}}\right\}$ represents a nonempty finite set of nodes, and $\epsilon \subseteq v \times v$ represents a set of edges. An edge $\left( {{v}_{i},{v}_{j}}\right)$ denotes a pair of nodes ${v}_{i}$ and ${v}_{j}$ . The adjacency matrix, denoted as $\mathfrak{A} = \left\lbrack {a}_{ij}\right\rbrack \in {\mathbb{R}}^{N \times N}$ , has elements ${a}_{ij}$ representing the weight coefficient of the edge $\left( {{v}_{i},{v}_{j}}\right)$ , with ${a}_{ii} = 0$ and ${a}_{ij} = 1$ if $\left( {{v}_{i},{v}_{j}}\right) \in \epsilon$ . The Laplacian matrix, denoted as $\mathfrak{L} = \mathfrak{D} - \mathfrak{A}$ , is constructed where $\mathfrak{D} = \left\lbrack {d}_{ii}\right\rbrack$ is a diagonal matrix with ${d}_{ii} = \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}$ .
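The construction $\mathfrak{L} = \mathfrak{D} - \mathfrak{A}$ can be illustrated numerically; the following sketch (NumPy, not part of the original text; the 3-node path graph is an assumed example) builds the Laplacian and checks its standard properties:

```python
import numpy as np

def laplacian(adjacency: np.ndarray) -> np.ndarray:
    """Graph Laplacian L = D - A, with d_ii = sum_j a_ij."""
    return np.diag(adjacency.sum(axis=1)) - adjacency

# Undirected 3-node path graph v1 -- v2 -- v3 (a_ij = 1 iff (v_i, v_j) is an edge).
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L = laplacian(A)

# For an undirected graph, L is symmetric and every row sums to zero.
assert np.allclose(L, L.T)
assert np.allclose(L.sum(axis=1), 0.0)
```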
+
+§ B. PROBLEM FORMULATION
+
+Consider a MAS with $N$ agents $\left( {i \in \{ 1,\ldots ,N\} }\right)$ , where the dynamics of the $i$ th agent with actuator faults are given as follows:
+
+$$
+{\dot{x}}_{i}\left( t\right) = A{x}_{i}\left( t\right) + B\left( {{u}_{i}\left( t\right) + {f}_{i}\left( t\right) }\right) + D{\omega }_{i}\left( t\right)
+$$
+
+$$
+{y}_{i}\left( t\right) = C{x}_{i}\left( t\right) \tag{1}
+$$
+
+where ${x}_{i}\left( t\right) \in {\mathbf{R}}^{n},{u}_{i}\left( t\right) \in {\mathbf{R}}^{m},{y}_{i}\left( t\right) \in {\mathbf{R}}^{p}$ represent the agent’s state, input, and output, respectively. The terms ${f}_{i}\left( t\right) \in {\mathbf{R}}^{q}$ and ${\omega }_{i}\left( t\right) \in {\mathbf{R}}^{s}$ denote the actuator fault and external disturbance, respectively. The matrices $A,B,C$ , and $D$ are constant matrices with appropriate dimensions.
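For concreteness, one forward-Euler integration step of the dynamics (1) can be sketched as follows (NumPy; the matrix values are taken from the simulation example in Section IV, and the step size is an illustrative choice, not from the paper):

```python
import numpy as np

def agent_step(x, u, f, w, A, B, D, dt=1e-3):
    """One forward-Euler step of (1): x_dot = A x + B (u + f) + D w."""
    return x + dt * (A @ x + B @ (u + f) + D @ w)

# Parameter matrices from the Section IV example (n = 2, m = q = s = 1).
A = np.array([[0.0, 1.0], [0.2, -2.0]])
B = np.array([[0.0], [1.0]])
D = np.array([[0.1], [0.1]])

x = np.array([1.0, 0.0])
zero = np.array([0.0])
x_next = agent_step(x, zero, zero, zero, A, B, D)
# With zero input, fault, and disturbance the update reduces to x + dt * A x.
assert np.allclose(x_next, x + 1e-3 * (A @ x))
```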
+
+This paper aims to propose a global fault-tolerant average consensus controller, so that the states of all agents achieve global average consensus, i.e., the global average consensus error ${\widetilde{x}}_{i}\left( t\right)$ satisfies:
+
+$$
+{\widetilde{x}}_{i}\left( t\right) = {x}_{i}\left( t\right) - \frac{1}{N}\mathop{\sum }\limits_{{j = 1}}^{N}{x}_{j}\left( t\right) \rightarrow 0. \tag{2}
+$$
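The error (2) can equivalently be written with the averaging projector $\mathcal{M} = I_N - \mathbf{1}_N\mathbf{1}_N^T/N$ used later in (14); a small NumPy check (illustrative dimensions, not from the paper) confirms that $\left( \mathcal{M} \otimes I_n \right) x$ stacks exactly the per-agent deviations from the average state:

```python
import numpy as np

N, n = 4, 2
rng = np.random.default_rng(0)
x = rng.standard_normal((N, n))                 # rows are agent states x_i in R^n

M = np.eye(N) - np.ones((N, N)) / N             # M = I_N - (1 1^T)/N
x_tilde = (np.kron(M, np.eye(n)) @ x.reshape(-1)).reshape(N, n)

# Block i of x_tilde equals x_i - (1/N) * sum_j x_j, i.e., the error in (2).
assert np.allclose(x_tilde, x - x.mean(axis=0))
```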
+
+To facilitate subsequent analysis, the following useful assumptions and lemma are given:
+
+Assumption 1.
+
+$$
+\operatorname{rank}\left\lbrack \begin{matrix} \mathbf{I} & D \\ C & \mathbf{0} \end{matrix}\right\rbrack = n + \operatorname{rank}\left( D\right) . \tag{3}
+$$
+
+Assumption 2. [22] The actuator fault ${f}_{i}\left( t\right)$ is differentiable with respect to time, and its time derivative ${\dot{f}}_{i}\left( t\right)$ belongs to ${L}_{2}\lbrack 0,\infty )$ . Similarly, the external disturbance ${\omega }_{i}\left( t\right)$ is bounded and also belongs to ${L}_{2}\lbrack 0,\infty )$ .
+
+Lemma 1. [21] For the undirected and connected graph $\mathfrak{g}$ , one has $\mathfrak{L}\mathcal{M} = \mathcal{M}\mathfrak{L} = \mathfrak{L}$ .
+
+§ III. MAIN RESULTS
+
+§ A. DISTRIBUTED UNKNOWN INPUT OBSERVER-BASED GLOBAL FAULT-TOLERANT AVERAGE CONSENSUS CONTROL SCHEME
+
+To reconstruct the state and actuator fault of the agent, the relative estimation error-based distributed unknown input observer for agent $i$ is proposed:
+
+$$
+{\dot{m}}_{i}\left( t\right) = {\Upsilon A}{\widehat{x}}_{i}\left( t\right) + {\Upsilon B}\left( {{u}_{i}\left( t\right) + {\widehat{f}}_{i}\left( t\right) }\right)
+$$
+
+$$
++ {L}_{1}\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{\eta }_{i} - {\eta }_{j}}\right\rbrack }\right\}
+$$
+
+$$
+{\widehat{x}}_{i}\left( t\right) = {m}_{i}\left( t\right) + \Theta {y}_{i}\left( t\right)
+$$
+
+$$
+{\dot{\widehat{f}}}_{i}\left( t\right) = {L}_{2}\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{\eta }_{i} - {\eta }_{j}}\right\rbrack }\right\}
+$$
+
+$$
+{\widehat{y}}_{i} = C{\widehat{x}}_{i}\left( t\right) \tag{4}
+$$
+
+where ${m}_{i}\left( t\right) ,{\widehat{x}}_{i}\left( t\right) ,{\widehat{f}}_{i}\left( t\right)$ , and ${\widehat{y}}_{i}$ denote the state of the unknown input observer, the state estimation, the actuator fault estimation, and the output estimation for agent $i$ , respectively. Here, ${\eta }_{i} = {y}_{i}\left( t\right) - {\widehat{y}}_{i}\left( t\right)$ denotes the output estimation error, and ${\eta }_{i} - {\eta }_{j}$ denotes the relative estimation error. In addition, the global fault-tolerant average consensus controller for agent $i$ is proposed:
+
+$$
+{u}_{i}\left( t\right) = E{\widehat{x}}_{i}\left( t\right) - {\widehat{f}}_{i}\left( t\right) + K\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{\eta }_{i} - {\eta }_{j}}\right\rbrack }\right\} . \tag{5}
+$$
+
+Then, for agent $i$ , the state estimation error system can be denoted as below:
+
+$$
+{\dot{e}}_{xi}\left( t\right) = {\dot{x}}_{i}\left( t\right) - {\dot{m}}_{i}\left( t\right) - {\Theta C}{\dot{x}}_{i}\left( t\right) . \tag{6}
+$$
+
+The following condition for the matrices $\Upsilon$ and $\Theta$ can be obtained based on Assumption 1:
+
+$$
+\left\lbrack \begin{array}{ll} \mathbf{\Upsilon } & \Theta \end{array}\right\rbrack \left\lbrack \begin{matrix} \mathbf{I} & D \\ C & \mathbf{0} \end{matrix}\right\rbrack = \left\lbrack \begin{array}{ll} \mathbf{I} & \mathbf{0} \end{array}\right\rbrack
+$$
+
+which can be rewritten as follows:
+
+$$
+{\Upsilon D} = \mathbf{0},\mathbf{I} - {\Theta C} = \Upsilon . \tag{7}
+$$
+
+Then, based on the above conditions, one has
+
+$$
+{\dot{e}}_{xi}\left( t\right) = {\Upsilon A}{x}_{i}\left( t\right) + {\Upsilon B}\left( {{u}_{i}\left( t\right) + {f}_{i}\left( t\right) }\right) + {\Upsilon D}{\omega }_{i}\left( t\right) - {\Upsilon A}{\widehat{x}}_{i}\left( t\right)
+$$
+
+$$
+- {\Upsilon B}\left( {{u}_{i}\left( t\right) + {\widehat{f}}_{i}\left( t\right) }\right) - {L}_{1}\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{\eta }_{i} - {\eta }_{j}}\right\rbrack }\right\}
+$$
+
+$$
+= {\Upsilon A}{e}_{xi}\left( t\right) + {\Upsilon B}{e}_{fi}\left( t\right)
+$$
+
+$$
+- {L}_{1}C\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{e}_{xi}\left( t\right) - {e}_{xj}\left( t\right) }\right\rbrack }\right\} , \tag{8}
+$$
+
+and the fault estimation error system can be denoted as:
+
+$$
+{\dot{e}}_{fi}\left( t\right) = - {L}_{2}C\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{e}_{xi}\left( t\right) - {e}_{xj}\left( t\right) }\right\rbrack }\right\} + {\dot{f}}_{i}\left( t\right) . \tag{9}
+$$
+
+Denote the vector ${e}_{i}\left( t\right) = {\left\lbrack {{e}_{xi}^{T}\left( t\right) ,{e}_{fi}^{T}\left( t\right) }\right\rbrack }^{T}$ ; then the augmented estimation error system can be obtained:
+
+$$
+{\dot{e}}_{i}\left( t\right) = \widetilde{A}{e}_{i}\left( t\right) - L\bar{C}\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{e}_{i}\left( t\right) - {e}_{j}\left( t\right) }\right\rbrack }\right\} + \widehat{I}{\dot{f}}_{i}\left( t\right) \tag{10}
+$$
+
+where
+
+$$
+\widetilde{A} = \left\lbrack \begin{matrix} {\Upsilon A} & {\Upsilon B} \\ \mathbf{0} & \mathbf{0} \end{matrix}\right\rbrack ,L = \left\lbrack \begin{array}{l} {L}_{1} \\ {L}_{2} \end{array}\right\rbrack ,\bar{C} = \left\lbrack \begin{array}{ll} C & \mathbf{0} \end{array}\right\rbrack ,\widehat{I} = \left\lbrack \begin{array}{l} \mathbf{0} \\ \mathbf{I} \end{array}\right\rbrack .
+$$
+
+Define the vectors
+
+$$
+\dot{f}\left( t\right) = {\left\lbrack \begin{array}{lll} {\dot{f}}_{1}^{T}\left( t\right) & \ldots & {\dot{f}}_{N}^{T}\left( t\right) \end{array}\right\rbrack }^{T},
+$$
+
+$$
+e\left( t\right) = {\left\lbrack \begin{array}{lll} {e}_{1}^{T}\left( t\right) & \ldots & {e}_{N}^{T}\left( t\right) \end{array}\right\rbrack }^{T}.
+$$
+
+Then, the estimation error system can be rewritten as:
+
+$$
+\dot{e}\left( t\right) = \left( {{I}_{N} \otimes \widetilde{A} - \mathfrak{L} \otimes L\bar{C}}\right) e\left( t\right) + {I}_{N} \otimes \widehat{I}\dot{f}\left( t\right) . \tag{11}
+$$
+
+In addition, for agent $i$ , the closed-loop system can be written as:
+
+$$
+{\dot{x}}_{i}\left( t\right) = A{x}_{i}\left( t\right) + B\left( {E{\widehat{x}}_{i}\left( t\right) - {\widehat{f}}_{i}\left( t\right) + K\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{\eta }_{i} - {\eta }_{j}}\right\rbrack }\right\} }\right.
+$$
+
+$$
+\left. {+{f}_{i}\left( t\right) }\right) + D{\omega }_{i}\left( t\right)
+$$
+
+$$
+= \left( {A + {BE}}\right) {x}_{i}\left( t\right) - {BE}{e}_{xi}\left( t\right) + B{e}_{fi}\left( t\right)
+$$
+
+$$
++ {BKC}\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{e}_{xi}\left( t\right) - {e}_{xj}\left( t\right) }\right\rbrack }\right\} + D{\omega }_{i}\left( t\right)
+$$
+
+$$
+= \left( {A + {BE}}\right) {x}_{i}\left( t\right) + \widetilde{B}{e}_{i}\left( t\right)
+$$
+
+$$
++ {BK}\bar{C}\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{e}_{i}\left( t\right) - {e}_{j}\left( t\right) }\right\rbrack }\right\} + D{\omega }_{i}\left( t\right) \tag{12}
+$$
+
+where $\widetilde{B} = \left\lbrack \begin{array}{ll} - {BE} & B \end{array}\right\rbrack$ .
+
+To achieve global average consensus, recall the global average consensus error (2) for agent $i$ and define the vectors
+
+$$
+\widetilde{x}\left( t\right) = {\left\lbrack \begin{array}{lll} {\widetilde{x}}_{1}^{T}\left( t\right) & \ldots & {\widetilde{x}}_{N}^{T}\left( t\right) \end{array}\right\rbrack }^{T},
+$$
+
+$$
+x\left( t\right) = {\left\lbrack \begin{array}{lll} {x}_{1}^{T}\left( t\right) & \ldots & {x}_{N}^{T}\left( t\right) \end{array}\right\rbrack }^{T},
+$$
+
+$$
+\omega \left( t\right) = {\left\lbrack \begin{array}{lll} {\omega }_{1}^{T}\left( t\right) & \ldots & {\omega }_{N}^{T}\left( t\right) \end{array}\right\rbrack }^{T}.
+$$
+
+Then, the closed-loop system can be rewritten as:
+
+$$
+\dot{x}\left( t\right) = \left( {{I}_{N} \otimes \left( {A + {BE}}\right) }\right) x\left( t\right) + \left( {{I}_{N} \otimes \widetilde{B}}\right.
+$$
+
+$$
++ \mathfrak{L} \otimes {BK}\bar{C})e\left( t\right) + \left( {{I}_{N} \otimes D}\right) \omega \left( t\right) . \tag{13}
+$$
+
+So, for the global average consensus error
+
+$$
+\widetilde{x}\left( t\right) = \left( {\mathcal{M} \otimes {I}_{n}}\right) x\left( t\right) \tag{14}
+$$
+
+where $\mathcal{M} = {I}_{N} - \frac{{1}_{N}{1}_{N}^{T}}{N}$ , using the mixed-product property of the Kronecker product and Lemma 1, it can be written as
+
+$$
+\dot{\widetilde{x}}\left( t\right) = \left( {\mathcal{M} \otimes {I}_{n}}\right) \left( {{I}_{N} \otimes \left( {A + {BE}}\right) }\right) x\left( t\right)
+$$
+
+$$
++ \left( {\mathcal{M} \otimes {I}_{n}}\right) \left( {{I}_{N} \otimes \widetilde{B} + \mathfrak{L} \otimes {BK}\bar{C}}\right) e\left( t\right)
+$$
+
+$$
++ \left( {\mathcal{M} \otimes {I}_{n}}\right) \left( {{I}_{N} \otimes D}\right) \omega \left( t\right)
+$$
+
+$$
+= \left( {{I}_{N} \otimes \left( {A + {BE}}\right) }\right) \widetilde{x}\left( t\right) + \left( {\mathcal{M} \otimes \widetilde{B} + \mathfrak{L} \otimes {BK}\bar{C}}\right) e\left( t\right)
+$$
+
+$$
++ \left( {\mathcal{M} \otimes D}\right) \omega \left( t\right) \text{ . } \tag{15}
+$$
+
+§ B. STABILITY ANALYSIS
+
+Theorem 1. For given scalar $\alpha > 0$ , matrices $\Upsilon ,\Theta ,L,K$ , controller feedback gain matrix $E$ , Laplacian matrix $\mathfrak{L}$ , matrix $\mathcal{M}$ , if there exist matrices $Q = {Q}^{T} > 0,P = {P}^{T} > 0$ with appropriate dimensions, such that the following condition holds
+
+$$
+\Phi = \left\lbrack \begin{matrix} {\Phi }_{1} & {\Phi }_{2} & {\Phi }_{3} & \mathbf{0} \\ * & {\Phi }_{4} & \mathbf{0} & {\Phi }_{5} \\ * & * & {\Phi }_{6} & \mathbf{0} \\ * & * & * & {\Phi }_{7} \end{matrix}\right\rbrack < 0 \tag{16}
+$$
+
+where ${\Phi }_{1} = \operatorname{He}\left\{ {{I}_{N} \otimes \left( {{QA} + {QBE}}\right) }\right\} + \alpha {I}_{N} \otimes Q$ , ${\Phi }_{2} = \mathcal{M} \otimes Q\widetilde{B} + \mathfrak{L} \otimes {QBK}\bar{C}$ , ${\Phi }_{3} = \mathcal{M} \otimes {QD}$ , ${\Phi }_{4} = \operatorname{He}\left\{ {{I}_{N} \otimes P\widetilde{A} - \mathfrak{L} \otimes {PL}\bar{C}}\right\} + \alpha {I}_{N} \otimes P$ , ${\Phi }_{5} = {I}_{N} \otimes P\widehat{I}$ , ${\Phi }_{6} = - {I}_{N} \otimes {I}_{{n}_{\omega }}$ , ${\Phi }_{7} = - {I}_{N} \otimes {I}_{{n}_{f}}$ , then all the signals of the estimation error system (11) and the global average consensus error system (15) are bounded.
+
+Proof. The Lyapunov function is chosen as:
+
+$$
+V\left( t\right) = {V}_{1}\left( t\right) + {V}_{2}\left( t\right) \tag{17}
+$$
+
+where ${V}_{1}\left( t\right) = {\widetilde{x}}^{T}\left( t\right) \widetilde{Q}\widetilde{x}\left( t\right) ,{V}_{2}\left( t\right) = {e}^{T}\left( t\right) \widetilde{P}e\left( t\right) ,\widetilde{P} = {I}_{N} \otimes P,\widetilde{Q} = {I}_{N} \otimes Q$ . Taking the derivative of the above function along (11) and (15), the following can be obtained:
+
+$$
+\dot{V}\left( t\right) \leq 2{e}^{T}\left( t\right) \widetilde{P}\dot{e}\left( t\right) + 2{\widetilde{x}}^{T}\left( t\right) \widetilde{Q}\dot{\widetilde{x}}\left( t\right)
+$$
+
+$$
+\leq 2{e}^{T}\left( t\right) \widetilde{P}\left( {\left( {{I}_{N} \otimes \widetilde{A} - \mathfrak{L} \otimes L\bar{C}}\right) e\left( t\right) + {I}_{N} \otimes \widehat{I}\dot{f}\left( t\right) }\right)
+$$
+
+$$
++ 2{\widetilde{x}}^{T}\left( t\right) \widetilde{Q}\left( {\left( {{I}_{N} \otimes \left( {A + {BE}}\right) }\right) \widetilde{x}\left( t\right) }\right.
+$$
+
+$$
+\left. {+\left( {\mathcal{M} \otimes \widetilde{B} + \mathfrak{L} \otimes {BK}\bar{C}}\right) e\left( t\right) + \left( {\mathcal{M} \otimes D}\right) \omega \left( t\right) }\right)
+$$
+
+$$
+\leq {e}^{T}\left( t\right) \operatorname{He}\left\{ {\left( {{I}_{N} \otimes P}\right) \left( {{I}_{N} \otimes \widetilde{A} - \mathfrak{L} \otimes L\bar{C}}\right) }\right\} e\left( t\right)
+$$
+
+$$
++ 2{e}^{T}\left( t\right) \left( {{I}_{N} \otimes P}\right) \left( {{I}_{N} \otimes \widehat{I}}\right) \dot{f}\left( t\right)
+$$
+
+$$
++ {\widetilde{x}}^{T}\left( t\right) {He}\left\{ {\left( {{I}_{N} \otimes Q}\right) \left( {{I}_{N} \otimes \left( {A + {BE}}\right) }\right) }\right\} \widetilde{x}\left( t\right)
+$$
+
+$$
++ 2{\widetilde{x}}^{T}\left( t\right) \left( {{I}_{N} \otimes Q}\right) \left( {\mathcal{M} \otimes \widetilde{B} + \mathfrak{L} \otimes {BK}\bar{C}}\right) e\left( t\right)
+$$
+
+$$
++ 2{\widetilde{x}}^{T}\left( t\right) \left( {{I}_{N} \otimes Q}\right) \left( {\mathcal{M} \otimes D}\right) \omega \left( t\right) . \tag{18}
+$$
+
+According to the properties of the Kronecker product, we can get:
+
+$$
+\dot{V}\left( t\right) \leq {e}^{T}\left( t\right) \operatorname{He}\left\{ {{I}_{N} \otimes P\widetilde{A} - \mathfrak{L} \otimes {PL}\bar{C}}\right\} e\left( t\right)
+$$
+
+$$
++ {\widetilde{x}}^{T}\left( t\right) {He}\left\{ {{I}_{N} \otimes \left( {{QA} + {QBE}}\right) }\right\} \widetilde{x}\left( t\right)
+$$
+
+$$
++ 2{\widetilde{x}}^{T}\left( t\right) \left( {\mathcal{M} \otimes Q\widetilde{B} + \mathfrak{L} \otimes {QBK}\bar{C}}\right) e\left( t\right)
+$$
+
+$$
++ 2{\widetilde{x}}^{T}\left( t\right) \left( {\mathcal{M} \otimes {QD}}\right) \omega \left( t\right) + 2{e}^{T}\left( t\right) \left( {{I}_{N} \otimes P\widehat{I}}\right) \dot{f}\left( t\right) .
+$$
+
+Define $\xi \left( t\right) = {\left\lbrack {{\widetilde{x}}^{T}\left( t\right) ,{e}^{T}\left( t\right) ,{\omega }^{T}\left( t\right) ,{\dot{f}}^{T}\left( t\right) }\right\rbrack }^{T}$ ; then, if the following linear matrix inequality holds
+
+$$
+\Phi = \left\lbrack \begin{matrix} {\Phi }_{1} & {\Phi }_{2} & {\Phi }_{3} & \mathbf{0} \\ * & {\Phi }_{4} & \mathbf{0} & {\Phi }_{5} \\ * & * & {\Phi }_{6} & \mathbf{0} \\ * & * & * & {\Phi }_{7} \end{matrix}\right\rbrack < 0 \tag{19}
+$$
+
+where
+
+$$
+{\Phi }_{1} = {He}\left\{ {{I}_{N} \otimes \left( {{QA} + {QBE}}\right) }\right\} + \alpha {I}_{N} \otimes Q,
+$$
+
+$$
+{\Phi }_{2} = \mathcal{M} \otimes Q\widetilde{B} + \mathfrak{L} \otimes {QBK}\bar{C},
+$$
+
+$$
+{\Phi }_{3} = \mathcal{M} \otimes {QD},
+$$
+
+$$
+{\Phi }_{4} = {He}\left\{ {{I}_{N} \otimes P\widetilde{A} - \mathfrak{L} \otimes {PL}\bar{C}}\right\} + \alpha {I}_{N} \otimes P,
+$$
+
+$$
+{\Phi }_{5} = {I}_{N} \otimes P\widehat{I},
+$$
+
+$$
+{\Phi }_{6} = - {I}_{N} \otimes {I}_{{n}_{\omega }},
+$$
+
+$$
+{\Phi }_{7} = - {I}_{N} \otimes {I}_{{n}_{f}},
+$$
+
+we have
+
+$$
+\dot{V}\left( t\right) \leq - \alpha {e}^{T}\left( t\right) \widetilde{P}e\left( t\right) - \alpha {\widetilde{x}}^{T}\left( t\right) \widetilde{Q}\widetilde{x}\left( t\right) + \parallel \omega \left( t\right) {\parallel }^{2} + \parallel \dot{f}\left( t\right) {\parallel }^{2}
+$$
+
+$$
+\leq - {\alpha V}\left( t\right) + \Delta \left( t\right) \text{ . } \tag{20}
+$$
+
+where $\Delta \left( t\right) = \parallel \omega \left( t\right) {\parallel }^{2} + \parallel \dot{f}\left( t\right) {\parallel }^{2}$ . By the comparison lemma, $V\left( t\right) \leq {e}^{-{\alpha t}}V\left( 0\right) + {\int }_{0}^{t}{e}^{-\alpha \left( {t - s}\right) }\Delta \left( s\right) {ds} \leq {e}^{-{\alpha t}}V\left( 0\right) + \parallel \omega {\parallel }_{{L}_{2}}^{2} + \parallel \dot{f}{\parallel }_{{L}_{2}}^{2}$ , which is finite under Assumption 2. Therefore, the global average consensus of MASs (1) and the boundedness of the estimation error system (11) can be guaranteed. The proof is completed.
+
+The gain matrices $L$ and $K$ can be obtained from (16) through some algebraic operations, as stated in the following theorem.
+
+Theorem 2. For given scalar $\alpha > 0$ , matrices $\Upsilon ,\Theta$ , controller feedback gain matrix $E$ , Laplacian matrix $\mathfrak{L}$ , matrix $\mathcal{M}$ , if there exist symmetric positive definite matrices $S,P$ , matrices $K,{P}_{L}$ with appropriate dimensions, such that the following condition holds
+
+$$
+\Psi = \left\lbrack \begin{matrix} {\Psi }_{1} & {\Psi }_{2} & {\Psi }_{3} & \mathbf{0} \\ * & {\Psi }_{4} & \mathbf{0} & {\Psi }_{5} \\ * & * & {\Psi }_{6} & \mathbf{0} \\ * & * & * & {\Psi }_{7} \end{matrix}\right\rbrack < 0 \tag{21}
+$$
+
+where ${\Psi }_{1} = \operatorname{He}\left\{ {{I}_{N} \otimes \left( {{AS} + {BES}}\right) }\right\} + \alpha {I}_{N} \otimes S$ , ${\Psi }_{2} = \mathcal{M} \otimes \widetilde{B} + \mathfrak{L} \otimes {BK}\bar{C}$ , ${\Psi }_{3} = \mathcal{M} \otimes D$ , ${\Psi }_{4} = \operatorname{He}\left\{ {{I}_{N} \otimes P\widetilde{A} - \mathfrak{L} \otimes {P}_{L}\bar{C}}\right\} + \alpha {I}_{N} \otimes P$ , ${\Psi }_{5} = {I}_{N} \otimes P\widehat{I}$ , ${\Psi }_{6} = - {I}_{N} \otimes {I}_{{n}_{\omega }}$ , ${\Psi }_{7} = - {I}_{N} \otimes {I}_{{n}_{f}}$ , and $S = {Q}^{-1}$ , then all the signals of the estimation error system (11) and the global average consensus error system (15) are bounded, and the gain matrix $L = {P}^{-1}{P}_{L}$ .
+
+Proof. Post- and pre-multiplying (19) by $\operatorname{diag}\left\{ {{I}_{N} \otimes }\right.$ $\left. {{Q}^{-1},{I}_{N} \otimes {I}_{{n}_{x} + {n}_{f}},{I}_{N} \otimes {I}_{{n}_{\omega }},{I}_{N} \otimes {I}_{{n}_{f}}}\right\}$ , the linear matrix inequality (21) can be deduced. This proof is completed.
+
+§ IV. EXAMPLE
+
+In this example, a group of five agents is considered. The dynamics of the agents take the form
+
+$$
+{\dot{x}}_{i}\left( t\right) = A{x}_{i}\left( t\right) + B\left( {{u}_{i}\left( t\right) + {f}_{i}\left( t\right) }\right) + D{\omega }_{i}\left( t\right)
+$$
+
+$$
+{y}_{i}\left( t\right) = C{x}_{i}\left( t\right) \tag{22}
+$$
+
+which are borrowed from [23], and parameter matrices are given as below
+
+$$
+A = \left\lbrack \begin{matrix} 0 & 1 \\ {0.2} & - 2 \end{matrix}\right\rbrack ,B = \left\lbrack \begin{array}{l} 0 \\ 1 \end{array}\right\rbrack ,C = \left\lbrack \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right\rbrack ,D = \left\lbrack \begin{array}{l} {0.1} \\ {0.1} \end{array}\right\rbrack .
+$$
+
+The communication graph considered in this paper is shown below:
+
+
+Fig. 1: Communication graph.
+
+From Fig. 1, one has
+
+$$
+\mathfrak{L} = \left\lbrack \begin{matrix} 2 & 0 & - 1 & - 1 & 0 \\ 0 & 2 & 0 & - 1 & - 1 \\ - 1 & 0 & 2 & - 1 & 0 \\ - 1 & - 1 & - 1 & 3 & 0 \\ 0 & - 1 & 0 & 0 & 1 \end{matrix}\right\rbrack
+$$
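The Laplacian above can be checked against the properties used in the analysis, in particular Lemma 1 ($\mathfrak{L}\mathcal{M} = \mathcal{M}\mathfrak{L} = \mathfrak{L}$ with $\mathcal{M} = {I}_{N} - \frac{{1}_{N}{1}_{N}^{T}}{N}$); a short NumPy verification:

```python
import numpy as np

Lap = np.array([[ 2.,  0., -1., -1.,  0.],
                [ 0.,  2.,  0., -1., -1.],
                [-1.,  0.,  2., -1.,  0.],
                [-1., -1., -1.,  3.,  0.],
                [ 0., -1.,  0.,  0.,  1.]])
N = Lap.shape[0]
M = np.eye(N) - np.ones((N, N)) / N

assert np.allclose(Lap, Lap.T)              # undirected graph: symmetric Laplacian
assert np.allclose(Lap.sum(axis=1), 0.0)    # zero row sums (Lap @ 1 = 0)
assert np.allclose(Lap @ M, Lap)            # Lemma 1
assert np.allclose(M @ Lap, Lap)            # Lemma 1
```

Both identities in Lemma 1 follow from the zero row sums together with symmetry, which the assertions confirm for this particular graph.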
+
+To obtain the pre-designed unknown input observer gain matrices, the matrix ${M}_{\varkappa }$ can be selected as follows:
+
+$$
+{M}_{\varkappa } = \left\lbrack \begin{array}{llll} - {6.7245} & - {9.1869} & - {9.4050} & - {7.5082} \\ - {5.2013} & - {8.2981} & - {7.0737} & - {8.8809} \end{array}\right\rbrack ,
+$$
+
+according to the following condition
+
+$$
+\left\lbrack \begin{array}{ll} \mathbf{\Upsilon } & \Theta \end{array}\right\rbrack = \left\lbrack \begin{array}{ll} \mathbf{I} & \mathbf{0} \end{array}\right\rbrack \times {\left\lbrack \begin{matrix} \mathbf{I} & D \\ C & \mathbf{0} \end{matrix}\right\rbrack }^{ \dagger }
+$$
+
+$$
+- {M}_{\varkappa }\left( {\mathbf{I} - \left\lbrack \begin{matrix} \mathbf{I} & D \\ C & \mathbf{0} \end{matrix}\right\rbrack \times {\left\lbrack \begin{matrix} \mathbf{I} & D \\ C & \mathbf{0} \end{matrix}\right\rbrack }^{ \dagger }}\right) ,
+$$
+
+the pre-designed unknown input observer gain matrices can be obtained:
+
+$$
+\Upsilon = \left\lbrack \begin{matrix} {0.1086} & - {0.1086} \\ - {1.4760} & {1.4760} \end{matrix}\right\rbrack ,\Theta = \left\lbrack \begin{matrix} {0.1086} & {0.8914} \\ - {0.4760} & {1.4760} \end{matrix}\right\rbrack .
+$$
+
+Then, the parameters required to solve Theorem 2 are selected as $E = \left\lbrack \begin{array}{ll} -{18.7279} & -{7.9363} \end{array}\right\rbrack$ and $\alpha = {0.4}$ . The following matrices make inequality (21) negative definite:
+
+$$
+P = \left\lbrack \begin{matrix} {22.2529} & {0.8245} & {0.2564} \\ {0.8245} & {7.9547} & - {2.6069} \\ {0.2564} & - {2.6069} & {1.0677} \end{matrix}\right\rbrack ,
+$$
+
+$$
+S = \left\lbrack \begin{matrix} {14.8878} & - {23.6762} \\ - {23.6762} & {47.0985} \end{matrix}\right\rbrack ,
+$$
+
+$$
+K = \left\lbrack \begin{array}{ll} - {2.7207} & - {6.7659} \end{array}\right\rbrack ,
+$$
+
+$$
+{P}_{L} = \left\lbrack \begin{matrix} {0.3396} & {12.6409} \\ - {0.9358} & {1.4581} \\ {6.4466} & - {0.6400} \end{matrix}\right\rbrack
+$$
+
+where gain matrix
+
+$$
+L = {P}^{-1}{P}_{L} = \left\lbrack \begin{matrix} - {0.7049} & {0.6177} \\ {9.9518} & - {0.6290} \\ {30.5037} & - {2.2834} \end{matrix}\right\rbrack .
+$$
+
+Next, experimental results are presented to verify the effectiveness of the proposed scheme. The initial states of the agents are selected as ${x}_{1}\left( 0\right) = \left\lbrack {8;8}\right\rbrack ,{x}_{2}\left( 0\right) = \left\lbrack {8; - 8}\right\rbrack$ , ${x}_{3}\left( 0\right) = \left\lbrack {-8;8}\right\rbrack ,{x}_{4}\left( 0\right) = \left\lbrack {-8; - 8}\right\rbrack ,{x}_{5}\left( 0\right) = \left\lbrack {7;{12}}\right\rbrack$ . The external disturbance is ${\omega }_{i}\left( t\right) = {30}\sin \left( {2t}\right)$ , and agents 1 and 2 are considered faulty; the faults they encounter are given as follows:
+
+$$
+{f}_{1}\left( t\right) = \left\{ {\begin{array}{ll} 2{e}^{-{0.1}\left( {t - 5}\right) }\sin \left( {{1.2}\left( {t - 5}\right) }\right) , & t \in \left\lbrack {5,{10}}\right\rbrack \\ 0, & \text{ otherwise } \end{array},}\right.
+$$
+
+$$
+{f}_{2}\left( t\right) = \left\{ {\begin{array}{ll} 2\sin \left( {{1.2}\left( {t - {15}}\right) }\right) , & t \in \left\lbrack {{15},{20}}\right\rbrack \\ 0, & \text{ otherwise } \end{array}.}\right.
+$$
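For reproducibility, the disturbance and fault profiles above translate directly into code; a minimal sketch (NumPy; the time unit is as in the simulation):

```python
import numpy as np

def omega(t):
    """External disturbance omega_i(t) = 30 sin(2t), common to all agents."""
    return 30.0 * np.sin(2.0 * t)

def f1(t):
    """Agent 1's fault: decaying sinusoid active on t in [5, 10]."""
    return 2.0 * np.exp(-0.1 * (t - 5.0)) * np.sin(1.2 * (t - 5.0)) if 5.0 <= t <= 10.0 else 0.0

def f2(t):
    """Agent 2's fault: sinusoid active on t in [15, 20]."""
    return 2.0 * np.sin(1.2 * (t - 15.0)) if 15.0 <= t <= 20.0 else 0.0

# Faults vanish outside their windows and start from zero at the window edges.
assert f1(0.0) == 0.0 and f2(0.0) == 0.0
assert abs(f1(5.0)) < 1e-12 and abs(f2(15.0)) < 1e-12
```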
+
+
+Fig. 2: Curves of state/fault and their estimations (agent 1).
+
+
+Fig. 3: Curves of state/fault and their estimations (agent 2).
+
+
+Fig. 4: Curves of state/fault and their estimations (agent 3).
+
+
+Fig. 5: Curves of state/fault and their estimations (agent 4).
+
+
+Fig. 6: Curves of state/fault and their estimations (agent 5).
+
+
+Fig. 7: Curves of global average consensus error ${\widetilde{x}}_{i}\left( t\right)$ .
+
+As can be seen from Figs. 2-6, the proposed scheme (4) can effectively reduce the influence of external disturbance ${\omega }_{i}\left( t\right)$ on the estimation performance and realize accurate estimations of the agent state and fault. Based on the accurate estimations obtained by scheme (4) and the relative estimation error ${\eta }_{i} - {\eta }_{j}$ , the proposed global fault-tolerant average consensus controller (5) can make the global average consensus errors ${\widetilde{x}}_{i}\left( t\right)$ approach zero, as shown in Fig. 7.
+
+§ V. CONCLUSION
+
+In this paper, the distributed unknown input observer-based global fault-tolerant average consensus control problem for linear MASs has been investigated. First, a distributed unknown input observer based on relative estimation error has been proposed, which can mitigate the impact of external disturbances on estimation performance, thereby achieving accurate estimations of state and fault. Then, based on the obtained estimations and the relative estimation error, a global fault-tolerant average consensus controller has been developed. The proposed scheme can compensate for fault impacts while ensuring global average consensus of the MASs. Finally, simulation experiments have been given to validate the effectiveness of the proposed control scheme.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/3dNL0Q0j8f/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/3dNL0Q0j8f/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..7be94d63d753303ffe2357ebcc02f0b0c6ebb7c8
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/3dNL0Q0j8f/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,397 @@
+# Privacy-Preserving Event-Triggered Predefined Time Containment Control for Networked Agent Systems
+
+Weihao ${\mathrm{{Li}}}^{1,2,3, \dagger }$ , Jiangfeng ${\mathrm{{Yue}}}^{1,2,3, \dagger }$ , Mengji ${\mathrm{{Shi}}}^{1,2,3, * }$ , Boxian ${\mathrm{{Lin}}}^{1,2,3}$ , Kaiyu ${\mathrm{{Qin}}}^{1,2,3}$
+
+${}^{1}$ School of Aeronautics and Astronautics, University of Electronic Science and Technology of China, Chengdu, China.
+
+${}^{2}$ Aircraft Swarm Intelligent Sensing and Cooperative Control Key Laboratory of Sichuan Province, Chengdu, China.
+
+${}^{3}$ National Laboratory on Adaptive Optics, Chengdu, 610209, China.
+
+Email: maangat@126.com
+
+Abstract-This paper addresses the privacy-preserving event-triggered predefined time containment control problem for networked agent systems. A novel containment control scheme is developed that integrates privacy protection with event-triggered mechanisms, optimizing network efficiency by minimizing unnecessary data transmission while ensuring robust containment within a specified time frame. The proposed control scheme ensures the confidentiality of agents' information through output masking, thereby maintaining both privacy and control accuracy. Furthermore, it provides a distinct advantage over traditional finite-time and fixed-time control methods by guaranteeing convergence to the desired state within a predefined time, regardless of initial conditions. Finally, some simulation results are given to verify the effectiveness of the proposed containment control scheme.
+
+Index Terms-Containment Control; Privacy-preserving; Predefined Time; Event-triggered Control; Networked Agent Systems.
+
+## I. INTRODUCTION
+
+Networked agent systems have garnered significant attention across various fields due to their broad range of applications, including robotics [1], autonomous vehicles [2], and distributed sensor networks [3]. The cooperative control of networked agent systems involves designing strategies that enable agents to work together effectively to achieve shared objectives. A prominent approach within cooperative control is containment control [4], [5], which aims to ensure that a group of agents (followers) remains within a specified region or adheres to a particular trajectory, while another group of agents (leaders) directs their behavior. Containment control is particularly crucial in scenarios requiring strict spatial or operational constraint adherence. For instance, in a formation flying scenario, containment control can ensure that a group of drones maintains a specific formation while another set of drones guides their collective movement [6].
+
+Convergence speed is a critical performance metric in the containment control of networked agent systems. Current research explores several approaches to achieving convergence, including asymptotic convergence [7], finite-time convergence [8], and fixed-time convergence [9]. Asymptotic convergence guarantees that the system will eventually converge to the desired state over time, although the convergence rate may not be specified. Finite-time convergence ensures that the system reaches the desired state within a finite period, though the exact time depends on system parameters and states. Fixed-time convergence provides a guarantee of convergence within a predetermined time, irrespective of initial conditions, thereby offering more predictability in performance. However, the convergence time in both finite-time and fixed-time approaches is influenced by system parameters and states. To address this, researchers have developed predefined time control schemes that enable the specification of a desired convergence time [10], [11]. The primary advantages of predefined-time control include the ability to guarantee convergence within a specified time frame, thereby providing more predictable and controllable system behavior, and enhancing system performance by setting precise deadlines for achieving the desired state.
+
+The existing literature [10]-[13] on predefined-time convergence in networked agent systems generally overlooks the issue of information privacy during transmission. However, privacy protection is of paramount importance in containment control, where safeguarding the confidentiality of agents' information is critical. Several methods for privacy protection have been proposed, including state decomposition [14], differential privacy [15], additive noise [16], and output masking [17]. Among these, output masking has received considerable attention due to its simplicity and ease of implementation. This method involves obscuring the output of agents to protect sensitive information while still allowing effective control. However, output masking relies on continuous information exchange, which can impose constraints on communication bandwidth. To address this limitation, it is necessary to develop privacy protection schemes under event-triggered mechanisms [18], [19], which can alleviate communication bandwidth constraints. In [19], the authors integrated both privacy preservation and event-triggered mechanisms into consensus and containment control but overlooked predefined performance. Zhang et al. [20] incorporated prescribed-time theory and privacy preservation into consensus control but neglected bandwidth constraints. In conclusion, to the best of the authors' knowledge, no existing solution simultaneously addresses the challenges of communication bandwidth, convergence time, and privacy protection in containment control, making this an area of significant research opportunity.
+
+---
+
+$\dagger$: These authors contributed equally to this paper.
+
+This work was supported by the Natural Science Foundation of Sichuan Province (2022NSFSC0037), the Sichuan Science and Technology Programs (2022JDR0107, 2021YFG0130, MZGC20230069, MZGC20240139), the Fundamental Research Funds for the Central Universities (ZYGX2020J020), the Wuhu Science and Technology Plan Project (2022yf23). (Corresponding author: Mengji Shi.)
+
+---
+
+According to the above discussion, this paper focuses on the privacy-preserving event-triggered predefined time containment control problem of networked agent systems. The main contributions of this paper are summarized as follows:
+
+(1) A novel event-triggered predefined-time containment control scheme is developed to optimize network efficiency while ensuring robust containment performance within a specified time frame. By employing event-triggered control, the scheme significantly reduces unnecessary data transmission, ensuring that agents communicate only when necessary. This approach effectively balances communication efficiency and system performance.
+
+(2) The proposed control scheme guarantees convergence within a predefined time, offering a distinct advantage over finite-time and fixed-time methods. Unlike these traditional methods, where convergence time is often influenced by initial conditions and system parameters, the predefined time control ensures that the desired state is consistently reached within the predetermined time frame, thereby enhancing the predictability and reliability of the system.
+
+(3) Furthermore, a privacy-preserving containment control scheme is designed to safeguard the confidentiality of agents' information by masking their outputs while maintaining accurate control. Compared to alternative privacy protection methods such as differential privacy or state decomposition, this scheme provides a simpler and more efficient solution. It ensures both privacy and communication efficiency without compromising the overall system performance, making it particularly suitable for applications with stringent privacy and bandwidth requirements.
+
+The remainder of this paper is organized as follows. Section II presents the preliminaries and formulates the problem, Section III designs the privacy-preserving event-triggered predefined-time containment controller, Section IV provides numerical simulation examples, and Section V concludes the paper.
+
+## II. PRELIMINARY AND PROBLEM FORMULATION
+
+## A. Preliminaries
+
+The communication structure among agents in this study is represented by a graph topology denoted as $\mathcal{G} = \langle \mathcal{V},\mathcal{E},\mathcal{A}\rangle$ , where $\mathcal{V},\mathcal{E}$ , and $\mathcal{A}$ correspond to the set of nodes, the set of edges, and the adjacency matrix, respectively. The network consists of a total of $N = m + n$ agents, with $n$ being the number of follower agents and $m$ being the number of leader agents. The leader and follower agents are categorized into sets ${\mathcal{V}}_{L} = \{ 1,2,\ldots , m\}$ and ${\mathcal{V}}_{F} = \{ m + 1, m + 2,\ldots , m + n\}$ , respectively. Consequently, the overall set of nodes is formed by the union of these two sets, $\mathcal{V} = {\mathcal{V}}_{F} \cup {\mathcal{V}}_{L}$ . Following the definitions of the node sets, the adjacency matrix is represented as $\mathcal{A} = \left\lbrack {a}_{ij}\right\rbrack \in {\mathcal{R}}^{\left( {n + m}\right) \times \left( {n + m}\right) }$ , where the element ${a}_{ij}$ is positive if there exists an edge from node $j$ to $i$ within the set $\mathcal{E}$ , and zero otherwise. Assuming leaders do not have adjacent nodes, implying that leaders solely disseminate information to followers, the Laplacian matrix $\mathcal{L}$ for the network of agents is derived as $\mathcal{L} = \mathcal{D} - \mathcal{A}$ . The degree matrix, denoted by $\mathcal{D}$ , is a diagonal matrix with elements ${d}_{i}$ on the diagonal, where ${d}_{i}$ is the sum of the adjacency matrix elements in the $i$ -th row, calculated as ${d}_{i} = \mathop{\sum }\limits_{{k = 1}}^{{n + m}}{a}_{ik}$ .
+
+Based on the aforementioned definitions, the Laplacian matrix is constructed as follows:
+
+$$
+\mathcal{L} = \left\lbrack \begin{matrix} {\mathbf{0}}_{m \times m} & {\mathbf{0}}_{m \times n} \\ {\mathcal{L}}_{L} & {\mathcal{L}}_{F} \end{matrix}\right\rbrack , \tag{1}
+$$
+
+where the sub-Laplacian matrix specific to the follower agents is denoted as ${\mathcal{L}}_{F} \in {\mathcal{R}}^{n \times n}$ , and the sub-Laplacian matrix that captures the interactions between leader and follower agents is represented by ${\mathcal{L}}_{L} \in {\mathcal{R}}^{n \times m}$ . The elements of ${\mathcal{L}}_{F}$ , denoted as $\left\lbrack {l}_{ij}\right\rbrack$ , are defined such that when indices match, ${l}_{ij}$ equals the sum of the adjacency matrix entries ${a}_{ip}$ for all $p$ in the set of nodes $\mathcal{V}$ , and when indices differ, ${l}_{ij}$ is the negation of the corresponding adjacency entry ${a}_{ij}$ . Mathematically, this is expressed as:
+
+$$
+{l}_{ij} = \left\{ \begin{array}{ll} \mathop{\sum }\limits_{{p = 1}}^{{m + n}}{a}_{ip}, & \text{ if }i = j, \\ - {a}_{ij}, & \text{ otherwise. } \end{array}\right.
+$$
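As a concrete sketch, the degree matrix, the Laplacian, and the sub-blocks $\mathcal{L}_F$ and $\mathcal{L}_L$ can be assembled in plain Python. The 2-leader/2-follower topology below is a hypothetical example (not the paper's Fig. 1); note that every follower row of $\mathcal{L}$ sums to zero, which in aggregate is the identity $-\mathcal{L}_F^{-1}\mathcal{L}_L\mathbf{1}_m = \mathbf{1}_n$.

```python
# Sketch: building L, L_F, L_L from an adjacency matrix for a network with
# m leaders (indices 0..m-1) and n followers (indices m..m+n-1).
# This 4-node topology is a hypothetical example, not the paper's Fig. 1.

m, n = 2, 2  # leaders, followers
N = m + n

# a[i][j] > 0 iff there is an edge from node j to node i; leader rows are zero
# because leaders receive no information.
a = [
    [0, 0, 0, 0],   # leader 1 receives nothing
    [0, 0, 0, 0],   # leader 2 receives nothing
    [1, 0, 0, 1],   # follower 1 hears leader 1 and follower 2
    [0, 1, 1, 0],   # follower 2 hears leader 2 and follower 1
]

d = [sum(a[i]) for i in range(N)]                      # in-degrees d_i
L = [[(d[i] if i == j else 0) - a[i][j] for j in range(N)] for i in range(N)]

# Follower sub-blocks: L_F couples followers, L_L couples followers to leaders.
L_F = [[L[i][j] for j in range(m, N)] for i in range(m, N)]
L_L = [[L[i][j] for j in range(m)] for i in range(m, N)]

# Each follower row of L sums to zero: L_F 1_n + L_L 1_m = 0.
for i in range(n):
    assert sum(L_F[i]) + sum(L_L[i]) == 0
print(L_F, L_L)
```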
+
+The subsequent assumption about the communication framework is established to guarantee the feasibility of containment control within the networked agent systems.
+
+Assumption 1: For each follower, there exists at least one leader from which a directed path leads to that follower.
+
+Definition 1 ([21]): Let ${Z}_{n}$ be the collection of all $n \times n$ square matrices with non-positive off-diagonal elements, denoted as ${Z}_{n} \subset {\mathcal{R}}^{n \times n}$ . A matrix $Y$ is classified as a nonsingular M-matrix if it belongs to ${Z}_{n}$ and all its eigenvalues possess positive real parts.
+
+Lemma 1 ([4]): Under Assumption 1, it is established that the matrix ${\mathcal{L}}_{F}$ qualifies as a nonsingular M-matrix. Furthermore, it holds that $- {\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{\mathbf{1}}_{m} = {\mathbf{1}}_{n}$ , and every component of $- {\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}$ is nonnegative.
+
+Definition 2 ([22]): Let $\Lambda$ be a subset of ${\mathcal{R}}^{n}$ . If for any ${z}_{1},{z}_{2} \in \Lambda$ and a scalar $0 < \gamma < 1$ , the linear combination $\left( {1 - \gamma }\right) {z}_{1} + \gamma {z}_{2}$ also belongs to $\Lambda$ , then $\Lambda$ is deemed a convex set. Given a vector $\chi$ with elements ${\chi }_{i}$ , the convex hull of $\chi$ , denoted as $\operatorname{Co}\left( \chi \right)$ , is the set of all vectors that can be expressed as $\mathop{\sum }\limits_{{i = 1}}^{n}{\gamma }_{i}{\chi }_{i}$ , where each ${\gamma }_{i} \geq 0$ and the sum $\mathop{\sum }\limits_{{i = 1}}^{n}{\gamma }_{i} = 1$ .
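A minimal numeric illustration of Definition 2, assuming a hypothetical rectangular leader arrangement similar to the one used later in the simulation section:

```python
# Sketch of Definition 2: a point lies in Co(chi) iff it is a convex
# combination sum gamma_i * chi_i with gamma_i >= 0 and sum gamma_i = 1.

def convex_combination(points, gammas):
    assert all(g >= 0 for g in gammas) and abs(sum(gammas) - 1.0) < 1e-12
    dim = len(points[0])
    return tuple(sum(g * p[k] for g, p in zip(gammas, points)) for k in range(dim))

leaders = [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0), (0.0, 5.0)]  # example hull
print(convex_combination(leaders, [0.25, 0.25, 0.25, 0.25]))  # the centroid
```

Containment control (Definition 3 below) asks each follower to converge to such a combination of the leader states.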
+
+## B. Time-varying transformation
+
+The objective of privacy-preserving containment control is to guide the followers into the convex hull spanned by the leaders without revealing the initial states of the participating agents. To this end, the paper integrates a dynamic, time-varying transformation into the traditional containment control paradigm. This transformation enables each agent to modify its state according to an evolving function before sharing information with its neighbors. The employed transformation function is continuously updated over time and characterized as
+
+$$
+\Lambda : {\mathcal{R}}^{ + } \times {\mathcal{R}}^{h} \times {\mathcal{R}}^{d} \rightarrow {\mathcal{R}}^{h} \tag{2}
+$$
+
+$$
+\left( {t, x, m}\right) \mapsto y\left( t\right) = \Lambda \left( {t, x\left( t\right) , m}\right) ,
+$$
+
+where $x = {\left\lbrack {x}_{1},\ldots ,{x}_{h}\right\rbrack }^{T} \in {\mathcal{R}}^{h}$ is the agent's true state, $y = {\left\lbrack {y}_{1},\ldots ,{y}_{h}\right\rbrack }^{T} \in {\mathcal{R}}^{h}$ is the state output after the time-varying transformation (both states have equal dimensions), and the parameter set $m \in {\mathcal{R}}^{d}$ is the key of the time-varying transformation. The state output after the time-varying transformation is uniformly referred to as the hidden state in this paper. It is postulated that there exists a common system $\dot{x} = f\left( x\right)$, whose dynamics after applying the time-varying transformation can be expressed as $\dot{x} = f\left( y\right)$ with $y = \Lambda \left( {t, x, m}\right)$. If $\left| {\Lambda \left( {t, x, m}\right) - x\left( t\right) }\right|$ approaches zero under the given key $m$, the transformation is referred to as a finite time-varying transformation, and the following condition holds
+
+$$
+\left\{ \begin{array}{l} \mathop{\lim }\limits_{{t \rightarrow \Omega }}\Lambda \left( {t, x\left( t\right) , m}\right) = x\left( t\right) , \\ \Lambda \left( {t, x\left( t\right) , m}\right) = x\left( t\right) , t \in \lbrack \Omega ,\infty ), \end{array}\right.
+$$
+
+where $\Omega$ denotes a finite time constant; the condition states that the hidden state converges to the true state within finite time and coincides with it thereafter. The value of $\Omega$ is primarily determined by the parameters in the key $m$.
+
+## C. Containment control problem description
+
+In this paper, we focus on a single-integrator networked agent system. The dynamics of the follower agents are characterized by the following equation:
+
+$$
+{\dot{x}}_{i}\left( t\right) = {u}_{i}\left( t\right) , i \in {\mathcal{V}}_{F}, \tag{3}
+$$
+
+where ${x}_{i}\left( t\right)$ and ${u}_{i}\left( t\right)$ denote the position and control input of the $i$-th follower agent, respectively.
+
+Additionally, the dynamics of the leader agents are governed by the following equation:
+
+$$
+{\dot{x}}_{i}\left( t\right) = 0, i \in {\mathcal{V}}_{L}, \tag{4}
+$$
+
+where ${x}_{i}\left( t\right)$ denotes the position of the $i$-th leader agent. These dynamics mean that the leader agents' positions are stationary.
+
+Definition 3: Consider a single-integrator networked agent system comprising $m$ leader agents and $n$ follower agents. Predefined-time containment control requires that the position states of the followers converge to the convex hull defined by the leaders within a specified time $T$. Specifically, for any given initial condition, the convergence is characterized by the satisfaction of the following equation:
+
+$$
+\mathop{\lim }\limits_{{t \rightarrow T}}\left| {{x}_{i}\left( t\right) - \mathop{\sum }\limits_{{k = 1}}^{m}{\varepsilon }_{ik}{x}_{k}\left( t\right) }\right| = 0, \tag{5}
+$$
+
+where ${\varepsilon }_{ik} \in \mathcal{R},{\varepsilon }_{ik} \geq 0$ and $\mathop{\sum }\limits_{{k = 1}}^{m}{\varepsilon }_{ik} = 1, i \in {\mathcal{V}}_{F}, k \in {\mathcal{V}}_{L}$ .
+
+## III. MAIN RESULTS
+
+This section designs a decentralized finite time-varying transformation function to serve as a privacy mask and incorporates the event-triggered mechanism and predefined-time theory to enhance the performance of networked agent systems. The proposed containment controller jointly addresses privacy preservation, communication bandwidth constraints, and convergence speed.
+
+To safeguard the confidentiality of agents' initial state information, we introduce mutually independent functions into the process of information exchange among agents. Furthermore, the aforementioned time-varying function can be implemented as
+
+$$
+\left\{ \begin{array}{l} \mathop{\lim }\limits_{{t \rightarrow {T}_{i}}}{\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) , \\ {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) , t \in \lbrack {T}_{i},\infty ). \end{array}\right. \tag{6}
+$$
+
+According to the requirements of the finite-time varying function, the received information of follower agent $j$ from agent $i$ can be designed as
+
+$$
+\left\{ \begin{array}{ll} {\mathrm{R}}_{i}^{m}\left( t\right) = {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) & \\ {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) + {a}_{i}{t}^{2} + {b}_{i}t + {c}_{i}, & t \in \left\lbrack {0,{\Omega }_{i}}\right) \\ {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) , & t \in \left\lbrack {{\Omega }_{i},\infty }\right) \end{array}\right.
+$$
+
+where ${\Omega }_{i}$ satisfies
+
+$$
+\left\{ \begin{array}{l} {\Omega }_{i} = \frac{-{b}_{i} - \sqrt{{b}_{i}^{2} - 4{a}_{i}{c}_{i}}}{2{a}_{i}},\;{b}_{i} \geq 0,{c}_{i} \geq 0,\text{ if }{a}_{i} \in \left( { - \infty ,0}\right) , \\ {\Omega }_{i} = \frac{-{b}_{i} + \sqrt{{b}_{i}^{2} - 4{a}_{i}{c}_{i}}}{2{a}_{i}},\;{b}_{i} < 0,{c}_{i} < 0,\text{ if }{a}_{i} \in \left( {0,\infty }\right) , \end{array}\right.
+$$
+
+and ${a}_{i},{b}_{i},{c}_{i} \in \mathcal{R}$. Each agent has its own distinctive encoding key, denoted ${m}_{i} = \left\{ {{a}_{i},{b}_{i},{c}_{i}}\right\}$; individual encoding keys remain undisclosed to other agents.
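A minimal sketch of this quadratic mask, using agent 1's encoding key $(a_1, b_1, c_1) = (-5, 2, 3)$ from the simulation section; the sign of the root is chosen so that $\Omega_i$ is the positive zero of $a_i t^2 + b_i t + c_i$:

```python
import math

def omega(a, b, c):
    """Positive root of a*t**2 + b*t + c = 0: the time at which the mask
    offset vanishes and the hidden output equals the true state."""
    disc = math.sqrt(b * b - 4 * a * c)
    # the branch producing the positive root depends on the sign of a
    return (-b - disc) / (2 * a) if a < 0 else (-b + disc) / (2 * a)

def masked_output(t, x, key):
    """Hidden output R_i^m(t) = Lambda_i(t, x_i(t), m_i) for one agent."""
    a, b, c = key
    return x + (a * t * t + b * t + c if t < omega(a, b, c) else 0.0)

key = (-5.0, 2.0, 3.0)                 # agent 1's key from the simulation section
print(omega(*key))                     # mask expires at Omega_1 = 1.0 < T = 1.5
print(masked_output(0.0, -10.0, key))  # initial output is -7.0, not x(0) = -10
```

For $t \geq \Omega_1$ the offset is zero and neighbors receive the true state, matching the continuity requirement in (6).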
+
+Building upon the previously devised time-varying function and the hidden information acquired from neighboring agents, the predefined-time containment control input for the $i$-th agent can be expressed as follows
+
+$$
+\left\{ \begin{array}{l} {u}_{i}\left( t\right) = - \left( {\rho + \delta \frac{\dot{\mu }}{\mu }}\right) \mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{\mathrm{R}}_{i}^{m}\left( t\right) - {\mathrm{R}}_{j}^{m}\left( t\right) }\right) , \\ {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) + {a}_{i}{t}^{2} + {b}_{i}t + {c}_{i}, t \in \left\lbrack {0,{\Omega }_{i}}\right) , \\ {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) , t \in \left\lbrack {{\Omega }_{i},\infty }\right) , \end{array}\right. \tag{7}
+$$
+
+where $\rho > 0$ represents the control gain, $\delta \geq 1$ is a design parameter, and $\mu$ denotes a time-varying scaling function, which takes the form of
+
+$$
+\mu \left( t\right) = \left\{ \begin{matrix} {\left( \frac{T}{T - t}\right) }^{h}, & t \in \lbrack 0, T), \\ 0, & t \in \lbrack T,\infty ), \end{matrix}\right.
+$$
+
+where the real number $h$ satisfies $h > 2$.
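Since $\ln \mu = h\left(\ln T - \ln(T - t)\right)$ on $[0, T)$, the gain term satisfies $\dot{\mu}/\mu = h/(T - t)$, which grows without bound as $t \to T^-$. A minimal numeric sketch, assuming $h = 3$ (any $h > 2$ is admissible) and the simulation value $T = 1.5$:

```python
def mu(t, T=1.5, h=3.0):
    """Scaling function (T/(T-t))^h; this sketch covers only [0, T)."""
    assert 0.0 <= t < T
    return (T / (T - t)) ** h

def mu_ratio(t, T=1.5, h=3.0):
    """Closed form of mu_dot/mu used inside the gain K_rho."""
    return h / (T - t)

# mu(t)^{-1} -> 0 as t -> T^-, which is what drives the error bound to zero.
for t in (0.0, 0.75, 1.4, 1.49):
    print(t, 1.0 / mu(t), mu_ratio(t))
```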
+
+Considering the practical challenges encountered in networked agent systems, which frequently involve communication limitations, the incorporation of an event-triggered mechanism can considerably reduce the utilization of communication resources. In this paper, we integrate the event-triggered mechanism into the aforementioned controller.
+
+Assumption 2: When employing an event-triggered mechanism, every agent is presumed capable of actively monitoring its own state information in real time. Furthermore, agents disseminate state updates only upon fulfillment of the designed event-triggering condition.
+
+To ensure synchronization among all agents, we establish a triggering sequence denoted as $\left\{ {{t}_{1},{t}_{2},\ldots ,{t}_{k}}\right\}$ . This sequential arrangement guarantees that all agents update their controllers simultaneously at a unified triggering time. As a result, the control input (7) can be reformulated as
+
+$$
+{u}_{i}\left( t\right) = - \left( {\rho + \delta \frac{\dot{\mu }}{\mu }}\right) \mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{\mathrm{R}}_{i}^{m}\left( {t}_{k}\right) - {\mathrm{R}}_{j}^{m}\left( {t}_{k}\right) }\right) . \tag{8}
+$$
+
+For each agent, the state measurement error between the last triggered state and the current state is
+
+$$
+{e}_{i}^{m}\left( t\right) = {\mathrm{R}}_{i}^{m}\left( {t}_{k}\right) - {\mathrm{R}}_{i}^{m}\left( t\right) , t \in \left\lbrack {{t}_{k},{t}_{k + 1}}\right) . \tag{9}
+$$
+
+Substituting the state measurement error and the controller into the agent's dynamics yields
+
+$$
+{\dot{x}}_{i}\left( t\right) = - {\mathrm{K}}_{\rho }\mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{\mathrm{R}}_{i}^{m}\left( {t}_{k}\right) - {\mathrm{R}}_{j}^{m}\left( {t}_{k}\right) }\right)
+$$
+
+$$
+= - {\mathrm{K}}_{\rho }\mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{e}_{i}^{m}\left( t\right) + {\mathrm{R}}_{i}^{m}\left( t\right) - \left( {{e}_{j}^{m}\left( t\right) + {\mathrm{R}}_{j}^{m}\left( t\right) }\right) }\right)
+$$
+
+$$
+= - {\mathrm{K}}_{\rho }\mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{e}_{i}^{m}\left( t\right) - {e}_{j}^{m}\left( t\right) }\right)
+$$
+
+$$
+- {\mathrm{K}}_{\rho }\mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{\mathrm{R}}_{i}^{m}\left( t\right) - {\mathrm{R}}_{j}^{m}\left( t\right) }\right) ,
+$$
+
+where ${\mathrm{K}}_{\rho } = \rho + \delta \frac{\dot{\mu }}{\mu }$ , and its corresponding compact form can be represented as
+
+$$
+\dot{x}\left( t\right) = - {\mathrm{K}}_{\rho }\mathcal{L}{\mathrm{R}}^{m}\left( t\right) - {\mathrm{K}}_{\rho }\mathcal{L}{e}^{m}\left( t\right)
+$$
+
+$$
+= - {\mathrm{K}}_{\rho }\left( {{\mathcal{L}}_{F}\left( {{\mathrm{R}}_{F}^{m}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{L}\left( {{\mathrm{R}}_{L}^{m}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right) }\right) .
+$$
+
+where $x\left( t\right) = \operatorname{col}_{i}^{n + m}\left\lbrack {{x}_{i}\left( t\right) }\right\rbrack$, ${\mathrm{R}}_{F}^{m}\left( t\right) = \operatorname{col}_{i}^{n}\left\lbrack {{\mathrm{R}}_{Fi}^{m}\left( t\right) }\right\rbrack$, ${\mathrm{R}}_{L}^{m}\left( t\right) = \operatorname{col}_{i}^{m}\left\lbrack {{\mathrm{R}}_{Li}^{m}\left( t\right) }\right\rbrack$, ${e}_{L}^{m}\left( t\right) = \operatorname{col}_{i}^{m}\left\lbrack {{e}_{Li}^{m}\left( t\right) }\right\rbrack$, and ${e}_{F}^{m}\left( t\right) = \operatorname{col}_{i}^{n}\left\lbrack {{e}_{Fi}^{m}\left( t\right) }\right\rbrack$. Besides, let $A = \operatorname{col}_{i}^{n + m}\left\lbrack {a}_{i}\right\rbrack$, $B = \operatorname{col}_{i}^{n + m}\left\lbrack {b}_{i}\right\rbrack$, and $C = \operatorname{col}_{i}^{n + m}\left\lbrack {c}_{i}\right\rbrack$.
+
+Accordingly, the whole closed-loop error system is
+
+$$
+\left\{ \begin{array}{l} \dot{x}\left( t\right) = - {\mathrm{K}}_{\rho }\mathcal{L}{\mathrm{R}}^{m}\left( t\right) - {\mathrm{K}}_{\rho }\mathcal{L}{e}^{m}\left( t\right) \\ {\mathrm{R}}^{m}\left( t\right) = x\left( t\right) + m\left( t\right) \end{array}\right. \tag{10}
+$$
+
+where
+
+$$
+m\left( t\right) = \left\{ \begin{array}{l} A{t}^{2} + {Bt} + C, t \in \left\lbrack {0,{T}^{1}}\right) \\ {A}_{{m}_{1}}{t}^{2} + {B}_{{m}_{1}}t + C, t \in \left\lbrack {{T}^{1},{T}^{2}}\right) \\ \vdots \\ {A}_{{m}_{1}\ldots {m}_{N - 1}}{t}^{2} + {B}_{{m}_{1}\ldots {m}_{N - 1}}t + C, t \in \left\lbrack {{T}^{N - 1},{T}^{N}}\right) \\ 0, t \in \left\lbrack {{T}^{N},\infty }\right) \end{array}\right.
+$$
+
+To address the predefined time privacy-preserving containment control under the event-triggered mechanism, we design the event-triggering condition (ETC) for the networked agent systems as
+
+$$
+{t}_{k + 1} = \inf \left\{ {t > {t}_{k} : \begin{Vmatrix}{{e}^{m}\left( t\right) }\end{Vmatrix} \geq \left( {1 - \varepsilon }\right) \frac{{\mathrm{K}}_{\rho }^{\lambda }}{{\mathrm{K}}_{\rho }}\frac{\parallel \varpi \left( t\right) \parallel }{\parallel \mathcal{L}\parallel }}\right\} . \tag{11}
+$$
+
+where ${\mathrm{K}}_{\rho } = \rho + \delta \frac{\dot{\mu }}{\mu }$, ${\mathrm{K}}_{\rho }^{\lambda } = \rho {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) + \delta \frac{\dot{\mu }}{\mu }$, $\varepsilon \in \left( {0,1}\right)$, and ${\lambda }_{2}\left( {\mathcal{L}}_{F}\right)$ is the second smallest eigenvalue of the sub-Laplacian matrix ${\mathcal{L}}_{F}$. Upon the occurrence of a triggering event, all agents discard their previously sampled state, sample their current state to update their controllers, and transmit the newly sampled state to their neighboring agents. Throughout the inter-event period, the control inputs remain constant until the next triggering instant, at which the measurement error reaches the threshold in (11).
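Condition (11) reduces to a scalar threshold test on norms. The sketch below uses hypothetical placeholder gains ($\rho$, $\delta$, $\lambda_2(\mathcal{L}_F)$, $\Vert\mathcal{L}\Vert$) together with the closed form $\dot{\mu}/\mu = h/(T - t)$:

```python
def should_trigger(e_norm, w_norm, t, rho=2.0, delta=1.0, lam2=1.0,
                   L_norm=2.0, eps=0.5, T=1.5, h=3.0):
    """ETC (11): trigger when ||e^m|| reaches
    (1 - eps) * K_rho_lambda / K_rho * ||varpi|| / ||L||.
    All numeric defaults are hypothetical placeholders."""
    mu_ratio = h / (T - t)                     # mu_dot / mu
    K_rho = rho + delta * mu_ratio
    K_rho_lam = rho * lam2 + delta * mu_ratio
    threshold = (1.0 - eps) * (K_rho_lam / K_rho) * (w_norm / L_norm)
    return e_norm >= threshold

# Just after a trigger the measurement error is zero; it then grows until
# the boundary threshold is reached and a new sample is broadcast.
print(should_trigger(0.0, 1.0, 0.0))   # no trigger right after sampling
print(should_trigger(0.3, 1.0, 0.0))   # error has reached the boundary
```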
+
+Theorem 1: Under the event-triggering condition (11) and control input (8), predefined-time privacy-preserving containment control for the networked agent system with graph $\mathcal{G}$ is achieved, provided the ETC parameter satisfies $\varepsilon \in \left( {0,1}\right)$.
+
+Proof: The proof of Theorem 1 consists of a convergence analysis and a privacy analysis.
+
+(I) Convergence analysis: The vector $x\left( t\right)$ can be partitioned into sub-vectors ${x}_{F}\left( t\right)$ and ${x}_{L}\left( t\right)$. Based on Definition 3, we define the containment error as $\varpi \left( t\right) = {x}_{F}\left( t\right) - \left( {-{\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{x}_{L}\left( t\right) }\right)$, and the Lyapunov function is adopted as
+
+$$
+V\left( t\right) = \varpi {\left( t\right) }^{T}\varpi \left( t\right) . \tag{12}
+$$
+
+Noting the leader agents' dynamics (4), it follows that
+
+$$
+\dot{\varpi }\left( t\right) = {\dot{x}}_{F}\left( t\right) - \left( {-{\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{\dot{x}}_{L}\left( t\right) }\right) = {\dot{x}}_{F}\left( t\right) .
+$$
+
+Taking the derivative of the Lyapunov function $V\left( t\right)$ , one obtains the following expression
+
+$$
+\dot{V}\left( t\right) = \varpi {\left( t\right) }^{T}\dot{\varpi }\left( t\right) = \varpi {\left( t\right) }^{T}{\dot{x}}_{F}\left( t\right)
+$$
+
+$$
+= \varpi {\left( t\right) }^{T}\left( {-{\mathrm{K}}_{\rho }\left( {{\mathcal{L}}_{F}\left( {{\mathrm{R}}_{F}^{m}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{L}\left( {{\mathrm{R}}_{L}^{m}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right) }\right) }\right)
+$$
+
+$$
+= - \rho \varpi {\left( t\right) }^{T}\left( {{\mathcal{L}}_{F}\left( {{\mathrm{R}}_{F}^{m}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{L}\left( {{\mathrm{R}}_{L}^{m}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right) }\right)
+$$
+
+$$
+- \delta \frac{\dot{\mu }}{\mu }\varpi {\left( t\right) }^{T}\left( {{\mathcal{L}}_{F}\left( {{\mathrm{R}}_{F}^{m}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{L}\left( {{\mathrm{R}}_{L}^{m}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right) }\right) .
+$$
+
+To satisfy the privacy-preserving requirement of the time-varying transformation function, it is essential to ensure that ${T}^{N}$, the moment at which the last time-varying function converges to its corresponding true state, is less than $T$. Notably, the value of $m\left( t\right)$ decreases monotonically as $t$ increases on the interval $\left\lbrack {0,{T}^{N}}\right)$, and it equals zero for $t \in \left\lbrack {{T}^{N}, T}\right)$. This further yields $\mathop{\lim }\limits_{{t \rightarrow {T}^{N}}}{\mathrm{R}}_{F}^{m}\left( t\right) = {x}_{F}\left( t\right)$ and $\mathop{\lim }\limits_{{t \rightarrow {T}^{N}}}{\mathrm{R}}_{L}^{m}\left( t\right) = {x}_{L}\left( t\right)$. Based on Lemma 1 in [11], it follows that
+
+$$
+{\mathcal{L}}_{F}\left( {{x}_{F}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{L}\left( {{x}_{L}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right)
+$$
+
+$$
+= {\mathcal{L}}_{F}\left( {\left( {{x}_{F}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}\left( {{x}_{L}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right) }\right)
+$$
+
+$$
+= {\mathcal{L}}_{F}\left( {{x}_{F}\left( t\right) + {\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{x}_{L}\left( t\right) }\right) + {\mathcal{L}}_{F}{e}_{F}^{m}\left( t\right) + {\mathcal{L}}_{L}{e}_{L}^{m}\left( t\right)
+$$
+
+$$
+= {\mathcal{L}}_{F}\varpi \left( t\right) + \mathcal{L}{e}^{m}\left( t\right) .
+$$
+
+It is noted that ${\mathcal{L}}_{F} \in {\mathcal{R}}^{n \times n}$ denotes the sub-Laplacian matrix among follower agents, so $\varpi {\left( t\right) }^{T}{\mathcal{L}}_{F}\varpi \left( t\right) \geq {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) \varpi {\left( t\right) }^{T}\varpi \left( t\right)$, which yields
+
+$$
+\dot{V}\left( t\right) \leq - {\mathrm{K}}_{\rho }^{\lambda }V\left( t\right) - {\mathrm{K}}_{\rho }\varpi {\left( t\right) }^{T}\left( {{\mathcal{L}}_{F}{e}_{F}^{m}\left( t\right) + {\mathcal{L}}_{L}{e}_{L}^{m}\left( t\right) }\right)
+$$
+
+$$
+= - \varepsilon {\mathrm{K}}_{\rho }^{\lambda }V\left( t\right) - \left( {1 - \varepsilon }\right) {\mathrm{K}}_{\rho }^{\lambda }V\left( t\right) - {\mathrm{K}}_{\rho }\varpi {\left( t\right) }^{T}\mathcal{L}{e}^{m}\left( t\right)
+$$
+
+$$
+\leq - \varepsilon {\mathrm{K}}_{\rho }^{\lambda }V\left( t\right) - \left( {1 - \varepsilon }\right) {\mathrm{K}}_{\rho }^{\lambda }\parallel \varpi {\parallel }^{2} + {\mathrm{K}}_{\rho }\parallel \varpi \parallel \begin{Vmatrix}{\mathcal{L}{e}^{m}}\end{Vmatrix}.
+$$
+
+Considering the designed event-triggering condition (11) and the condition $\varepsilon \in \left( {0,1}\right)$, one concludes that
+
+$$
+{\mathrm{K}}_{\rho }\begin{Vmatrix}{\mathcal{L}{e}^{m}\left( t\right) }\end{Vmatrix} \leq \left( {1 - \varepsilon }\right) {\mathrm{K}}_{\rho }^{\lambda }\parallel \varpi \left( t\right) \parallel .
+$$
+
+Accordingly, since $\delta \geq 1$, it follows that
+
+$$
+\dot{V}\left( t\right) \leq - \left( {\rho {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) + \frac{\dot{\mu }}{\mu }}\right) \varpi {\left( t\right) }^{T}\varpi \left( t\right) = - \rho {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) V - \frac{\dot{\mu }}{\mu }V.
+$$
+
+
+
+Fig. 1. The communication topology among twelve agents.
+
+According to Lemma 1 in [11], one has
+
+$$
+V\left( t\right) \leq \mu {\left( t\right) }^{-2}{e}^{-\rho {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) \left( {t - {T}^{N}}\right) }V\left( {T}^{N}\right) . \tag{13}
+$$
+
+Then $\parallel \varpi \left( t\right) \parallel \leq \mu {\left( t\right) }^{-1}{e}^{-\rho {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) \left( {t - {T}^{N}}\right) }\begin{Vmatrix}{\varpi \left( {T}^{N}\right) }\end{Vmatrix}$. Note that $\mathop{\lim }\limits_{{t \rightarrow {T}^{ - }}}\mu {\left( t\right) }^{-1} = 0$, which yields $\mathop{\lim }\limits_{{t \rightarrow {T}^{ - }}}\parallel \varpi \left( t\right) \parallel = 0$. That is, when $t \rightarrow {T}^{ - }$, the condition ${x}_{F}\left( t\right) - \left( {-{\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{x}_{L}\left( t\right) }\right) = 0$ holds. Based on equation (46) of [19] and Definitions 2-3, $- {\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{x}_{L}\left( t\right)$ is the convex-hull signal formed by the leaders; hence $\varpi \left( t\right) = 0$ implies that all followers converge within the convex hull formed by the leaders. Therefore, containment control of the networked agent system is achieved within the predefined time $T$. Since the finite time-varying transformation is applied only on the interval $\lbrack 0, T)$, the predefined-time containment problem reduces to the general case discussed in [11] for $t \in \lbrack T,\infty )$; interested readers may refer to Theorem 1 in [11] for a detailed proof.
+
+(II) Privacy analysis: Consider a scenario in which the dynamics $f\left( \cdot \right)$ of all agents are publicly known and each agent has access to the hidden output states ${\mathrm{R}}_{i}^{m}\left( t\right)$ of its neighboring agents, while the true states ${x}_{i}\left( t\right)$ and the encoding keys $\left\{ {{a}_{i},{b}_{i},{c}_{i}}\right\}$ are private to each agent. An honest-but-curious agent has access to the unsigned graph $\mathcal{G}$, its own state and the set of its neighboring agents, and the hidden states of itself and its neighbors. After the finite time-varying transformation conceals agent $i$'s initial state, the resulting hidden output ${\mathrm{R}}_{i}^{m}\left( t\right)$ bears no relation to the true initial value ${x}_{i}\left( 0\right)$. Consequently, no information set acquired by an honest-but-curious agent suffices to determine agent $i$'s true initial state, nor can the initial state be reconstructed using the approach presented in [23]. By the same reasoning, external eavesdroppers are likewise unable to recover the true initial state. Hence, the initial state remains confidential to all parties involved.
+
+## IV. SIMULATION
+
+In this section, numerical simulations are conducted to verify the effectiveness of the theoretical analysis. The simulated networked agent system comprises 12 agents: six followers and six leaders. Fig. 1 displays the communication topology among the agents. The simulations are performed in 2-D space. The initial position states of all agents are set as ${x}^{1}\left( 0\right) = {\left\lbrack -{10},0,{10},{10},0, - {10}, - {30}, - 5,{20},{30},5, - {15}\right\rbrack }^{T}$ and ${x}^{2}\left( 0\right) = {\left\lbrack 5,5,5, - 5, - 5, - 5,5,{20},{25}, - {10}, - {15}, - {20}\right\rbrack }^{T}$. The parameter $\varepsilon$ is set to 0.5, and the predefined time is $T = {1.5}\mathrm{\;s}$. The encoding keys are selected as
+
+$$
+A = {\left\lbrack -5, - 9, - 5,8, - 3,6, - 4,5,6, - 4,5, - 3\right\rbrack }^{T},
+$$
+
+$$
+B = {\left\lbrack 2,4,3, - 4,1, - 3,2, - 1, - 3,2, - 1,1\right\rbrack }^{T},
+$$
+
+$$
+C = {\left\lbrack 3,4,1, - 3,2, - 2,1, - 3, - 2,1, - 3,2\right\rbrack }^{T}.
+$$
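To make the containment mechanism itself concrete, the following simplified sketch integrates the follower dynamics on a hypothetical 2-leader/2-follower topology (the paper's six-and-six topology of Fig. 1 is not reproduced here), using a plain gain $\rho$ and continuous communication; the mask, the event trigger, and the predefined-time gain are omitted for numerical simplicity, so this illustrates asymptotic containment only:

```python
# Simplified containment sketch: static leaders, single-integrator followers,
# Euler integration of x_i' = u_i with u_i = -rho * sum_j a_ij (x_i - x_j).
# Topology and gains are hypothetical placeholders.

m, n = 2, 2
x = [0.0, 10.0, -30.0, 25.0]        # leaders first, then followers (1-D states)
a = [
    [0, 0, 0, 0],                   # leaders receive nothing
    [0, 0, 0, 0],
    [1, 0, 0, 1],                   # follower 1 hears leader 1, follower 2
    [0, 1, 1, 0],                   # follower 2 hears leader 2, follower 1
]
rho, dt = 1.0, 0.01

for _ in range(2000):               # integrate to t = 20 s
    u = [-rho * sum(a[i][j] * (x[i] - x[j]) for j in range(m + n))
         for i in range(m + n)]     # leaders get u = 0 since their rows are 0
    x = [xi + dt * ui for xi, ui in zip(x, u)]

print(x[2], x[3])
# Both followers end inside the leaders' hull [0, 10].
assert min(x[:m]) - 1e-6 <= x[2] <= max(x[:m]) + 1e-6
assert min(x[:m]) - 1e-6 <= x[3] <= max(x[:m]) + 1e-6
```

The followers settle at $-\mathcal{L}_F^{-1}\mathcal{L}_L x_L = (10/3, 20/3)$, strictly inside the leaders' hull $[0, 10]$, as required by Definition 3.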
+
+
+
+Fig. 2. The true and masked states of all agents.
+
+
+
+Fig. 3. The control input of the follower agents.
+
+The simulation results are depicted in Figs. 2-5. The trajectory of agents in the ${x}^{1}$ direction is illustrated in Fig. 2, with the subfigure highlighting the masked trajectories of all agents. This indicates that the proposed method effectively preserves the privacy of the agents' initial states and achieves predefined-time convergence within 1.5 s. Fig. 3 presents the control input trajectories of all follower agents, where the abrupt changes in the trajectories are attributed to the event-triggered mechanism. Fig. 4 demonstrates the fulfillment of the event-triggering condition: when the designed boundary threshold is reached, the agents' states are sampled and updated. Fig. 5 shows that all followers successfully move from their initial positions into the convex hull formed by the fixed leaders, achieving privacy-preserving event-triggered predefined-time containment control for the networked agent system.
+
+
+
+Fig. 4. The trajectory of the state measurement error and boundary threshold.
+
+
+
+Fig. 5. The trajectory of all agents in the 2-D plane under designed containment control input. (Square markers represent the followers, and circular markers represent the leaders. Leaders form a rectangular convex hull.)
+
+## V. CONCLUSION
+
+This paper has addressed the privacy-preserving event-triggered predefined-time containment control problem for networked agent systems. A novel containment control scheme has been developed, effectively integrating privacy protection with event-triggered mechanisms. This integration has optimized network efficiency by minimizing unnecessary data transmission while ensuring robust containment within a specified time frame. The proposed control scheme has successfully ensured the confidentiality of agents' information through output masking, thereby maintaining both privacy and control accuracy. The effectiveness of the proposed scheme has been verified through simulation results. It is important to note that this study has focused on static leaders, and future research will extend the investigation to address containment control problems under dynamic leaders.
+
+## REFERENCES
+
+[1] Z. Wang, H. Li, J. Liu, T. Zhang, X. Ma, S. Xie, and J. Luo, "Static group-bipartite consensus in networked robot systems with integral action," International Journal of Advanced Robotic Systems, vol. 20, no. 3, p. 17298806231177148, 2023.
+
+[2] C. Feng, Z. Xu, X. Zhu, P. V. Klaine, and L. Zhang, "Wireless distributed consensus in vehicle to vehicle networks for autonomous driving," IEEE Transactions on Vehicular Technology, vol. 72, no. 6, pp. 8061-8073, 2023.
+
+[3] E. Arabi, T. Yucelen, and W. M. Haddad, "Mitigating the effects of sensor uncertainties in networked multi-agent systems," Journal of
+
+Dynamic Systems, Measurement, and Control, vol. 139, no. 4, p. 041003, 2017.
+
+[4] H. Zhou and S. Tong, "Adaptive neural network event-triggered output-feedback containment control for nonlinear MASs with input quantization," IEEE Transactions on Cybernetics, vol. 53, no. 11, pp. 7406-7416, 2023.
+
+[5] X. Wang, N. Pang, Y. Xu, T. Huang, and J. Kurths, "On state-constrained containment control for nonlinear multiagent systems using event-triggered input," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2024, doi: 10.1109/TSMC.2023.3345365.
+
+[6] X. Shao, H. Liu, W. Zhang, J. Zhao, and Q. Zhang, "Path-driven formation-containment control of multiple UAVs: A path-following framework," Aerospace Science and Technology, vol. 135, p. 108168, 2023.
+
+[7] X. Wang, R. Xu, T. Huang, and J. Kurths, "Event-triggered adaptive containment control for heterogeneous stochastic nonlinear multiagent systems," IEEE Transactions on Neural Networks and Learning Systems, 2023, doi: 10.1109/TNNLS.2022.3230508.
+
+[8] S. Tong and H. Zhou, "Finite-time adaptive fuzzy event-triggered output-feedback containment control for nonlinear multiagent systems with input saturation," IEEE Transactions on Fuzzy Systems, vol. 31, no. 9, pp. 3135-3147, 2023.
+
+[9] Z. Zhu, Y. Yin, F. Wang, Z. Liu, and Z. Chen, "Practical robust fixed-time containment control for multi-agent systems under actuator faults," Expert Systems with Applications, vol. 245, p. 123152, 2024.
+
+[10] T. Yang and J. Dong, "Funnel-based predefined-time containment control of heterogeneous multiagent systems with sensor and actuator faults," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2023, doi: 10.1109/TSMC.2023.3330942.
+
+[11] Y. Wang, Y. Song, D. J. Hill, and M. Krstic, "Prescribed-time consensus and containment control of networked multiagent systems," IEEE Transactions on Cybernetics, vol. 49, no. 4, pp. 1138-1147, 2018.
+
+[12] X. Gong and X. Li, "Fault-tolerant practical prescribed-time formation-containment control of multi-agent systems on directed graphs," IEEE Transactions on Network Science and Engineering, 2023, doi: 10.1109/TNSE.2023.3298719.
+
+[13] S. Chang, C. Wang, and X. Luo, "Predefined-time bipartite containment control of multi-agent systems with novel super-twisting extended state observer," Information Sciences, p. 120952, 2024.
+
+[14] X. Chen, L. Huang, L. He, S. Dey, and L. Shi, "A differentially private method for distributed optimization in directed networks via state decomposition," IEEE Transactions on Control of Network Systems, vol. 10, no. 4, pp. 2165-2177, 2023.
+
+[15] C. Gao, D. Zhao, J. Li, and H. Lin, "Private bipartite consensus control for multi-agent systems: A hierarchical differential privacy scheme," Information Fusion, vol. 105, p. 102259, 2024.
+
+[16] L. Liang, R. Ding, S. Liu, and R. Su, "Event-triggered privacy preserving consensus control with edge-based additive noise," IEEE Transactions on Automatic Control, 2024, doi: 10.1109/TAC.2024.3390574.
+
+[17] Y. Gong, L. Cao, Y. Pan, and Q. Lu, "Adaptive containment control of nonlinear multi-agent systems about privacy preservation with multiple attacks," International Journal of Robust and Nonlinear Control, vol. 33, no. 11, pp. 6103-6120, 2023.
+
+[18] M. Zhang, Y. Sun, H. Liu, X. Yi, and D. Ding, "Event-triggered formation-containment control for multi-agent systems based on sliding mode control approaches," Neurocomputing, vol. 562, p. 126905, 2023.
+
+[19] Y. Liu, X. Xie, J. Sun, and D. Yang, "Event-triggered privacy preservation consensus control and containment control for nonlinear MASs: An output mask approach," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2024, doi: 10.1109/TSMC.2024.3379375.
+
+[20] J. Zhang, J. Lu, and J. Lou, "Privacy-preserving average consensus via finite time-varying transformation," IEEE Transactions on Network Science and Engineering, vol. 9, no. 3, pp. 1756-1764, 2022.
+
+[21] A. Berman and R. J. Plemmons, Nonnegative matrices in the mathematical sciences. SIAM, 1994.
+
+[22] R. T. Rockafellar, Convex analysis. Princeton University Press, 1970.
+
+[23] J. Yue, K. Qin, M. Shi, B. Jiang, W. Li, and L. Shi, "Event-trigger-based finite-time privacy-preserving formation control for multi-UAV system," Drones, vol. 7, no. 4, p. 235, 2023.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/3dNL0Q0j8f/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/3dNL0Q0j8f/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..c4a1e208008902e0305d17d4d43ca7bf066898b2
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/3dNL0Q0j8f/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,345 @@
+§ PRIVACY-PRESERVING EVENT-TRIGGERED PREDEFINED TIME CONTAINMENT CONTROL FOR NETWORKED AGENT SYSTEMS
+
+Weihao ${\mathrm{{Li}}}^{1,2,3, \dagger }$ , Jiangfeng ${\mathrm{{Yue}}}^{1,2,3, \dagger }$ , Mengji ${\mathrm{{Shi}}}^{1,2,3, * }$ , Boxian ${\mathrm{{Lin}}}^{1,2,3}$ , Kaiyu ${\mathrm{{Qin}}}^{1,2,3}$
+
+${}^{1}$ School of Aeronautics and Astronautics, University of Electronic Science and Technology of China, Chengdu, China.
+
+${}^{2}$ Aircraft Swarm Intelligent Sensing and Cooperative Control Key Laboratory of Sichuan Province, Chengdu, China.
+
+${}^{3}$ National Laboratory on Adaptive Optics, Chengdu, 610209, China.
+
+Email: maangat@126.com
+
+Abstract-This paper addresses the privacy-preserving event-triggered predefined-time containment control problem for networked agent systems. A novel containment control scheme is developed that integrates privacy protection with event-triggered mechanisms, optimizing network efficiency by minimizing unnecessary data transmission while ensuring robust containment within a specified time frame. The proposed control scheme ensures the confidentiality of agents' information through output masking, thereby maintaining both privacy and control accuracy. Furthermore, it provides a distinct advantage over traditional finite-time and fixed-time control methods by guaranteeing convergence to the desired state within a predefined time, regardless of initial conditions. Finally, simulation results are provided to verify the effectiveness of the proposed containment control scheme.
+
+Index Terms-Containment Control; Privacy-preserving; Predefined Time; Event-triggered Control; Networked Agent Systems.
+
+§ I. INTRODUCTION
+
+Networked agent systems have garnered significant attention across various fields due to their broad range of applications, including robotics [1], autonomous vehicles [2], and distributed sensor networks [3]. The cooperative control of networked agent systems involves designing strategies that enable agents to work together effectively to achieve shared objectives. A prominent approach within cooperative control is containment control [4], [5], which aims to ensure that a group of agents (followers) remains within a specified region or adheres to a particular trajectory, while another group of agents (leaders) directs their behavior. Containment control is particularly crucial in scenarios requiring strict spatial or operational constraint adherence. For instance, in a formation flying scenario, containment control can ensure that a group of drones maintains a specific formation while another set of drones guides their collective movement [6].
+
+Convergence speed is a critical performance metric in the containment control of networked agent systems. Current research explores several approaches to achieving convergence, including asymptotic convergence [7], finite-time convergence [8], and fixed-time convergence [9]. Asymptotic convergence guarantees that the system will eventually converge to the desired state over time, although the convergence rate may not be specified. Finite-time convergence ensures that the system reaches the desired state within a finite period, though the exact time depends on system parameters and states. Fixed-time convergence provides a guarantee of convergence within a predetermined time, irrespective of initial conditions, thereby offering more predictability in performance. However, the convergence time in both finite-time and fixed-time approaches is influenced by system parameters and states. To address this, researchers have developed predefined time control schemes that enable the specification of a desired convergence time [10], [11]. The primary advantages of predefined-time control include the ability to guarantee convergence within a specified time frame, thereby providing more predictable and controllable system behavior, and enhancing system performance by setting precise deadlines for achieving the desired state.
+
+The existing literature [10]-[13] on predefined-time convergence in networked agent systems generally overlooks the issue of information privacy during transmission. However, privacy protection is of paramount importance in containment control, where safeguarding the confidentiality of agents' information is critical. Several methods for privacy protection have been proposed, including state decomposition [14], differential privacy [15], additive noise [16], and output masking [17]. Among these, output masking has received considerable attention due to its simplicity and ease of implementation. This method involves obscuring the output of agents to protect sensitive information while still allowing effective control. However, output masking relies on continuous information exchange, which can impose constraints on communication bandwidth. To address this limitation, it is necessary to develop privacy protection schemes under event-triggered mechanisms [18], [19], which can alleviate communication bandwidth constraints. In [19], the authors integrated both privacy preservation and event-triggered mechanisms into consensus and containment control but overlooked predefined-time performance. Zhang et al. [20] incorporated prescribed-time theory and privacy preservation into consensus control but neglected bandwidth constraints. In conclusion, to the best of the authors' knowledge, no existing solution simultaneously addresses the challenges of communication bandwidth, convergence time, and privacy protection in containment control, making this an area of significant research opportunity.
+
+$\dagger$: These authors contributed equally to this paper.
+
+This work was supported by the Natural Science Foundation of Sichuan Province (2022NSFSC0037), the Sichuan Science and Technology Programs (2022JDR0107, 2021YFG0130, MZGC20230069, MZGC20240139), the Fundamental Research Funds for the Central Universities (ZYGX2020J020), the Wuhu Science and Technology Plan Project (2022yf23). (Corresponding author: Mengji Shi.)
+
+According to the above discussion, this paper focuses on the privacy-preserving event-triggered predefined time containment control problem of networked agent systems. The main contributions of this paper are summarized as follows:
+
+(1) A novel event-triggered predefined-time containment control scheme is developed to optimize network efficiency while ensuring robust containment performance within a specified time frame. By employing event-triggered control, the scheme significantly reduces unnecessary data transmission, ensuring that agents communicate only when necessary. This approach effectively balances communication efficiency and system performance.
+
+(2) The proposed control scheme guarantees convergence within a predefined time, offering a distinct advantage over finite-time and fixed-time methods. Unlike these traditional methods, where convergence time is often influenced by initial conditions and system parameters, the predefined time control ensures that the desired state is consistently reached within the predetermined time frame, thereby enhancing the predictability and reliability of the system.
+
+(3) Furthermore, a privacy-preserving containment control scheme is designed to safeguard the confidentiality of agents' information by masking their outputs while maintaining accurate control. Compared to alternative privacy protection methods such as differential privacy or state decomposition, this scheme provides a simpler and more efficient solution. It ensures both privacy and communication efficiency without compromising the overall system performance, making it particularly suitable for applications with stringent privacy and bandwidth requirements.
+
+The remainder of this paper is organized as follows. Section II presents the preliminaries and formulates the containment control problem. Section III designs the privacy-preserving event-triggered predefined-time containment control input and analyzes the closed-loop system. Numerical simulation examples are provided in Section IV, and Section V concludes the paper.
+
+§ II. PRELIMINARY AND PROBLEM FORMULATION
+
+§ A. PRELIMINARIES
+
+The communication structure among agents in this study is represented by a graph topology denoted as $\mathcal{G} = \langle \mathcal{V},\mathcal{E},\mathcal{A}\rangle$ , where $\mathcal{V},\mathcal{E}$ , and $\mathcal{A}$ correspond to the set of nodes, the set of edges, and the adjacency matrix, respectively. The network consists of a total of $N = m + n$ agents, with $n$ being the number of follower agents and $m$ being the number of leader agents. The leader and follower agents are categorized into sets ${\mathcal{V}}_{L} = \{ 1,2,\ldots ,m\}$ and ${\mathcal{V}}_{F} = \{ m + 1,m + 2,\ldots ,m + n\}$ , respectively. Consequently, the overall set of nodes is formed by the union of these two sets, $\mathcal{V} = {\mathcal{V}}_{F} \cup {\mathcal{V}}_{L}$ . Following the definitions of the node sets, the adjacency matrix is represented as $\mathcal{A} = \left\lbrack {a}_{ij}\right\rbrack \in {\mathcal{R}}^{\left( {n + m}\right) \times \left( {n + m}\right) }$ , where the element ${a}_{ij}$ is positive if there exists an edge from node $j$ to $i$ within the set $\mathcal{E}$ , and zero otherwise. Assuming leaders do not have adjacent nodes, implying that leaders solely disseminate information to followers, the Laplacian matrix $\mathcal{L}$ for the network of agents is derived as $\mathcal{L} = \mathcal{D} - \mathcal{A}$ . The degree matrix, denoted by $\mathcal{D}$ , is a diagonal matrix with elements ${d}_{i}$ on the diagonal, where ${d}_{i}$ is the sum of the adjacency matrix elements in the $i$ -th row, calculated as ${d}_{i} = \mathop{\sum }\limits_{{k = 1}}^{{n + m}}{a}_{ik}$ .
+
+Based on the aforementioned definitions, the Laplacian matrix is constructed as follows:
+
+$$
+\mathcal{L} = \left\lbrack \begin{matrix} {\mathbf{0}}_{m \times m} & {\mathbf{0}}_{m \times n} \\ {\mathcal{L}}_{L} & {\mathcal{L}}_{F} \end{matrix}\right\rbrack , \tag{1}
+$$
+
+where the sub-Laplacian matrix specific to the follower agents is denoted as ${\mathcal{L}}_{F} \in {\mathcal{R}}^{n \times n}$ , and the sub-Laplacian matrix that captures the interactions between leader and follower agents is represented by ${\mathcal{L}}_{L} \in {\mathcal{R}}^{n \times m}$ . The elements of ${\mathcal{L}}_{F}$ , denoted as $\left\lbrack {l}_{ij}\right\rbrack$ , are defined such that when indices match, ${l}_{ij}$ equals the sum of the adjacency matrix entries ${a}_{ip}$ for all $p$ in the set of nodes $\mathcal{V}$ , and when indices differ, ${l}_{ij}$ is the negation of the corresponding adjacency entry ${a}_{ij}$ . Mathematically, this is expressed as:
+
+$$
+{l}_{ij} = \left\{ \begin{array}{ll} \mathop{\sum }\limits_{{p = 1}}^{{m + n}}{a}_{ip}, & \text{ if }i = j, \\ - {a}_{ij}, & \text{ otherwise. } \end{array}\right.
+$$
+
+The subsequent assumption about the communication framework is established to guarantee the feasibility of containment control within the networked agent systems.
+
+Assumption 1: For each follower, there exists at least one leader with a directed path from that leader to the follower.
+
+Definition 1 ([21]): Let ${Z}_{n}$ be the collection of all $n \times n$ square matrices with non-positive off-diagonal elements, denoted as ${Z}_{n} \subset {\mathcal{R}}^{n \times n}$ . A matrix $Y$ is classified as a nonsingular M-matrix if it belongs to ${Z}_{n}$ and all its eigenvalues possess positive real parts.
+
+Lemma 1 ([4]): Under Assumption 1, it is established that the matrix ${\mathcal{L}}_{F}$ qualifies as a nonsingular M-matrix. Furthermore, it holds that $- {\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{\mathbf{1}}_{m} = {\mathbf{1}}_{n}$ , and every component of $- {\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}$ is nonnegative.
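The properties stated in Lemma 1 are easy to check numerically. The sketch below builds a small hypothetical topology (two leaders with no in-edges, three followers) and verifies that ${\mathcal{L}}_{F}$ is a nonsingular M-matrix and that $-{\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}$ is nonnegative with unit row sums; the graph and the NumPy usage are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical graph: nodes 1-2 are leaders with no in-edges, nodes 3-5 are
# followers; a_ij = 1 if agent i receives information from agent j.
A = np.array([[0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0],
              [1, 1, 0, 1, 0],
              [1, 0, 1, 0, 1],
              [0, 1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # Laplacian L = D - A
LF, LL = L[2:, 2:], L[2:, :2]           # follower and leader sub-blocks

# Lemma 1: LF is a nonsingular M-matrix (all eigenvalues have positive real part)
eig = np.linalg.eigvals(LF)
print(eig.real.min() > 0)               # True

# ... and -LF^{-1} LL is nonnegative with unit row sums (convex mixing weights)
W = -np.linalg.solve(LF, LL)
print(W.min() >= 0)                     # True
print(W.sum(axis=1))                    # each row sums (numerically) to 1
```

The rows of `W` are exactly the convex-combination weights ${\varepsilon }_{ik}$ that appear later in Definition 3.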
+
+Definition 2 ([22]): Let $\Lambda$ be a subset of ${\mathcal{R}}^{n}$ . If for any ${z}_{1},{z}_{2} \in \Lambda$ and a scalar $0 < \gamma < 1$ , the linear combination $\left( {1 - \gamma }\right) {z}_{1} + \gamma {z}_{2}$ also belongs to $\Lambda$ , then $\Lambda$ is deemed a convex set. Given a vector $\chi$ with elements ${\chi }_{i}$ , the convex hull of $\chi$ , denoted as $\operatorname{Co}\left( \chi \right)$ , is the set of all vectors that can be expressed as $\mathop{\sum }\limits_{{i = 1}}^{n}{\gamma }_{i}{\chi }_{i}$ , where each ${\gamma }_{i} \geq 0$ and the sum $\mathop{\sum }\limits_{{i = 1}}^{n}{\gamma }_{i} = 1$ .
+
+§ B. TIME-VARYING TRANSFORMATION
+
+The objective of privacy-preserving containment control is to guide the followers into the convex hull spanned by the leaders without revealing the initial states of the participating agents. To this end, the paper integrates a dynamic, time-varying transformation into the traditional containment control paradigm. This transformation enables each agent to modify its state according to an evolving function before sharing information with its neighbors. The employed transformation function is standardized and continuously time-varying, characterized as
+
+$$
+\Lambda : {\mathcal{R}}^{ + } \times {\mathcal{R}}^{h} \times {\mathcal{R}}^{d} \rightarrow {\mathcal{R}}^{h} \tag{2}
+$$
+
+$$
+\left( {t,x,m}\right) \mapsto y\left( t\right) = \Lambda \left( {t,x\left( t\right) ,m}\right) ,
+$$
+
+where $x = {\left\lbrack {x}_{1},\ldots ,{x}_{h}\right\rbrack }^{T} \in {\mathcal{R}}^{h}$ denotes the agent's true state, $y = {\left\lbrack {y}_{1},\ldots ,{y}_{h}\right\rbrack }^{T} \in {\mathcal{R}}^{h}$ is the hidden state output after the time-varying transformation (both states have equal dimensions), and the parameter set $m \in {\mathcal{R}}^{d}$ is the key of the time-varying transformation. The state output after the time-varying transformation is uniformly referred to as the hidden state in this paper. Suppose there exists a common system $\dot{x} = f\left( x\right)$ ; the dynamics after applying the time-varying transformation can be expressed as $\dot{x} = f\left( y\right)$ with $y = \Lambda \left( {t,x,m}\right)$ . If $\left| {\Lambda \left( {t,x,m}\right) - x\left( t\right) }\right|$ approaches zero under the given key $m$ , the transformation is referred to as a finite time-varying transformation, and the following condition holds
+
+$$
+\left\{ \begin{array}{l} \mathop{\lim }\limits_{{t \rightarrow \Omega }}\Lambda \left( {t,x\left( t\right) ,m}\right) = x\left( t\right) , \\ \Lambda \left( {t,x\left( t\right) ,m}\right) = x\left( t\right) ,t \in \lbrack \Omega ,\infty ), \end{array}\right.
+$$
+
+where $\Omega$ denotes a finite time constant at which the hidden state converges to the real state. The value of $\Omega$ is primarily determined by the parameters in the key $m$ .
+
+§ C. CONTAINMENT CONTROL PROBLEM DESCRIPTION
+
+In this paper, we focus on a single-integrator networked agent system. The dynamics of the follower agents are characterized by the following equation:
+
+$$
+{\dot{x}}_{i}\left( t\right) = {u}_{i}\left( t\right) ,i \in {\mathcal{V}}_{F}, \tag{3}
+$$
+
+where ${x}_{i}\left( t\right)$ and ${u}_{i}\left( t\right)$ denote the position and control input of the $i$ th follower agent, respectively.
+
+Additionally, the dynamics of the leader agents are governed by the following equation:
+
+$$
+{\dot{x}}_{i}\left( t\right) = 0,i \in {\mathcal{V}}_{L}, \tag{4}
+$$
+
+where ${x}_{i}\left( t\right)$ denotes the position of the $i$ th leader agent. The above dynamics mean that the leader agents' positions are stationary.
+
+Definition 3: Consider a single-integrator networked agent system comprising $m$ leader agents and $n$ follower agents. Predefined-time containment control requires that the position states of the followers converge to the convex hull defined by the leaders within a specified time $T$ . Specifically, for any initial condition, the convergence is characterized by the satisfaction of the following set of equations:
+
+$$
+\mathop{\lim }\limits_{{t \rightarrow T}}\left| {{x}_{i}\left( t\right) - \mathop{\sum }\limits_{{k = 1}}^{m}{\varepsilon }_{ik}{x}_{k}\left( t\right) }\right| = 0, \tag{5}
+$$
+
+where ${\varepsilon }_{ik} \in \mathcal{R},{\varepsilon }_{ik} \geq 0$ and $\mathop{\sum }\limits_{{k = 1}}^{m}{\varepsilon }_{ik} = 1,i \in {\mathcal{V}}_{F},k \in {\mathcal{V}}_{L}$ .
+
+§ III. MAIN RESULTS
+
+This section designs a decentralized finite-time-varying transformation function to serve as a privacy mask and incorporates the event-triggered mechanism and predefined-time theory to enhance the performance of networked agent systems. The proposed containment controller jointly accounts for privacy preservation, communication bandwidth constraints, and convergence speed.
+
+To safeguard the confidentiality of agents' initial state information, we introduce mutually independent functions into the process of information exchange among agents. Furthermore, the aforementioned time-varying function can be implemented as
+
+$$
+\left\{ \begin{array}{l} \mathop{\lim }\limits_{{t \rightarrow {T}_{i}}}{\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) , \\ {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) ,t \in \lbrack {T}_{i},\infty ). \end{array}\right. \tag{6}
+$$
+
+According to the requirements of the finite-time varying function, the received information of follower agent $j$ from agent $i$ can be designed as
+
+$$
+\left\{ \begin{array}{ll} {\mathrm{R}}_{i}^{m}\left( t\right) = {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) & \\ {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) + {a}_{i}{t}^{2} + {b}_{i}t + {c}_{i}, & t \in \left\lbrack {0,{\Omega }_{i}}\right) \\ {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) , & t \in \left\lbrack {{\Omega }_{i},\infty }\right) \end{array}\right.
+$$
+
+where ${\Omega }_{i}$ satisfies
+
+$$
+\left\{ \begin{array}{l} {\Omega }_{i} = \frac{-{b}_{i} - \sqrt{{b}_{i}^{2} - 4{a}_{i}{c}_{i}}}{2{a}_{i}},{b}_{i} \geq 0,{c}_{i} \geq 0,\text{ if }{a}_{i} \in \left( {0,\infty }\right) , \\ {\Omega }_{i} = \frac{-{b}_{i} + \sqrt{{b}_{i}^{2} - 4{a}_{i}{c}_{i}}}{2{a}_{i}},{b}_{i} < 0,{c}_{i} < 0,\text{ if }{a}_{i} \in \left( {-\infty ,0}\right) , \end{array}\right.
+$$
+
+and ${a}_{i},{b}_{i},{c}_{i} \in \mathcal{R}$ , each agent has its distinctive encode key, denoted as ${m}_{i} = \left\{ {{a}_{i},{b}_{i},{c}_{i}}\right\}$ , noting that individual encode keys remain undisclosed to other agents.
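As a concrete illustration, the following sketch implements one agent's quadratic output mask with a hypothetical key ${m}_{i} = \left\{ {a}_{i},{b}_{i},{c}_{i}\right\}$ ; the values here are chosen purely so that the quadratic has a positive root ${\Omega }_{i}$ (taking $a > 0$ and $c < 0$ for illustration). Before ${\Omega }_{i}$ the transmitted output hides the true state, and from ${\Omega }_{i}$ onwards the mask vanishes.

```python
import math

# Hypothetical key m_i = {a, b, c}; a > 0 and c < 0 guarantee one positive root.
a, b, c = 1.0, 0.5, -2.0
Omega = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)  # positive root of a t^2 + b t + c = 0

def masked_output(t, x):
    """Lambda_i(t, x, m_i): hidden output transmitted to neighbors."""
    return x + (a * t * t + b * t + c) if t < Omega else x

x0 = 7.0
assert masked_output(0.0, x0) != x0                 # the initial state is hidden
assert masked_output(Omega, x0) == x0               # the mask has vanished at Omega
assert abs(a * Omega**2 + b * Omega + c) < 1e-9     # Omega is indeed a root
```

Since the additive term ${a}_{i}{t}^{2} + {b}_{i}t + {c}_{i}$ depends on a key undisclosed to other agents, an eavesdropper observing the early transmissions sees only a shifted state rather than ${x}_{i}\left( 0\right)$ .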
+
+Building upon the previously devised time-varying function and the acquired hidden information from neighboring agents, the predefined time containment control input for the $i$ th agent can be expressed as follows
+
+$$
+\left\{ \begin{array}{l} {u}_{i}\left( t\right) = - \left( {\rho + \delta \frac{\dot{\mu }}{\mu }}\right) \mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{\mathrm{R}}_{i}^{m}\left( t\right) - {\mathrm{R}}_{j}^{m}\left( t\right) }\right) , \\ {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) + {a}_{i}{t}^{2} + {b}_{i}t + {c}_{i},t \in \left\lbrack {0,{\Omega }_{i}}\right) , \\ {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) ,t \in \left\lbrack {{\Omega }_{i},\infty }\right) , \end{array}\right. \tag{7}
+$$
+
+where $\rho > 0$ and $\delta > 0$ are control gains, and $\mu$ denotes a time-varying scaling function, which takes the form of
+
+$$
+\mu \left( t\right) = \left\{ \begin{matrix} {\left( \frac{T}{T - t}\right) }^{h}, & t \in \lbrack 0,T), \\ 0, & t \in \lbrack T,\infty ), \end{matrix}\right.
+$$
+
+where the real number $h$ satisfies the condition $h > 2$ .
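For reference, the time-varying gain term $\dot{\mu }/\mu$ used in the controller follows directly from the definition of $\mu$ on $\lbrack 0,T)$ :

$$
\frac{\dot{\mu }\left( t\right) }{\mu \left( t\right) } = \frac{d}{dt}\ln \mu \left( t\right) = h\frac{d}{dt}\left\lbrack {\ln T - \ln \left( {T - t}\right) }\right\rbrack = \frac{h}{T - t},\; t \in \lbrack 0,T),
$$

which grows unboundedly as $t \rightarrow T$ ; this blow-up of the feedback gain is what drives convergence within the predefined time $T$ regardless of initial conditions.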
+
+Considering the practical challenges encountered in networked agent systems, which frequently involve communication limitations, the incorporation of an event-triggered mechanism can considerably reduce the utilization of communication resources. In this paper, we integrate the event-triggered mechanism into the aforementioned controller.
+
+Assumption 2: When employing an event-triggered mechanism, every agent is assumed to be capable of monitoring its own state information in real time. Furthermore, agents disseminate state updates only upon the fulfillment of the designed event-triggering condition.
+
+To ensure synchronization among all agents, we establish a triggering sequence denoted as $\left\{ {{t}_{1},{t}_{2},\ldots ,{t}_{k}}\right\}$ . This sequential arrangement guarantees that all agents update their controllers simultaneously at a unified triggering time. As a result, the control input (7) can be reformulated as
+
+$$
+{u}_{i}\left( t\right) = - \left( {\rho + \delta \frac{\dot{\mu }}{\mu }}\right) \mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{\mathrm{R}}_{i}^{m}\left( {t}_{k}\right) - {\mathrm{R}}_{j}^{m}\left( {t}_{k}\right) }\right) . \tag{8}
+$$
+
+For each agent, the state measurement error between the last sampled hidden output and the current hidden output is
+
+$$
+{e}_{i}^{m}\left( t\right) = {\mathrm{R}}_{i}^{m}\left( {t}_{k}\right) - {\mathrm{R}}_{i}^{m}\left( t\right) ,t \in \left\lbrack {{t}_{k},{t}_{k + 1}}\right) . \tag{9}
+$$
+
+Substituting the state measurement error and the controller into the agent's dynamics yields
+
+$$
+{\dot{x}}_{i}\left( t\right) = - {\mathrm{K}}_{\rho }\mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{\mathrm{R}}_{i}^{m}\left( {t}_{k}\right) - {\mathrm{R}}_{j}^{m}\left( {t}_{k}\right) }\right)
+$$
+
+$$
+= - {\mathrm{K}}_{\rho }\mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{e}_{i}^{m}\left( t\right) + {\mathrm{R}}_{i}^{m}\left( t\right) - \left( {{e}_{j}^{m}\left( t\right) + {\mathrm{R}}_{j}^{m}\left( t\right) }\right) }\right)
+$$
+
+$$
+= - {\mathrm{K}}_{\rho }\mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{e}_{i}^{m}\left( t\right) - {e}_{j}^{m}\left( t\right) }\right)
+$$
+
+$$
+- {\mathrm{K}}_{\rho }\mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{\mathrm{R}}_{i}^{m}\left( t\right) - {\mathrm{R}}_{j}^{m}\left( t\right) }\right) ,
+$$
+
+where ${\mathrm{K}}_{\rho } = \rho + \delta \frac{\dot{\mu }}{\mu }$ , and its corresponding compact form can be represented as
+
+$$
+\dot{x}\left( t\right) = - {\mathrm{K}}_{\rho }\mathcal{L}{\mathrm{R}}^{m}\left( t\right) - {\mathrm{K}}_{\rho }\mathcal{L}{e}^{m}\left( t\right)
+$$
+
+$$
+ = - {\mathrm{K}}_{\rho }\left( {{\mathcal{L}}_{F}\left( {{\mathrm{R}}_{F}^{m}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{L}\left( {{\mathrm{R}}_{L}^{m}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right) }\right) ,
+$$
+
+where $x\left( t\right) = {\operatorname{col}}_{i}^{n + m}\left\lbrack {{x}_{i}\left( t\right) }\right\rbrack ,{\mathrm{R}}_{F}^{m}\left( t\right) = {\operatorname{col}}_{i}^{n}\left\lbrack {{\mathrm{R}}_{Fi}^{m}\left( t\right) }\right\rbrack$ , ${\mathrm{R}}_{L}^{m}\left( t\right) = {\operatorname{col}}_{i}^{m}\left\lbrack {{\mathrm{R}}_{Li}^{m}\left( t\right) }\right\rbrack ,{e}_{L}^{m}\left( t\right) = {\operatorname{col}}_{i}^{m}\left\lbrack {{e}_{Li}^{m}\left( t\right) }\right\rbrack$ and ${e}_{F}^{m}\left( t\right) = {\operatorname{col}}_{i}^{n}\left\lbrack {{e}_{Fi}^{m}\left( t\right) }\right\rbrack$ . Besides, let $A = {\operatorname{col}}_{i}^{n + m}\left\lbrack {a}_{i}\right\rbrack ,B = {\operatorname{col}}_{i}^{n + m}\left\lbrack {b}_{i}\right\rbrack$ and $C = {\operatorname{col}}_{i}^{n + m}\left\lbrack {c}_{i}\right\rbrack$ .
+
+Accordingly, the whole closed-loop error system is
+
+$$
+\left\{ \begin{array}{l} \dot{x}\left( t\right) = - {\mathrm{K}}_{\rho }\mathcal{L}{\mathrm{R}}^{m}\left( t\right) - {\mathrm{K}}_{\rho }\mathcal{L}{e}^{m}\left( t\right) \\ {\mathrm{R}}^{m}\left( t\right) = x\left( t\right) + m\left( t\right) \end{array}\right. \tag{10}
+$$
+
+where
+
+$$
+m\left( t\right) = \left\{ \begin{array}{l} A{t}^{2} + {Bt} + C,t \in \left\lbrack {0,{T}^{1}}\right) \\ {A}_{{m}_{1}}{t}^{2} + {B}_{{m}_{1}}t + C,t \in \left\lbrack {{T}^{1},{T}^{2}}\right) \\ \vdots \\ {A}_{{m}_{1}\ldots {m}_{N - 1}}{t}^{2} + {B}_{{m}_{1}\ldots {m}_{N - 1}}t + C,t \in \left\lbrack {{T}^{N - 1},{T}^{N}}\right) \\ 0,t \in \left\lbrack {{T}^{N},\infty }\right) \end{array}\right.
+$$
+
+To address the predefined time privacy-preserving containment control under the event-triggered mechanism, we design the event-triggering condition (ETC) for the networked agent systems as
+
+$$
+{t}_{k + 1} = \inf \left\{ {t > {t}_{k} : \begin{Vmatrix}{{e}^{m}\left( t\right) }\end{Vmatrix} \geq \left( {1 - \varepsilon }\right) \frac{{\mathrm{K}}_{\rho }^{\lambda }}{{\mathrm{K}}_{\rho }}\frac{\parallel \varpi \left( t\right) \parallel }{\parallel \mathcal{L}\parallel }}\right\} , \tag{11}
+$$
+
+where ${\mathrm{K}}_{\rho } = \rho + \delta \frac{\dot{\mu }}{\mu }$ , ${\mathrm{K}}_{\rho }^{\lambda } = \rho {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) + \delta \frac{\dot{\mu }}{\mu }$ , $\varepsilon \in \left( {0,1}\right)$ , and ${\lambda }_{2}\left( {\mathcal{L}}_{F}\right)$ is the second smallest eigenvalue of the Laplacian sub-matrix ${\mathcal{L}}_{F}$ . Upon the occurrence of a triggering event, all agents discard their previous sampled states and sample their current states to update their controllers. Subsequently, they transmit the newly sampled states to their neighboring agents. Throughout the inter-event interval, the control inputs remain constant until the next triggering instant, at which the threshold in the event-triggering condition is reached.
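To make the closed loop concrete, the sketch below simulates a 1-D instance of the sampled controller (8) with two static leaders and three followers. It uses a simplified event-triggered rule (a constant threshold on the sampling error rather than the exact condition (11)) and omits the output masks, which vanish before $T$ anyway; the topology, gains, and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical topology: nodes 1-2 are static leaders, nodes 3-5 are followers.
A = np.array([[0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0],
              [1, 1, 0, 1, 0],
              [1, 0, 1, 0, 1],
              [0, 1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
LF, LL = L[2:, 2:], L[2:, :2]

xL = np.array([0.0, 1.0])                    # leaders fixed on the line
xF = np.array([3.0, -2.0, 5.0])              # followers start outside the hull
target = -np.linalg.solve(LF, LL @ xL)       # convex combinations of the leaders

T, h, rho, delta = 1.5, 3.0, 2.0, 1.0        # predefined time and control gains
dt, t, events = 1e-4, 0.0, 0
xs = xF.copy()                               # last sampled follower states
while t < T - 0.01:                          # stop just before the gain blows up
    if np.linalg.norm(xs - xF) >= 0.01:      # simplified constant-threshold trigger
        xs, events = xF.copy(), events + 1
    gain = rho + delta * h / (T - t)         # rho + delta*mu_dot/mu for mu = (T/(T-t))^h
    xF = xF + dt * (-gain * (LF @ xs + LL @ xL))
    t += dt

print(events, np.max(np.abs(xF - target)))   # far fewer updates than steps; small error
```

In this run the followers enter the hull $\left\lbrack {0,1}\right\rbrack$ well before $T = {1.5}$ , and the controller is updated only at the triggering instants rather than at every integration step.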
+
+Theorem 1: Under the event-triggering condition (11) and control input (8), predefined-time privacy-preserving containment control of the networked agent system with graph $\mathcal{G}$ is achieved, provided that the parameter in the ETC satisfies $\varepsilon \in \left( {0,1}\right)$ .
+
+Proof: The proof of Theorem 1 consists of a convergence analysis and a privacy analysis.
+
+(I) Convergence analysis: The vector $x\left( t\right)$ can be divided into sub-vector ${x}_{F}\left( t\right)$ and ${x}_{L}\left( t\right)$ . Based on Definition 3, we define the containment error as $\varpi \left( t\right) = {x}_{F}\left( t\right) -$ $\left( {-{\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{x}_{L}\left( t\right) }\right)$ , and Lyapunov function is adopted as
+
+$$
+V\left( t\right) = \varpi {\left( t\right) }^{T}\varpi \left( t\right) . \tag{12}
+$$
+
+Since the leaders are static by the dynamics model (4), i.e., ${\dot{x}}_{L}\left( t\right) = 0$ , it follows that
+
+$$
+\dot{\varpi }\left( t\right) = {\dot{x}}_{F}\left( t\right) - \left( {-{\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{\dot{x}}_{L}\left( t\right) }\right) = {\dot{x}}_{F}\left( t\right) .
+$$
+
+Taking the derivative of the Lyapunov function $V\left( t\right)$ , one obtains the following expression
+
+$$
+\dot{V}\left( t\right) = \varpi {\left( t\right) }^{T}\dot{\varpi }\left( t\right) = \varpi {\left( t\right) }^{T}{\dot{x}}_{F}\left( t\right)
+$$
+
+$$
+= \varpi {\left( t\right) }^{T}\left( {-{\mathrm{K}}_{\rho }\left( {{\mathcal{L}}_{F}\left( {{\mathrm{R}}_{F}^{m}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{L}\left( {{\mathrm{R}}_{L}^{m}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right) }\right) }\right)
+$$
+
+$$
+= - \rho \varpi {\left( t\right) }^{T}\left( {{\mathcal{L}}_{F}\left( {{\mathrm{R}}_{F}^{m}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{L}\left( {{\mathrm{R}}_{L}^{m}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right) }\right)
+$$
+
+$$
+- \delta \frac{\dot{\mu }}{\mu }\varpi {\left( t\right) }^{T}\left( {{\mathcal{L}}_{F}\left( {{\mathrm{R}}_{F}^{m}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{L}\left( {{\mathrm{R}}_{L}^{m}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right) }\right) .
+$$
+
+To satisfy the privacy-preserving requirement of designing a time-varying transformation function, it is essential to ensure that ${T}^{N}$ , the moment at which the final time-varying function converges to its corresponding true state, is less than $T$ , for all $t \in \lbrack 0,T)$ . Notably, the value of $m\left( t\right)$ decreases monotonically as $t$ increases in the interval $t \in \left\lbrack {0,{T}^{N}}\right)$ , and it attains zero if $t \in \left\lbrack {{T}^{N},T}\right)$ . The result further derives the condition $\mathop{\lim }\limits_{{t \rightarrow {T}_{N}}}{\mathrm{R}}_{F}^{m}\left( t\right) = {x}_{F}\left( t\right) ,\mathop{\lim }\limits_{{t \rightarrow {T}_{N}}}{\mathrm{R}}_{L}^{m}\left( t\right) = {x}_{L}\left( t\right)$ . Based on Lemma 1 in [11], it follows that
+
+$$
+{\mathcal{L}}_{F}\left( {{x}_{F}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{L}\left( {{x}_{L}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right)
+$$
+
+$$
+= {\mathcal{L}}_{F}\left( {\left( {{x}_{F}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}\left( {{x}_{L}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right) }\right)
+$$
+
+$$
+= {\mathcal{L}}_{F}\left( {{x}_{F}\left( t\right) + {\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{x}_{L}\left( t\right) }\right) + {\mathcal{L}}_{F}{e}_{F}^{m}\left( t\right) + {\mathcal{L}}_{L}{e}_{L}^{m}\left( t\right)
+$$
+
+$$
+= {\mathcal{L}}_{F}\varpi \left( t\right) + \mathcal{L}{e}^{m}\left( t\right) .
+$$
+
+Noting that ${\mathcal{L}}_{F} \in {\mathcal{R}}^{n \times n}$ is the sub-Laplacian matrix among the follower agents, we have $\varpi {\left( t\right) }^{T}{\mathcal{L}}_{F}\varpi \left( t\right) \geq$ ${\lambda }_{2}\left( {\mathcal{L}}_{F}\right) \varpi {\left( t\right) }^{T}\varpi \left( t\right)$ , which yields
+
+$$
+\dot{V}\left( t\right) \leq - {\mathrm{K}}_{\rho }^{\lambda }V\left( t\right) - {\mathrm{K}}_{\rho }\varpi {\left( t\right) }^{T}\left( {{\mathcal{L}}_{F}{e}_{F}^{m}\left( t\right) + {\mathcal{L}}_{L}{e}_{L}^{m}\left( t\right) }\right)
+$$
+
+$$
+= - \varepsilon {\mathrm{K}}_{\rho }^{\lambda }V\left( t\right) - \left( {1 - \varepsilon }\right) {\mathrm{K}}_{\rho }^{\lambda }V\left( t\right) - {\mathrm{K}}_{\rho }\varpi {\left( t\right) }^{T}\mathcal{L}{e}^{m}\left( t\right)
+$$
+
+$$
+\leq - \varepsilon {\mathrm{K}}_{\rho }^{\lambda }V\left( t\right) - \left( {1 - \varepsilon }\right) {\mathrm{K}}_{\rho }^{\lambda }\parallel \varpi {\parallel }^{2} + {\mathrm{K}}_{\rho }\parallel \varpi \parallel \begin{Vmatrix}{\mathcal{L}{e}^{m}}\end{Vmatrix}.
+$$
+
+Considering the designed event-triggering condition (11) and the condition $\varepsilon \in \left( {0,1}\right)$ , we conclude that
+
+$$
+{\mathrm{K}}_{\rho }\begin{Vmatrix}{\mathcal{L}{e}^{m}\left( t\right) }\end{Vmatrix} \leq \left( {1 - \varepsilon }\right) {\mathrm{K}}_{\rho }^{\lambda }\parallel \varpi \left( t\right) \parallel .
+$$
+
+Accordingly, since $\delta \geq 1$ , it yields
+
+$$
+\dot{V}\left( t\right) \leq - \left( {\rho {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) + \frac{\dot{\mu }}{\mu }}\right) \varpi {\left( t\right) }^{T}\varpi \left( t\right) = - \rho {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) V - \frac{\dot{\mu }}{\mu }V.
+$$
+
+
+Fig. 1. The communication topology among twelve agents.
+
+According to Lemma 1 in [11], one has
+
+$$
+V\left( t\right) \leq \mu {\left( t\right) }^{-2}{e}^{-\rho {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) \left( {t - {T}^{N}}\right) }V\left( {T}^{N}\right) . \tag{13}
+$$
+
+It follows that $\parallel \varpi \left( t\right) \parallel \leq \mu {\left( t\right) }^{-1}{e}^{-\rho {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) \left( {t - {T}^{N}}\right) }\begin{Vmatrix}{\varpi \left( {T}^{N}\right) }\end{Vmatrix}$ . Since $\mathop{\lim }\limits_{{t \rightarrow {T}^{ - }}}\mu {\left( t\right) }^{-1} = 0$ , we obtain $\mathop{\lim }\limits_{{t \rightarrow {T}^{ - }}}\parallel \varpi \left( t\right) \parallel = 0$ ; that is, as $t \rightarrow {T}^{ - }$ , the condition ${x}_{F}\left( t\right) -$ $\left( {-{\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{x}_{L}\left( t\right) }\right) = 0$ holds. By equation (46) of [19] and Definitions 2-3, $- {\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{x}_{L}\left( t\right)$ is the convex-hull signal formed by the leaders, so $\varpi \left( t\right) = 0$ implies that all followers converge into the convex hull formed by the leaders. Therefore, containment control of the networked agent system is achieved within the predefined time $T$ . Since the finite time-varying transformation is applied only on the interval $\lbrack 0,T)$ , the predefined-time containment problem reduces, for $t \in \lbrack T,\infty )$ , to the general case discussed in [11]; interested readers are referred to the detailed proof of Theorem 1 in [11].
+
+(II) Privacy analysis: Consider a scenario in which the dynamics $f\left( \cdot \right)$ of all agents are publicly known and each agent has access to the hidden output states ${\mathrm{R}}_{i}^{m}\left( t\right)$ of its neighboring agents, while the true states ${x}_{i}\left( t\right)$ and the encoding keys $\left\{ {{a}_{i},{b}_{i},{c}_{i}}\right\}$ remain private to each agent. The information accessible to an honest-but-curious agent comprises the unsigned graph $\mathcal{G}$ , its own state and its set of neighboring agents, and the hidden states of itself and its neighbors. After the finite time-varying transformation conceals agent $i$ 's initial state, the resulting hidden output ${\mathrm{R}}_{i}^{m}\left( t\right)$ bears no resemblance to the true initial value ${x}_{i}\left( 0\right)$ . Consequently, no information set acquired by an honest-but-curious agent suffices to determine agent $i$ 's true initial state, nor can such an agent reconstruct it by employing the findings presented in [23]. By the same argument, even external eavesdroppers cannot obtain the true initial state. Hence, the initial state remains unattainable by all parties involved.
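To illustrate the masking idea, the following sketch assumes a generic time-varying transformation with a gain $m(t)$ that vanishes at ${T}^{N}$. The paper's actual transformation and the role of the encoding keys $\{a_i, b_i, c_i\}$ are defined earlier in the text; this specific functional form is only an assumption chosen so that the hidden output differs from the true initial state while converging to the true state as $t \rightarrow {T}^{N}$.

```python
import numpy as np

T_N = 1.0  # assumed instant at which the mask vanishes, with T_N < T

def m(t):
    """Illustrative gain: monotonically decreasing on [0, T_N), zero afterwards."""
    return max(T_N - t, 0.0)

def hidden_output(x_i, t, a_i, b_i, c_i):
    """Sketch of a masked output R_i^m(t).

    The form of the mask is an assumption for illustration only: it makes
    R_i^m(0) != x_i(0) (initial state hidden) while lim_{t->T_N} R_i^m(t)
    equals the true state x_i(t).
    """
    return x_i + m(t) * (a_i * np.sin(b_i * t) + c_i)
```

An eavesdropper observing only `hidden_output` values cannot recover `x_i(0)` without the keys, yet the masked signal coincides with the true state once `t >= T_N`.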
+
+## IV. SIMULATION
+
+In this section, several numerical simulations are conducted to verify the effectiveness of the theoretical analysis. The simulated networked agent system comprises 12 agents: six followers and six leaders. Fig. 1 displays the communication topology among the agents. The numerical simulations are performed in 2-D space. The initial position states of all agents are set as ${x}^{1}\left( 0\right) = {\left\lbrack -{10},0,{10},{10},0, - {10}, - {30}, - 5,{20},{30},5, - {15}\right\rbrack }^{T}$ and ${x}^{2}\left( 0\right) = {\left\lbrack 5,5,5, - 5, - 5, - 5,5,{20},{25}, - {10}, - {15}, - {20}\right\rbrack }^{T}$ . The parameter $\varepsilon$ is set to 0.5, and the predefined time is $T = {1.5}\mathrm{\;s}$ . The encoding keys are selected as
+
+$$
+A = {\left\lbrack -5, - 9, - 5,8, - 3,6, - 4,5,6, - 4,5, - 3\right\rbrack }^{T},
+$$
+
+$$
+B = {\left\lbrack 2,4,3, - 4,1, - 3,2, - 1, - 3,2, - 1,1\right\rbrack }^{T},
+$$
+
+$$
+C = {\left\lbrack 3,4,1, - 3,2, - 2,1, - 3, - 2,1, - 3,2\right\rbrack }^{T}.
+$$
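The containment dynamics themselves can be checked with a stripped-down simulation. The sketch below omits the privacy mask and the event trigger, and uses an illustrative 2-follower/2-leader graph rather than the 12-agent topology of Fig. 1; the steady state matches the convex-hull reference $-{\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{x}_{L}$ used in the proof.

```python
import numpy as np

# Stripped-down containment sketch: 2 followers (agents 0,1) and 2 static
# leaders (agents 2,3); the partitioned Laplacian blocks are illustrative.
L_F = np.array([[2.0, -1.0], [-1.0, 2.0]])  # follower-follower sub-Laplacian
L_L = np.array([[-1.0, 0.0], [0.0, -1.0]])  # follower-leader coupling
x_L = np.array([0.0, 10.0])                 # static leaders span the hull [0, 10]

x_F = np.array([-30.0, 30.0])               # followers start outside the hull
dt = 0.01
for _ in range(2000):
    x_F = x_F + dt * (-(L_F @ x_F + L_L @ x_L))  # u = -(L_F x_F + L_L x_L)

# Convex-hull reference from Definition 3: followers settle at
# -L_F^{-1} L_L x_L = [10/3, 20/3], inside the leaders' hull [0, 10].
target = -np.linalg.inv(L_F) @ L_L @ x_L
```

The exponential decay of the containment error in this toy run mirrors the bound (13), with the predefined-time mechanism accelerating convergence in the full scheme.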
+
+
+Fig. 2. The true and masked states of all agents.
+
+
+Fig. 3. The control input of the follower agents.
+
+The simulation results are depicted in Figs. 2-5. The trajectories of the agents in the ${x}^{1}$ direction are illustrated in Fig. 2, with the subfigure highlighting the masked trajectories of all agents. This indicates that the proposed method effectively preserves the privacy of the agents' initial states and achieves convergence within the predefined time of 1.5 s. Fig. 3 presents the control input trajectories of all follower agents, where the abrupt changes are attributed to the event-triggered mechanism. Fig. 4 demonstrates the fulfillment of the event-triggering condition: when the designed boundary threshold is exceeded, the agents' states are sampled and updated. Fig. 5 shows that all followers successfully move from their initial positions into the convex hull formed by the fixed leaders, achieving privacy-preserving event-triggered predefined-time containment control for the networked agent system.
+
+
+Fig. 4. The trajectory of the state measurement error and boundary threshold.
+
+
+Fig. 5. The trajectory of all agents in the 2-D plane under designed containment control input. (Square markers represent the followers, and circular markers represent the leaders. Leaders form a rectangular convex hull.)
+
+## V. CONCLUSION
+
+This paper has addressed the privacy-preserving event-triggered predefined-time containment control problem for networked agent systems. A novel containment control scheme has been developed that integrates privacy protection with an event-triggered mechanism, improving network efficiency by minimizing unnecessary data transmission while ensuring robust containment within the predefined time. The proposed scheme preserves the confidentiality of agents' information through output masking, thereby maintaining both privacy and control accuracy, and its effectiveness has been verified through simulation results. Note that this study has focused on static leaders; future research will extend the investigation to containment control problems under dynamic leaders.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/4T963GENPI/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/4T963GENPI/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..fe1b52861ac4c9a7091a40fbd835ccf78f5265ba
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/4T963GENPI/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,181 @@
+# Unsupervised Feature Fusion Model for Marine Raft Aquaculture Semantic Segmentation Based on SAR Images
+
+${1}^{\text{st }}$ Mengmeng Li
+
+School of Information Science and Engineering
+
+Dalian Polytechnic University
+
+Dalian, China
+
+220520854000601@xy.dlpu.edu.cn
+
+${2}^{\text{nd }}$ Xinzhe Wang
+
+School of Information Science and Engineering
+
+Dalian Polytechnic University
+
+Dalian, China
+
+wxzagm@dlpu.edu.cn
+
+${3}^{\text{rd }}$ Jianchao Fan *
+
+School of Control Science and Engineering
+
+Dalian University of Technology
+
+Dalian, China
+
+fjchao@dlut.edu.cn
+
+Abstract-Marine aquaculture semantic segmentation provides a scientific basis for marine regulation and plays an important role in marine ecological protection and management. Currently, most high-performance marine aquaculture segmentation networks are trained by supervised learning. This approach requires collecting a large number of accurately hand-labelled samples for training, but such labelled samples are difficult to obtain. To solve this problem, this paper proposes an unsupervised feature fusion model (UFFM) for marine raft aquaculture semantic segmentation. Firstly, a pseudo-label generator is designed to label the training samples, generating a coarse mask by saliency feature clustering. The training samples with pseudo-labels are input into a multilevel feature fusion module, which further extracts and continuously refines the shapes and categories of the objects under the guidance of a cross-entropy loss. The pseudo-labels are optimised under continuous iteration to improve the model's segmentation performance. Comparison experiments on the GF-3 dataset demonstrate the effectiveness of UFFM.
+
+Index Terms-unsupervised learning, pseudo-label, SAR images, semantic segmentation
+
+## I. INTRODUCTION
+
+China has witnessed rapid growth in the scale and benefits of marine aquaculture development in recent years [1]. However, while the marine aquaculture industry has made significant progress, it also faces problems such as pollution around aquaculture waters, irrational layout of aquaculture, and excessive density of offshore aquaculture [2]. Synthetic aperture radar (SAR) has the advantage of all-weather operation and is unaffected by factors such as cloud cover, making it an essential tool for monitoring marine aquaculture. The backscattering of mariculture raft targets in SAR images is much stronger than that of the sea surface, which gives high contrast between the aquaculture rafts and the seawater background [3]. Researchers have adopted deep learning techniques to design various mariculture semantic segmentation methods to efficiently and accurately extract mariculture information [4].
+
+However, existing neural network models usually rely on a large amount of manually labeled data for training to obtain high-accuracy results. This approach faces two main problems: 1) the cost of obtaining high-quality manually labeled data is extremely high in complex scenarios and when dealing with massive remote sensing data, so a large amount of remote sensing data cannot be fully utilized; 2) the reliance on manual labelling as the only learning signal limits feature learning. Several studies have proposed unsupervised methods for extracting marine aquaculture information to address these challenges. Fan et al. [3] exploited the multi-source characteristics of floating rafts and combined neurodynamic optimization with a collective multi-kernel fuzzy C-means algorithm for unsupervised aquaculture classification. Wang et al. [5] designed an incremental dual unsupervised deep learning model based on alternating iterative optimization of pseudo-labels and segmentation results, which maintains and strengthens the edge semantic information of the pseudo-labels and effectively reduces the influence of speckle noise in SAR images. Subsequently, Zhou et al. [6] constructed an unsupervised semantic segmentation network for mariculture based on mutual information theory and the superpixel algorithm, which improves the continuity and spatial consistency of mariculture target extraction through global feature learning, pseudo-label generation, and optimization with a mutual information loss. However, the above unsupervised deep learning models mainly rely on single-area training data, which is difficult to generalize to intelligent image interpretation in wide-area and complex scenes.
+
+---
+
+This work was supported in part by the National Natural Science Foundation of China under Grant 42076184, Grant 41876109, and Grant 41706195; in part by the National Key Research and Development Program of China under Grant 2021YFC2801000; in part by the National High Resolution Special Research under Grant 41-Y30F07-9001-20/22; in part by the Fundamental Research Funds for the Central Universities under Grant DUT23RC(3)050; and in part by the Dalian High Level Talent Innovation Support Plan under Grant2021RD04. (Corresponding author: Jianchao Fan.)
+
+---
+
+With the emergence of the transformer [7], self-supervised representation learning models can exploit unlabeled remote sensing big data to address regional feature differences. A self-supervised transformer network can learn spatial features from a large amount of remote sensing data by constructing a pretext task and pre-training a vision transformer model, which can then be applied, via fine-tuning, to a variety of downstream tasks, e.g., change detection [8], classification [9], target detection [10], and semantic segmentation [11]. Fan et al. [12] established a self-supervised feature fusion transformer model that obtains the essential features of mariculture from a large number of unlabeled samples; by introducing a contrastive loss and a mask loss, it attends to the global and local features of aquaculture simultaneously, mitigating mutual interference among multiple targets and inter-class data imbalance, and realizing accurate segmentation of mariculture. However, although the self-supervised transformer model can rely on a large amount of unlabeled floating raft aquaculture data for information extraction in a single sea area, it still needs high-quality labeled data to fine-tune the downstream segmentation network.
+
+To solve the above problems, this paper applies the saliency information obtained from self-supervised representation learning to the downstream segmentation network and combines it with a multi-stage feature fusion module to further enhance the network's semantic segmentation performance. Specifically, a pseudo-label generator is first designed to generate saliency pseudo-labels. Then, a cross-entropy loss between the pseudo-labels and the semantic segmentation results output by the multilevel feature fusion module constrains the network and back-propagates parameter updates. The pseudo-labels are optimised through continuous iteration to further improve network segmentation performance.
+
+## II. RELATED WORK
+
+## A. Self-supervised feature learning
+
+Self-supervised learning mainly utilizes auxiliary tasks to mine supervised information from large-scale unlabeled data. It trains the network with this constructed supervision to learn representations valuable for downstream tasks. Common auxiliary tasks include contrastive learning, generative learning, and contrastive-generative methods, which design learning paradigms based on the characteristics of the data distribution to obtain better feature representations. However, these methods mainly target image classification and are therefore typically designed to generate a single global vector from an image. This leads to poor downstream results on densely predicted segmentation tasks, which then require high-quality ground-truth labels to fine-tune the model. The emergence of the self-supervised transformer, however, has made it possible to extract dense feature vectors without specialized dense contrastive learning methods, revealing hidden semantic relationships in images. In this paper, inspired by DINO [13], the saliency features from upstream training generate pseudo-labels for the training data to fine-tune the downstream segmentation network, constructing a fully unsupervised semantic segmentation model.
+
+## B. Unsupervised semantic segmentation
+
+Unsupervised semantic segmentation aims at class prediction for each pixel in an image without artificial labels. Ji et al. [14] proposed invariant information clustering (IIC), which ensures cross-view consistency by maximising the mutual information between neighbouring pixels of different views. Li et al. [15] constructed PiCIE, which learns invariance to photometric variations and equivariance to geometric variations by using them as an inductive bias. A limitation of this approach is that it works only on the MS COCO dataset, which does not distinguish between foreground and background classes. MaskContrast [16] first generates object masks using a DINO pre-trained ViT and then uses pixel-level embeddings obtained from a contrastive loss; however, the method applies only to saliency datasets. For the multi-stage paradigm, researchers have tried to utilise class activation maps (CAM) [17] to obtain initial pixel-level pseudo-labels, which are then refined using a teacher-student network. However, this loses features during training, decreasing segmentation accuracy. In this paper, to solve the above problems, Grad-CAM [18] is introduced in a multi-stage manner to generate pseudo-labels, and segmentation performance is improved by multi-scale feature fusion.
+
+## III. Method
+
+## A. Overall framework
+
+In the upstream task, a ViT [13] is pre-trained from scratch on a large amount of unlabeled marine aquaculture data; its weights ${\theta }_{t}$ initialize the downstream feature extraction network, which accelerates the convergence of the downstream segmentation network and is crucial for the model's extraction of salient features. The overall architecture designed for the downstream segmentation task is shown in Fig. 1. An input unlabeled marine aquaculture image, augmented by linear stretching and random rotation, is passed through two branches: a saliency pseudo-label generation branch, presented in III-B, and a multi-layer transformer feature fusion branch, presented in III-C. In network training, the supervisory loss ${\mathcal{L}}_{s}$ is the pixel-wise cross-entropy between the pseudo-label and the prediction:
+
+$$
+{\mathcal{L}}_{s} = \frac{1}{N}\mathop{\sum }\limits_{{i = 0}}^{{N - 1}}\text{CrossEntropy}\left( {{\widetilde{y}}_{i},{y}_{i}}\right) \tag{1}
+$$
+
+
+
+Fig. 1. Overview model of UFFM. (a) Obtaining saliency pseudo-label: Input the multi-head self-attention mechanism of the last layer feature map in the transformer block into Grad-CAM to obtain saliency patch features and generate saliency pseudo-label. (b) Obtaining segmentation results: The semantic information is enhanced using a multilayer transformer with PPM, and the semantic segmentation results with pseudo-labels are output by backpropagation after the loss computation. After continuous iterative updates, the network segmentation performance is improved.
+
+where $N$ denotes the number of pixels in the image $x \in$ ${\mathbb{R}}^{H \times W \times 3}$ , ${y}_{i} \in {\mathbb{R}}^{C}$ is the network's predicted probability for pixel $i$ , $C$ is the number of predicted classes, and ${\widetilde{y}}_{i} \in {\mathbb{R}}^{C}$ is the label class of pixel $i$ in the pseudo-label.
+
+During network training, the loss gradients are back-propagated to the feature extraction network; in particular, the weights of the two branches are shared and updated simultaneously. Through continuous iteration of the network, the pseudo-labels are updated, thus improving the segmentation performance of the network.
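A minimal sketch of the pixel-wise cross-entropy in (1). The helper name is hypothetical; pseudo-labels are given as class ids rather than one-hot vectors for simplicity, and pixels labelled 255 are excluded as background/ignore, which is one plausible reading of the 255 label produced later in Eq. (4).

```python
import numpy as np

def pixelwise_ce(y_pred, y_tilde, ignore_index=255):
    """Eq. (1): mean cross-entropy over the pixels of one image.

    y_pred: (N, C) predicted class probabilities; y_tilde: (N,) pseudo-label
    class ids (an assumption -- the paper writes the labels as vectors).
    Pixels labelled `ignore_index` are excluded from the mean.
    """
    keep = y_tilde != ignore_index
    probs = y_pred[keep, y_tilde[keep]]        # probability assigned to the labelled class
    return -np.mean(np.log(probs + 1e-12))     # epsilon guards against log(0)
```

In practice this would be a framework loss (e.g. a cross-entropy with an ignore index) applied per pixel; the sketch only makes the averaging in (1) explicit.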
+
+## B. Saliency pseudo-label generation
+
+In unsupervised tasks, the design of pseudo-labels is crucial. A simple approach is to apply confidence thresholds and output the results directly as pseudo-labels. However, this approach is unsatisfactory on complex data and produces poor results. To solve this problem, Grad-CAM, a variant of the class activation map, is used in this paper to generate saliency-discriminative pseudo-labels by stepwise refinement of the target localisation. Given an image $x$ , a sequence of patch embeddings ${x}_{\text{patch }} \in {\mathbb{R}}^{P \times D}$ is generated, where $P$ is the number of patches and $D$ is the output dimension. Then, the class token ${x}_{CLS} \in {\mathbb{R}}^{1 \times D}$ and a position embedding $\mathrm{P}$ are added to the concatenated inputs. The input sequence ${z}_{0}$ of the ViT is therefore described as:
+
+$$
+{z}_{0} = \left\lbrack {{x}_{\text{patch }},{x}_{CLS}}\right\rbrack + \mathrm{P} \tag{2}
+$$
+
+After that, the last layer of features is obtained through multiple transformer encoder layers, and the saliency feature map is computed using Grad-CAM. The $k$ patches whose embedded features have the largest absolute gradient values are selected as the salient patches; finally, a binarisation is performed that marks the $k$ salient patches as 0 and the rest as 255. The generated saliency pseudo-label $\widetilde{y}$ is written as:
+
+$$
+{g}_{k} = \operatorname{Sum}\left| \frac{\partial L\left( {f\left( x\right) , y}\right) }{\partial {x}_{\text{patch }}^{k}}\right| \tag{3}
+$$
+
+$$
+\widetilde{y} = \left\{ \begin{array}{l} 0,\text{ if }{g}_{k}\text{ in topk }\mathrm{G} \\ {255},\text{ otherwise } \end{array}\right. \tag{4}
+$$
+
+where $\mathrm{G} = \left\{ {{g}_{1},{g}_{2},\ldots ,{g}_{K}}\right\}$ is the saliency map of the patches ${x}_{\text{patch }} = \left\{ {{x}_{\text{patch }}^{1},\ldots ,{x}_{\text{patch }}^{K}}\right\}$ , and topk is the set of selected salient patches.
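Eqs. (3)-(4) can be sketched as follows. The helper name is hypothetical, and `patch_grads` stands in for the gradient $\partial L / \partial {x}_{\text{patch}}^{k}$ that Grad-CAM would supply.

```python
import numpy as np

def saliency_pseudo_label(patch_grads, k):
    """Eqs. (3)-(4): top-k gradient saliency -> binary pseudo-label.

    patch_grads: (P, D) gradient of the loss w.r.t. each patch embedding.
    The k most salient patches are marked 0 (raft), all others 255.
    """
    g = np.abs(patch_grads).sum(axis=1)       # Eq. (3): g_k = Sum|dL/dx_patch^k|
    topk = np.argsort(g)[-k:]                 # indices of the k largest saliencies
    label = np.full(g.shape[0], 255, dtype=np.uint8)
    label[topk] = 0                           # Eq. (4)
    return label
```

The resulting per-patch labels are then broadcast back to pixel resolution to form the coarse mask supervising the segmentation branch.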
+
+## C. Multi-stage feature fusion
+
+The segmentation decoder consists of a pyramid pooling module (PPM) and a multi-scale feature pyramid, enabling the network to better capture contextual semantic information. Firstly, three feature maps $\left\{ {{V}_{2},{V}_{3},{V}_{4}}\right\}$ are generated by the transformer encoder; since the chosen model is the base ViT, the output feature vectors are all the same size. The last lateral feature ${L}_{5}$ is generated from the last feature map ${V}_{5}$ through the PPM module. The FPN sub-network then proceeds top-down to obtain ${\mathrm{F}}_{i} =$ ${\mathrm{L}}_{i} + {\mathrm{{UP}}}_{2}\left( {\mathrm{\;F}}_{i + 1}\right) , i = \{ 2,3,4\}$ , where Up denotes bilinear upsampling. The FPN then applies the convolutional block ${h}_{i}$ to obtain each output ${\mathrm{P}}_{i}$ . The final feature fusion of the FPN outputs requires bilinear upsampling of each ${P}_{i}$ so that they share the same spatial size; they are then concatenated along the channel dimension and fused by the convolutional unit block $h$ :
+
+$$
+\mathrm{Z} = h\left( \left\lbrack {{P}_{2};{\mathrm{{UP}}}_{2}\left( {P}_{3}\right) ;{\mathrm{{UP}}}_{4}\left( {P}_{4}\right) ;{\mathrm{{UP}}}_{8}\left( {P}_{5}\right) }\right\rbrack \right) \tag{5}
+$$
+
+
+
+Fig. 2. Visual comparison of raft marine aquaculture segmentation on the GF-3 dataset. (a) original images. (b) ground-truth labels. (c) IIC. (d) PiCIE. (e) IDUDL. (f) UFFM.
+
+The fused feature $\mathrm{Z}$ is then subjected to a $1 \times 1$ convolution and $4 \times$ bilinear upsampling to obtain the final prediction $y$ .
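A shape-level sketch of the fusion in (5). Nearest-neighbor upsampling stands in for the paper's bilinear Up, and a toy channel mean replaces the convolutional unit block $h$; the point is only the resolution alignment and channel concatenation.

```python
import numpy as np

def up(x, s):
    """Nearest-neighbor upsampling by factor s (a stand-in for bilinear Up)."""
    return np.kron(x, np.ones((1, s, s)))

def fuse(P2, P3, P4, P5):
    """Eq. (5): upsample P3/P4/P5 to P2's resolution, concatenate along the
    channel axis, and fuse with a block h (here a toy channel mean)."""
    z = np.concatenate([P2, up(P3, 2), up(P4, 4), up(P5, 8)], axis=0)
    return z.mean(axis=0, keepdims=True)  # toy h: collapses channels
```

Feeding pyramid levels of shapes `(C, 8, 8)`, `(C, 4, 4)`, `(C, 2, 2)`, `(C, 1, 1)` yields a fused map at the finest resolution, mirroring the $2\times$, $4\times$, $8\times$ factors in (5).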
+
+## IV. EXPERIMENTAL RESULTS
+
+## A. Experiment Setup and Datasets
+
+All experiments are conducted in PyTorch 1.8.1 on an Intel Xeon Platinum 8255C CPU with a clock speed of 2.5 GHz and an Nvidia GeForce RTX 3090 GPU. The data augmentation strategy is consistent with DINO [13]. A ViT-S/16 model [7] trained with the self-distillation loss is used to extract features from the patches. The learning rate is set to 0.05, and a stochastic gradient descent (SGD) optimiser with a momentum of 0.9 is used. The encoder uses ViT as the main network. The decoder uses the UPerHead architecture to receive features from all levels of the encoder and generate the final prediction through pooling and upsampling operations, while the auxiliary head uses the FCNHead architecture to receive features from specific encoder layers.
+
+The study area is located in the seawater aquaculture zone of Changhai County, China. The remote sensing images were preprocessed with radiometric calibration and geographic correction, and images in horizontal-horizontal (HH) polarisation mode are selected as the experimental data. The images are then cropped to ${512} \times {512}$ pixels. The self-supervised pre-training set contains more than 13,000 GF-3 images, the downstream training set contains 369 images, and the test set contains 160 images.
+
+## B. Evaluation Metrics
+
+In SAR images, coherent speckle noise on raft aquaculture targets produces a large number of isolated noise points, which affects the accurate extraction of raft aquaculture targets. Therefore, multiple evaluation metrics are used in this paper to evaluate the segmentation results. Following IDUDL, the metrics comprise mean intersection over union (${mIoU}$), Kappa coefficient ($K$), overall accuracy (${OA}$), precision ($P$), recall ($R$), and F1 score (${F}_{1}$).
+
+Here, ${mIoU}$ evaluates the average overlap between the predicted and ground-truth pixel categories, giving a better evaluation of the semantic continuity and consistency of the model predictions. $K$ accounts for chance agreement when evaluating consistency. ${OA}$ is the proportion of correctly predicted pixels among all pixels, reflecting global accuracy. $P$ denotes the proportion of true raft pixels among those the model predicts as rafts. $R$ represents the ability of the model to find all positive samples. ${F}_{1}$ balances $P$ and $R$ .
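For the two-class raft/sea case these metrics reduce to confusion-matrix arithmetic, which can be sketched as follows (hypothetical helper; `tp`/`fp`/`fn`/`tn` are pixel counts with rafts as the positive class):

```python
def binary_metrics(tp, fp, fn, tn):
    """Metrics of Sec. IV-B for the two-class (raft vs. sea) case."""
    n = tp + fp + fn + tn
    iou_raft = tp / (tp + fp + fn)            # IoU of the raft class
    iou_sea = tn / (tn + fp + fn)             # IoU of the sea class
    miou = (iou_raft + iou_sea) / 2
    oa = (tp + tn) / n                        # overall accuracy
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    p = tp / (tp + fp)                        # precision
    r = tp / (tp + fn)                        # recall
    f1 = 2 * p * r / (p + r)
    return miou, kappa, oa, p, r, f1
```

For example, 50 true raft pixels, 10 false alarms, 10 misses, and 30 true sea pixels give $OA = 0.8$ and $P = R = F_1 \approx 0.833$.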
+
+TABLE I
+
+QUANTITATIVE COMPARISON OF PROPOSED WITH OTHER UNSUPERVISED DEEP LEARNING METHODS ON THE SAME DATASET. THE BEST RESULTS ARE HIGHLIGHTED AS BOLD.
+
+| Methods | mIoU | Kappa | OA(%) | $P\left( \% \right)$ | $R\left( \% \right)$ | F1 |
+| --- | --- | --- | --- | --- | --- | --- |
+| IIC [14] | 0.4613 | 0.2375 | 70.95 | 72.76 | 89.60 | 0.8063 |
+| PiCIE [15] | 0.4905 | 0.3504 | 68.73 | 80.98 | 70.60 | 0.7198 |
+| IDUDL [5] | 0.6102 | 0.5364 | 78.46 | 83.07 | **91.34** | 0.8130 |
+| UFFM | **0.6371** | **0.5890** | **79.44** | **91.74** | 75.30 | **0.8371** |
+
+## C. Comparison Results for Semantic Segmentation
+
+Two classical unsupervised deep learning methods, IIC [14] and PiCIE [15], and IDUDL [5], an unsupervised deep learning model specifically designed for marine aquaculture, are selected for comparison. The semantic segmentation results of the different methods are shown in Table I. The results show that the proposed method improves ${mIoU}$ by 0.0269 compared with IDUDL, while $P$ increases by ${8.67}\%$ .
+
+The visualisation results are shown in Fig. 2, where the proposed method exhibits better continuity and reduces the interference of speckle noise. Speckle noise in SAR images produces many bright noise pixels that degrade the segmentation results. The mutual-information method in IIC can enhance the correlation between similar samples, but the noisy pixels remain strongly correlated with the target pixels, so a large number of noisy pixels cannot be removed from the segmentation results. PiCIE maintains semantic consistency through geometric and photometric invariance, but many misclassifications occur. IDUDL can extract semantic features, overcome many noisy pixels, and delineate the raft boundaries better; however, its lack of global information leads to many missed detections. Sample (2) shows that the proposed method reduces the missed detections within rafts compared with IDUDL.
+
+## V. Conclusion
+
+This paper proposes a new unsupervised feature fusion model, UFFM, for marine raft aquaculture semantic segmentation based on SAR images. Saliency features obtained from representation learning are used by the pseudo-label generator to produce saliency pseudo-labels. During network training, multi-stage feature fusion is designed to enhance the semantic information, the extraction of raft aquaculture target boundaries, and semantic continuity. The experimental results show that UFFM can effectively reduce omissions and misjudgments of raft aquaculture targets.
+
+## REFERENCES
+
+[1] Junjie Wang, Arthur HW Beusen, Xiaochen Liu, and Alexander F Bouwman. Aquaculture production is a large, spatially concentrated source of nutrients in Chinese freshwater and coastal seas. Environmental Science & Technology, 54(3):1464-1474, 2019.
+
+[2] Marco Ottinger, Kersten Clauss, and Claudia Kuenzer. Aquaculture: Relevance, distribution, impacts and spatial assessments-a review. Ocean & Coastal Management, 119:244-266, 2016.
+
+[3] Jianchao Fan, Jianhua Zhao, Wentao An, and Yuanyuan Hu. Marine floating raft aquaculture detection of GF-3 PolSAR images based on collective multikernel fuzzy clustering. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(8):2741-2754, 2019.
+
+[4] Wantai Chen and Xiaofeng Li. Deep-learning-based marine aquaculture zone extractions from Dual-Polarimetric SAR imagery. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 17:8043-8057, 2024.
+
+[5] Xinzhe Wang, Jianlin Zhou, and Jianchao Fan. IDUDL: Incremental double unsupervised deep learning model for marine aquaculture SAR images segmentation. IEEE Transactions on Geoscience and Remote Sensing, 60:1-12, 2022.
+
+[6] Jianlin Zhou, Mengmeng Li, Xinzhe Wang, and Jianchao Fan. Unsupervised mutual information and superpixel constraints in SAR marine aquaculture extraction. pages 1-5, 2023.
+
+[7] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
+
+[8] Yuxiang Zhang, Yang Zhao, Yanni Dong, and Bo Du. Self-supervised pretraining via multimodality images with transformer for change detection. IEEE Transactions on Geoscience and Remote Sensing, 61:1-11, 2023.
+
+[9] Lilin Tu, Jiayi Li, Xin Huang, Jianya Gong, Xing Xie, and Leiguang Wang. S2hm2: A spectral-spatial hierarchical masked modeling framework for self-supervised feature learning and classification of large-scale hyperspectral images. IEEE Transactions on Geoscience and Remote Sensing, 62:1-19, 2024.
+
+[10] Xi Chen, Yuxiang Zhang, Yanni Dong, and Bo Du. Generative self-supervised learning with spectral-spatial masking for hyperspectral target detection. IEEE Transactions on Geoscience and Remote Sensing, 62:1-13, 2024.
+
+[11] Zaiyi Hu, Junyu Gao, Yuan Yuan, and Xuelong Li. Contrastive tokens and label activation for remote sensing weakly supervised semantic segmentation. IEEE Transactions on Geoscience and Remote Sensing, 62:1-11, 2024.
+
+[12] Jianchao Fan, Jianlin Zhou, Xinzhe Wang, and Jun Wang. A self-supervised transformer with feature fusion for SAR image semantic segmentation in marine aquaculture monitoring. IEEE Transactions on Geoscience and Remote Sensing, 61:1-15, 2023.
+
+[13] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jegou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. 2021 IEEE/CVF International Conference on Computer Vision, pages 9630-9640, 2021.
+
+[14] Xu Ji, Andrea Vedaldi, and Joao Henriques. Invariant information clustering for unsupervised image classification and segmentation. 2019 IEEE/CVF International Conference on Computer Vision, pages 9864-9873, 2019.
+
+[15] Jang Hyun Cho, Utkarsh Mall, Kavita Bala, and Bharath Hariharan. PiCIE: Unsupervised semantic segmentation using invariance and equivariance in clustering. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16789-16799, 2021.
+
+[16] Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, and Luc Van Gool. Unsupervised semantic segmentation by contrasting object mask proposals. 2021 IEEE/CVF International Conference on Computer Vision, pages 10032-10042, 2021.
+
+[17] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2921-2929, 2016.
+
+[18] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. Proceedings of the IEEE International Conference on Computer Vision, pages 618-626, 2017.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/4T963GENPI/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/4T963GENPI/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..89194db282a425942a17ed57f816f4930f309ca9
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/4T963GENPI/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,155 @@
+§ UNSUPERVISED FEATURE FUSION MODEL FOR MARINE RAFT AQUACULTURE SEMANTIC SEGMENTATION BASED ON SAR IMAGES
+
+${1}^{\text{ st }}$ Mengmeng Li
+
+School of Information Science and Engineering
+
+Dalian Polytechnic University
+
+Dalian, China
+
+220520854000601@xy.dlpu.edu.cn
+
+${2}^{\text{ nd }}$ Xinzhe Wang
+
+School of Information Science and Engineering
+
+Dalian Polytechnic University
+
+Dalian, China
+
+wxzagm@dlpu.edu.cn
+
+${3}^{\text{ rd }}$ Jianchao Fan *
+
+School of Control Science and Engineering
+
+Dalian University of Technology
+
+Dalian, China
+
+fjchao@dlut.edu.cn
+
+Abstract-Marine aquaculture semantic segmentation provides a scientific basis for marine regulation and plays an important role in marine ecological protection and management. Currently, most high-performance marine aquaculture segmentation networks are trained with supervised learning. This approach requires collecting a large number of accurate manually labelled samples for training, but such labelled samples are difficult to obtain. To solve this problem, this paper proposes an unsupervised feature fusion model (UFFM) for marine raft aquaculture semantic segmentation. First, a pseudo-label generator is designed to label the training samples, and a coarse mask is generated using saliency feature clustering. The training samples with pseudo-labels are input into a multilevel feature fusion module to further extract and continuously refine the shapes and categories of the objects under the guidance of a cross-entropy loss. The pseudo-labels are optimised through continuous iteration to improve the segmentation performance. Comparison experiments on the GF-3 dataset demonstrate the effectiveness of UFFM.
+
+Index Terms-unsupervised learning, pseudo-label, SAR images, semantic segmentation
+
+§ I. INTRODUCTION
+
+China has witnessed rapid growth in the scale and benefits of marine aquaculture in recent years [1]. However, while the marine aquaculture industry has made significant progress, it also faces problems such as pollution around aquaculture waters, irrational layout of aquaculture, and excessive density of offshore aquaculture [2]. Synthetic aperture radar (SAR) offers all-weather imaging unaffected by factors such as cloud cover, making it an essential tool for monitoring marine aquaculture. The backscattering of mariculture raft targets in SAR images is much stronger than that of the sea surface, so aquaculture rafts appear in high contrast against the seawater background [3]. Researchers have adopted deep learning techniques to design various mariculture semantic segmentation methods to extract mariculture information efficiently and accurately [4].
+
+However, existing neural network models usually rely on a large amount of manually labelled data for training to obtain high-accuracy results. This approach faces two main problems: 1) the cost of obtaining high-quality manual labels is extremely high in complex scenarios and when dealing with massive remote sensing data, so a large amount of remote sensing data cannot be fully utilised; 2) the reliance on manual labelling as the only learning signal limits feature learning. Several studies have proposed unsupervised methods for extracting marine aquaculture information to address these challenges. Fan et al. [3] exploited the multi-source characteristics of floating rafts and combined neurodynamic optimisation with a collective multikernel fuzzy C-means algorithm for unsupervised aquaculture classification. Wang et al. [5] designed an incremental double unsupervised deep learning model that alternately iterates and optimises pseudo-labels and segmentation results, maintaining and strengthening the edge semantic information of the pseudo-labels and effectively reducing the influence of coherent speckle noise in SAR images. Subsequently, Zhou et al. [6] constructed an unsupervised semantic segmentation network for mariculture based on mutual information theory and a superpixel algorithm, which improves the continuity and spatial consistency of mariculture target extraction through global feature learning, pseudo-label generation, and optimisation with a mutual information loss. However, the above unsupervised deep learning models mainly rely on training data from a single area, which is difficult to generalise to intelligent image interpretation in wide-area and complex scenes.
+
+This work was supported in part by the National Natural Science Foundation of China under Grant 42076184, Grant 41876109, and Grant 41706195; in part by the National Key Research and Development Program of China under Grant 2021YFC2801000; in part by the National High Resolution Special Research under Grant 41-Y30F07-9001-20/22; in part by the Fundamental Research Funds for the Central Universities under Grant DUT23RC(3)050; and in part by the Dalian High Level Talent Innovation Support Plan under Grant2021RD04. (Corresponding author: Jianchao Fan.)
+
+With the emergence of the transformer [7], self-supervised representation learning models can exploit unlabelled remote sensing big data to address regional feature differences. A self-supervised transformer network can learn spatial features from a large amount of remote sensing data by constructing a pretext task and pre-training a vision transformer model, which can then be fine-tuned for a variety of downstream tasks, e.g., change detection [8], classification [9], target detection [10], and semantic segmentation [11]. Fan et al. [12] established a self-supervised feature fusion transformer model that learns the essential features of mariculture from a large number of unlabelled samples; by introducing a contrastive loss and a mask loss, it attends to both the global and local features of aquaculture, mitigating mutual interference among multiple targets and class imbalance in the data, and achieving accurate segmentation of mariculture. However, although the self-supervised transformer model can rely on a large amount of unlabelled floating raft aquaculture data to extract information for a single sea area, it still needs high-quality labelled data to fine-tune the downstream segmentation network.
+
+To solve the above problems, this paper applies the saliency information obtained from self-supervised representation learning to the downstream segmentation network and combines it with a multi-stage feature fusion module to further enhance the network's semantic segmentation performance. Specifically, a pseudo-label generator is first designed to generate saliency pseudo-labels. Then, a cross-entropy loss is computed between the pseudo-labels and the semantic segmentation results output by the multilevel feature fusion module, constraining the network and back-propagating gradients to its parameters. The pseudo-labels are optimised through continuous iteration to further improve the network's segmentation performance.
+
+§ II. RELATED WORK
+
+§ A. SELF-SUPERVISED FEATURE LEARNING
+
+Self-supervised learning mainly uses auxiliary tasks to mine supervisory information from large-scale unlabelled data and trains the network with this constructed supervision to learn representations useful for downstream tasks. Common auxiliary tasks include contrastive learning, generative learning, and contrastive-generative methods that design learning paradigms based on the characteristics of the data distribution to obtain better feature representations. However, these methods mainly target image classification and are therefore typically designed to produce a single global vector per image. This leads to poor downstream results on densely predicted segmentation tasks, which then require fine-tuning with high-quality ground-truth labels. The emergence of the self-supervised transformer, however, has made it possible to extract dense feature vectors that reveal hidden semantic relationships in images without specialised dense contrastive learning methods. In this paper, inspired by DINO [13], the image saliency features from upstream training generate pseudo-labels for the training data to fine-tune the downstream segmentation network, yielding a fully unsupervised semantic segmentation model.
+
+§ B. UNSUPERVISED SEMANTIC SEGMENTATION
+
+Unsupervised semantic segmentation aims at class prediction for each pixel in an image without manual labels. Ji et al. [14] proposed invariant information clustering (IIC), which ensures cross-view consistency by maximising the mutual information between corresponding pixels of different views. Cho et al. [15] constructed PiCIE, which uses geometric consistency as an inductive bias to learn invariance and equivariance to photometric and geometric variations. A limitation of this approach is that it only works on the MS COCO dataset, which does not distinguish between foreground and background classes. MaskContrast [16] first generates object masks using a DINO pre-trained ViT and then learns pixel-level embeddings with a contrastive loss; however, that method can only be applied to saliency datasets. For the multi-stage paradigm, researchers have tried to use class activation maps (CAM) [17] to obtain initial pixel-level pseudo-labels, which are then refined with a teacher-student network; however, this loses features during training and decreases segmentation accuracy. In this paper, to solve the above problems, Grad-CAM [18] is introduced in a multi-stage manner to generate pseudo-labels, and segmentation performance is improved through multi-scale feature fusion.
+
+§ III. METHOD
+
+§ A. OVERALL FRAMEWORK
+
+In the upstream task, a large amount of unlabelled marine aquaculture data is used to train a ViT [13] from scratch, yielding pre-trained weights ${\theta }_{t}$ that initialise the downstream feature extraction network; using these pre-training weights accelerates the convergence of the downstream segmentation network and is crucial for the model's extraction of salient features. The overall architecture designed for the downstream segmentation task is shown in Fig. 1. An input unlabelled marine aquaculture image is first augmented by linear stretching and random rotation. The augmented image then passes through two branches: a saliency pseudo-label generation branch, presented in III-B, and a multi-layer transformer feature fusion branch, presented in III-C. In network training, the supervision loss ${\mathcal{L}}_{s}$ is the pixel-wise cross-entropy between the pseudo-label and the prediction:
+
+$$
+{\mathcal{L}}_{s} = \frac{1}{N}\mathop{\sum }\limits_{{i = 0}}^{{N - 1}}\text{ CrossEntropy }\left( {{\widetilde{y}}_{i},{y}_{i}}\right) \tag{1}
+$$
+
+
+Fig. 1. Overview model of UFFM. (a) Obtaining saliency pseudo-label: Input the multi-head self-attention mechanism of the last layer feature map in the transformer block into Grad-CAM to obtain saliency patch features and generate saliency pseudo-label. (b) Obtaining segmentation results: The semantic information is enhanced using a multilayer transformer with PPM, and the semantic segmentation results with pseudo-labels are output by backpropagation after the loss computation. After continuous iterative updates, the network segmentation performance is improved.
+
+where $N$ denotes the number of pixels in the image $x \in$ ${\mathbb{R}}^{H \times W \times 3}$ and ${y}_{i} \in {\mathbb{R}}^{C}$ is the network’s prediction probability for pixel $i$ , where $C$ is the number of predicted classes and ${\widetilde{y}}_{i} \in {\mathbb{R}}^{C}$ is the labelling class of pixel $i$ in the pseudo-label.
+
+During network training, the loss gradients are back-propagated to the feature extraction network; in particular, the weights shared by the two branches are updated simultaneously. Through continuous iteration of the network, the pseudo-labels are updated, thus improving the segmentation performance of the network.
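As a reading aid, the supervision loss of Eq. (1) can be sketched in plain NumPy. Treating pixels labelled 255 as ignored is our assumption, matching the 255 background marker of Eq. (4); shapes and values are illustrative:

```python
import numpy as np

# Sketch of L_s in Eq. (1): mean pixel-wise cross-entropy between the
# predicted class probabilities y_i and the integer pseudo-label map ~y_i.
# Label 255 is treated as "ignore" (our assumption, cf. Eq. (4)).

def pixel_ce(probs, pseudo, ignore=255):
    """probs: (H, W, C) softmax outputs; pseudo: (H, W) integer labels."""
    losses = []
    for i in range(probs.shape[0]):
        for j in range(probs.shape[1]):
            c = pseudo[i, j]
            if c == ignore:              # pixels marked 255 carry no loss
                continue
            losses.append(-np.log(probs[i, j, c]))
    return float(np.mean(losses))

probs = np.full((2, 2, 2), 0.5)          # uniform predictions over 2 classes
pseudo = np.array([[0, 1], [255, 0]])    # one ignored pixel
print(pixel_ce(probs, pseudo))           # -> ln(2) ≈ 0.6931
```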
+
+§ B. SALIENCY PSEUDO-LABEL GENERATION
+
+In unsupervised tasks, the design of pseudo-labels is crucial. A simple approach is to apply a confidence threshold and output the results directly as pseudo-labels. However, this approach handles complex data poorly and produces unsatisfactory results. To solve this problem, this paper uses Grad-CAM, a gradient-based variant of class activation mapping, to generate saliency-discriminative pseudo-labels by progressively refining coarse target localisation. Given an image $x$ , a sequence of patch embeddings ${x}_{\text{ patch }} \in {\mathbb{R}}^{P \times D}$ is generated, where $P$ is the number of patches and $D$ is the output dimension. Then, a class token ${x}_{CLS} \in {\mathbb{R}}^{1 \times D}$ is concatenated and a position embedding $\mathrm{P}$ is added. The input sequence ${z}_{0}$ of the ViT is therefore described as:
+
+$$
+{z}_{0} = \left\lbrack {{x}_{\text{ patch }},{x}_{CLS}}\right\rbrack + \mathrm{P} \tag{2}
+$$
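Concretely, the sequence construction in Eq. (2) amounts to concatenating the patch embeddings with the CLS token and adding a position embedding of matching length; the sizes below ( $P = 4$ patches, $D = 8$ channels) are arbitrary illustrative choices:

```python
import numpy as np

# Building z_0 of Eq. (2): concatenate P patch embeddings with the CLS
# token, then add the position embedding. Sizes are illustrative only.

P, D = 4, 8
x_patch = np.random.randn(P, D)              # patch embedding sequence
x_cls = np.zeros((1, D))                     # class (CLS) token
pos = np.random.randn(P + 1, D)              # position embedding

z0 = np.concatenate([x_patch, x_cls], axis=0) + pos
print(z0.shape)                              # -> (5, 8)
```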
+
+After that, the features of the last layer are obtained through multiple transformer encoder layers, and the saliency feature map is computed using Grad-CAM. The $k$ patches whose embedded features have the largest absolute gradient values are selected as the salient patches, and finally a binarisation marks these $k$ salient patches as 0 and the rest as 255. The generated saliency pseudo-label $\widetilde{y}$ is written as:
+
+$$
+{g}_{k} = \operatorname{Sum}\left| \frac{\partial L\left( {f\left( x\right) ,y}\right) }{\partial {x}_{\text{ patch }}^{k}}\right| \tag{3}
+$$
+
+$$
+\widetilde{y} = \left\{ \begin{array}{l} 0,\text{ if }{g}_{k}\text{ in topk }\mathrm{G} \\ {255},\text{ otherwise } \end{array}\right. \tag{4}
+$$
+
+where $\mathrm{G} = \left\{ {{g}_{1},{g}_{2},\ldots ,{g}_{K}}\right\} \in {\mathbb{R}}^{K}$ is the saliency map of the patches ${x}_{\text{ patch }} = \left\{ {{x}_{\text{ patch }}^{1},\ldots ,{x}_{\text{ patch }}^{K}}\right\}$ and topk is the set of selected salient patches.
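A minimal sketch of Eqs. (3)-(4): patches are ranked by the summed absolute gradient of the loss with respect to their embeddings, and the top-k patches are marked 0 (salient) while the rest receive 255. The gradient values below are synthetic; in UFFM they come from Grad-CAM:

```python
import numpy as np

# Saliency pseudo-label generation, Eqs. (3)-(4): g_k is the summed
# absolute gradient per patch; the top-k patches become label 0, the
# rest 255. Synthetic gradients stand in for Grad-CAM here.

def saliency_pseudo_label(patch_grads, k):
    """patch_grads: (P, D) gradients w.r.t. each patch embedding."""
    g = np.abs(patch_grads).sum(axis=1)          # g_k of Eq. (3)
    topk = np.argsort(g)[-k:]                    # k most salient patches
    label = np.full(g.shape[0], 255, dtype=int)  # Eq. (4): 255 otherwise
    label[topk] = 0                              # 0 for salient patches
    return label

grads = np.array([[0.1, -0.2], [1.0, 0.5], [0.0, 0.05], [-0.7, 0.6]])
print(saliency_pseudo_label(grads, k=2).tolist())  # -> [255, 0, 255, 0]
```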
+
+§ C. MULTI-STAGE FEATURE FUSION
+
+The segmentation decoder consists of a pyramid pooling module (PPM) and a multi-scale feature pyramid, enabling the network to better capture contextual semantic information. First, three feature maps $\left\{ {{V}_{2},{V}_{3},{V}_{4}}\right\}$ are taken from the transformer encoder; their output feature vectors are the same size because the base ViT model is used. The last lateral feature ${L}_{5}$ is generated from the final feature map ${V}_{5}$ through the PPM module. The FPN sub-network then follows a top-down pathway to obtain ${\mathrm{F}}_{i} = {\mathrm{L}}_{i} + {\mathrm{{UP}}}_{2}\left( {\mathrm{\;F}}_{i + 1}\right) ,i = \{ 2,3,4\}$ , where $\mathrm{{UP}}$ denotes bilinear upsampling. The FPN then applies a convolutional block ${h}_{i}$ to each level to obtain the outputs ${\mathrm{P}}_{i}$ . The final feature fusion of the FPN outputs bilinearly upsamples each ${\mathrm{P}}_{i}$ so that they share the same spatial size; they are then concatenated along the channel dimension and fused by a convolutional block $h$ :
+
+$$
+\mathrm{Z} = h\left( \left\lbrack {{P}_{2};{\mathrm{{UP}}}_{2}\left( {P}_{3}\right) ;{\mathrm{{UP}}}_{4}\left( {P}_{4}\right) ;{\mathrm{{UP}}}_{8}\left( {P}_{5}\right) }\right\rbrack \right) \tag{5}
+$$
+
+
+Fig. 2. Visual comparison of raft marine aquaculture segmentation on the GF-3 dataset. (a) original images. (b) ground-truth labels. (c) IIC. (d) PiCIE. (e) IDUDL. (f) UFFM.
+
+The fused feature $\mathrm{Z}$ is then subjected to $1 \times 1$ convolution and $4 \times$ bilinear upsampling to obtain the final prediction $y$ .
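The top-down fusion described above can be sketched as follows. This is a data-flow illustration only: 2x nearest-neighbour upsampling stands in for the bilinear upsampling of the paper, and summation stands in for the channel concatenation plus the convolutional blocks ${h}_{i}$ and $h$ :

```python
import numpy as np

# Sketch of the top-down FPN pathway and the fusion of Eq. (5).
# Nearest-neighbour upsampling replaces bilinear, and identity/sum
# replace the conv blocks h_i and h, so only shapes and flow are real.

def up2(x):
    return x.repeat(2, axis=0).repeat(2, axis=1)   # 2x spatial upsampling

def fuse(laterals):
    """laterals: [L2, L3, L4, L5], fine -> coarse (L5 comes from the PPM)."""
    feats = [laterals[-1]]                         # F5 = L5
    for lat in reversed(laterals[:-1]):            # F_i = L_i + UP2(F_{i+1})
        feats.append(lat + up2(feats[-1]))
    p2, p3, p4, p5 = reversed(feats)               # P_i = h_i(F_i), h_i = id
    # Eq. (5): upsample every level to P2's resolution, then fuse.
    return p2 + up2(p3) + up2(up2(p4)) + up2(up2(up2(p5)))

laterals = [np.ones((8, 8)), np.ones((4, 4)), np.ones((2, 2)), np.ones((1, 1))]
z = fuse(laterals)
print(z.shape)                                     # -> (8, 8)
```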
+
+§ IV. EXPERIMENTAL RESULTS
+
+§ A. EXPERIMENT SETUP AND DATASETS
+
+All experiments are conducted in PyTorch 1.8.1 on an Intel Xeon Platinum 8255C CPU with a clock speed of 2.5 GHz and an Nvidia GeForce RTX 3090 GPU. The data augmentation strategy is consistent with DINO [13]. A ViT-S/16 model [7] trained with the DINO self-distillation loss is used to extract features from the patches. The learning rate is set to 0.05, and a stochastic gradient descent (SGD) optimiser with a momentum of 0.9 is used. The encoder uses ViT as the backbone network. The decoder uses the UPerHead architecture, which receives features from all levels of the encoder and generates the final prediction through pooling and upsampling operations; an auxiliary head with the FCNHead architecture receives features from specific encoder layers.
+
+The study area is located in the seawater aquaculture zone of Changhai County, China. The remote sensing images were preprocessed with radiometric calibration and geographic correction, and images in the horizontal-horizontal (HH) polarisation mode were selected as the experimental data. The images were subsequently cropped to ${512} \times {512}$ pixels. The GF-3 self-supervised pre-training set contains more than 13,000 images, the downstream training set contains 369 images, and the test set contains 160 images.
+
+§ B. EVALUATION METRICS
+
+In SAR images, coherent speckle noise strongly affects raft aquaculture targets, producing a large number of isolated noise points that hinder the accurate extraction of raft aquaculture targets. Therefore, this paper uses multiple evaluation metrics, following IDUDL, to evaluate the segmentation results: mean intersection over union ( ${mIoU}$ ), Kappa coefficient ( $K$ ), overall accuracy ( ${OA}$ ), precision ( $P$ ), recall ( $R$ ), and F1 score $\left( {F}_{1}\right)$ .
+
+Here ${mIoU}$ measures the average overlap between the predicted and ground-truth pixel categories, which makes it a good indicator of the semantic continuity and consistency of the model predictions. $K$ (the Kappa coefficient) accounts for chance agreement when evaluating consistency. ${OA}$ is the proportion of all pixels that are classified correctly, reflecting global accuracy. $P$ (precision) is the proportion of pixels predicted as raft that are truly raft. $R$ (recall) measures the model's ability to find all positive samples. ${F}_{1}$ balances $P$ and $R$ as their harmonic mean.
+
+TABLE I
+
+QUANTITATIVE COMPARISON OF THE PROPOSED METHOD WITH OTHER UNSUPERVISED DEEP LEARNING METHODS ON THE SAME DATASET. THE BEST RESULTS ARE HIGHLIGHTED IN BOLD.
+
+| Methods | mIoU | Kappa | OA(%) | $P\left( \% \right)$ | $R\left( \% \right)$ | ${F}_{1}$ |
+| --- | --- | --- | --- | --- | --- | --- |
+| IIC [14] | 0.4613 | 0.2375 | 70.95 | 72.76 | 89.60 | 0.8063 |
+| PiCIE [15] | 0.4905 | 0.3504 | 68.73 | 80.98 | 70.60 | 0.7198 |
+| IDUDL [5] | 0.6102 | 0.5364 | 78.46 | 83.07 | **91.34** | 0.8130 |
+| UFFM | **0.6371** | **0.5890** | **79.44** | **91.74** | 75.30 | **0.8371** |
+
+§ C. COMPARISON RESULTS FOR SEMANTIC SEGMENTATION
+
+Two classical unsupervised deep learning methods, IIC [14] and PiCIE [15], together with IDUDL [5], an unsupervised deep learning model designed specifically for marine aquaculture, are selected for comparison. The semantic segmentation results of the different methods are shown in Table I. The results show that the proposed method improves ${mIoU}$ by 0.0269 compared to IDUDL, while $P$ increases by ${8.67}\%$ .
+
+The visualisation results are shown in Fig. 2. The proposed method performs better in continuity and can reduce the interference of coherent speckle noise. Speckle in SAR images produces many bright noise points that affect the segmentation results. The mutual-information approach of IIC can strengthen the correlation between similar samples; however, the noisy pixels remain strongly correlated with the target pixels, so a large number of noisy pixels cannot be removed from the segmentation results. PiCIE maintains semantic consistency through geometric and photometric invariance, but produces a large number of misclassifications. IDUDL can extract semantic features, suppress many noisy pixels, and delineate the raft boundaries better; however, its lack of global information leads to many missed detections. Sample (2) shows that, compared to IDUDL, the proposed method reduces missed detections within the rafts.
+
+§ V. CONCLUSION
+
+This paper proposes a new unsupervised feature fusion model, UFFM, for marine raft aquaculture semantic segmentation based on SAR images. Saliency features obtained from representation learning are used by the pseudo-label generator to produce saliency pseudo-labels. During network training, multi-stage feature fusion is designed to enhance the semantic information, the extraction of raft aquaculture target boundaries, and semantic continuity. The experimental results show that UFFM can effectively reduce omissions and misjudgments of raft aquaculture targets.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/7LL9KbT9ro/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/7LL9KbT9ro/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..e8849332f2853b28d5e77358782f534ae77772a6
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/7LL9KbT9ro/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,387 @@
+# Dynamic Threshold Global Performance-Guaranteed Formation Control for Wheeled Mobile Robots with Smooth Extended State Observer
+
+${1}^{\text{st }}$ Minjing Wang
+
+School of Information and Communication Engineering
+
+Hainan University
+
+Haikou, China
+
+mjwang@hainanu.edu.cn
+
+${2}^{\text{nd }}$ Di Wu
+
+School of Information and Communication Engineering
+
+Hainan University
+
+Haikou, China
+
+hainuwudi@hainanu.edu.cn
+
+${3}^{\text{rd }}$ Yibo Zhang
+
+Department of Automation
+
+Shanghai Jiao Tong University
+
+Shanghai, China
+
+zhang297@sjtu.edu.cn
+
+${4}^{\text{th }}$ Wenlong Feng
+
+School of Information and Communication Engineering
+
+Hainan University
+
+Haikou, China
+
+fwlfwl@163.com
+
+Abstract-In this paper, a dynamic threshold global performance-guaranteed formation control method is proposed for wheeled mobile robots (WMRs). Unlike existing prescribed performance formation control methods that are constrained by initial values, we design a dynamic threshold global performance-guaranteed (DTGPG) function that removes the initial value constraints while allowing a secondary adjustment of the steady-state performance boundaries. Moreover, we design a smooth extended state observer (SESO) based on a sigmoid-like function to mitigate the chattering problem of existing event-triggered ESOs. A DTGPG-based guidance law and a SESO-based control law are then designed to implement the formation control. We prove that the total closed-loop system is input-to-state stable (ISS). Simulations confirm the benefits and validity of the proposed control method.
+
+Index Terms-WMRs, dynamic threshold global performance-guaranteed function, formation control, SESO
+
+## I. INTRODUCTION
+
+Formation control of multiple wheeled mobile robots (WMRs) places extremely high demands on transient and steady-state performance. In the transient phase, small overshoots and fast convergence avoid collisions between WMRs. In the steady-state phase, high-accuracy tracking significantly improves overall coordination and task execution efficiency. Therefore, it is crucial to prescribe the performance of the multi-WMR system. In [1], a collision-avoidance prescribed performance control (PPC) method is proposed for WMR formations, which guarantees the performance of the multi-WMR system by adding communication limits and collision limits to the prescribed performance function. In [2], a fixed-time performance-guaranteed formation control problem for multi-WMRs is investigated, achieving fixed-time convergence by introducing a piecewise time-varying function into the performance function. In [3], a field-of-view-constrained performance-guaranteed formation control method is proposed for multi-WMRs, which designs a guaranteed performance function that maintains the leader-follower distance to avoid collisions. Although the above works [1]-[3] can effectively improve the performance of multi-WMRs, two points still need improvement: 1) they are all subject to initial conditions, which increases human intervention in practical applications, i.e., the starting positions of the WMRs must be calculated in advance; 2) the standard PPC cannot perform a secondary adjustment of the performance boundaries after reaching the steady state.
+
+On the other hand, when performing tasks in complex environments, icy and uneven road surfaces are often encountered. These disturbances may affect the stability of WMR formations. Therefore, quickly and accurately estimating external disturbances is also crucial. In [4], a nonlinear extended state observer (ESO) is proposed that recovers the velocity and estimates the external disturbance from position and heading errors; a finite-time ESO is then designed to improve the estimation rate. In [5], an event-triggered ESO is designed to adjust the allocation of resources. Note that the event-triggered ESO [5] can save resources when estimating disturbances but inevitably suffers from chattering.
+
+Inspired by the aforementioned observations, we propose a dynamic threshold global performance-guaranteed (DTGPG) formation control method for WMRs with a smooth extended state observer (SESO). The key contributions of this work are as follows. Unlike the standard PPC methods described in [6] and the TPP methods in [7]-[9], this paper proposes the DTGPG function, which solves the initial value constraint problem and allows secondary adjustment of the steady-state performance bounds. In contrast to the event-triggered ESO [5], we design the SESO to mitigate chattering by introducing a sigmoid-like function that smooths the estimation error. The total closed-loop system is proved to be input-to-state stable (ISS). Some of the symbols used in this paper are defined in Table I.
+
+---
+
+This work is partly supported by the "South China Sea Rising Star" Education Platform Foundation of Hainan Province (JYNHXX2023-17G) and the Natural Science Foundation of Hainan Province (624MS036). (Corresponding author: Di Wu)
+
+---
+
+TABLE I
+
+SYMBOL DEFINITION
+
+| Symbol | Definition |
+| --- | --- |
+| ${\mathbb{R}}^{n}$ | $n$-dimensional Euclidean space |
+| ${\mathbb{R}}^{ + }$ | Positive real space |
+| $\parallel \cdot \parallel$ | Euclidean norm |
+| diag $\{ \cdots \}$ | Block-diagonal matrix |
+| ${\lambda }_{\max }\left( \cdot \right)$ | Maximum eigenvalue of a matrix |
+| ${\lambda }_{\min }\left( \cdot \right)$ | Minimum eigenvalue of a matrix |
+| $\operatorname{sgn}\left( \cdot \right)$ | Sign function |
+| $\exp \left( \cdot \right)$ | Exponential function |
+| $\operatorname{col}\left( \cdot \right)$ | Column vector |
+
+## II. PRELIMINARIES AND PROBLEM STATEMENT
+
+## A. Graph Theory
+
+To describe the communication among the virtual leader and WMRs, a directed graph is described as $\mathcal{G} = \{ \mathcal{V},\mathcal{M}\}$ . $\mathcal{V} = \left\{ {{n}_{1},\ldots ,{n}_{M}}\right\}$ and $\mathcal{M} = \left\{ {\left( {{n}_{i},{n}_{j}}\right) \in \mathcal{V} \times \mathcal{V}}\right\}$ represent a vertex set and an edge set, respectively. An adjacency matrix associated with $\mathcal{G}$ is defined as $\mathcal{A} = \left\lbrack {a}_{ij}\right\rbrack \in {\mathbb{R}}^{M \times M}$ . Correspondingly, a degree matrix connected with $\mathcal{G}$ is characterized as $\mathcal{D} = \operatorname{diag}\left\{ {d}_{i}\right\} \in {\mathbb{R}}^{M \times M}$ with ${d}_{i} = \mathop{\sum }\limits_{{j = 1}}^{M}{a}_{ij}$ . Additionally, a Laplacian matrix associated with $\mathcal{G}$ is defined as $\mathcal{L} = \mathcal{D} - \mathcal{A}$ . Note that here $i = 1,\ldots , M, j = 1,\ldots , M$ .
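To make these constructions concrete, the following minimal Python sketch (with a hypothetical three-node adjacency matrix chosen purely for illustration) assembles the degree matrix $\mathcal{D}$ and the Laplacian $\mathcal{L} = \mathcal{D} - \mathcal{A}$:

```python
import numpy as np

def laplacian(A):
    """Graph Laplacian L = D - A, where D = diag{d_i} and d_i = sum_j a_ij."""
    A = np.asarray(A, dtype=float)
    D = np.diag(A.sum(axis=1))   # degree matrix
    return D - A

# Hypothetical 3-node directed graph: node 0 hears nodes 1 and 2, node 1 hears node 2.
A = np.array([[0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
L = laplacian(A)
```

Every row of $\mathcal{L}$ sums to zero, which is the consensus-type property that the distributed error (2) relies on.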
+
+## B. Problem Statement
+
+Suppose that there exist $N$ followers, labeled as agents ${n}_{1}$ to ${n}_{N}$ , and $M - N$ leaders, labeled as agents ${n}_{N + 1}$ to ${n}_{M}$ , under a communication topology graph. A group of followers consisting of $N$ wheeled mobile robots is modelled as follows
+
+$$
+\begin{cases} {\dot{\mathbf{\eta }}}_{i} & = {\mathbf{R}}_{i}{\mathbf{\nu }}_{i} \\ {\dot{\mathbf{\nu }}}_{i} & = {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} + {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\mathcal{T}}}_{i} \\ & - {D}_{i\theta }{r}_{i}^{2}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{J}}_{i}{\mathbf{R}}_{i}^{-1}{\dot{\mathbf{\eta }}}_{i} - {\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\mathcal{F}}}_{i}{r}_{i}^{2} \end{cases} \tag{1}
+$$
+
+where $i = 1,\ldots, N$. ${\mathbf{\eta }}_{i} = {\left\lbrack {x}_{i},{y}_{i},{\psi }_{i}\right\rbrack }^{T} \in {\mathbb{R}}^{3}$ denotes the position and yaw angle, ${\mathbf{\nu }}_{i} = {\left\lbrack {u}_{i},{v}_{i},{w}_{i}\right\rbrack }^{T} \in {\mathbb{R}}^{3}$ the velocity vector, ${\mathbf{\tau }}_{i} = {\left\lbrack {\tau }_{i1},{\tau }_{i2},{\tau }_{i3},{\tau }_{i4}\right\rbrack }^{T} \in {\mathbb{R}}^{4}$ the control input, and ${\mathbf{\mathcal{T}}}_{i} = {\left\lbrack {\mathcal{T}}_{i1},{\mathcal{T}}_{i2},{\mathcal{T}}_{i3},{\mathcal{T}}_{i4}\right\rbrack }^{T} \in {\mathbb{R}}^{4}$ the external disturbance. The kinetic parameters and matrices of this WMR can be found in [10]. ${\mathbf{J}}_{i} \in {\mathbb{R}}^{4 \times 3}$ and ${\mathbf{J}}_{i}^{ + } \in {\mathbb{R}}^{3 \times 4}$ satisfy ${\mathbf{J}}_{i}^{ + }{\mathbf{J}}_{i} = {\mathbf{I}}_{3}$.
+
+Assumption 1: The graph $\mathcal{G}$ contains a spanning tree with the virtual leader as the root node.
+
+## C. Dynamic Threshold Global Performance-Guaranteed Function and Barrier Function
+
+We define the distributed error as follows
+
+$$
+{\mathbf{E}}_{i} = \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}\left( {{\mathbf{\eta }}_{i} - {\mathbf{\eta }}_{j}}\right) + \mathop{\sum }\limits_{{j = N + 1}}^{M}{a}_{ij}\left( {{\mathbf{\eta }}_{i} - {\mathbf{\eta }}_{jr}}\right) \tag{2}
+$$
+
+where ${\mathbf{\eta }}_{jr} = {\left\lbrack {\eta }_{jx},{\eta }_{jy},{\eta }_{j\psi }\right\rbrack }^{T} \in {\mathbb{R}}^{3}$ represents the trajectory of the virtual leader. The coefficient ${a}_{ij}$ is defined in [11]. To ensure that the developed control is free from the influence of initial conditions and can dynamically adjust prescribed thresholds, the error is constrained within the following prescribed regions
+
+$$
+{\mathcal{I}}_{ik}\left( {-{\mathcal{W}}_{ik}}\right) \leq {E}_{ik} \leq {\mathcal{I}}_{ik}\left( {\mathcal{W}}_{ik}\right) ,\;k = x, y,\psi \tag{3}
+$$
+
+where ${\mathcal{I}}_{ik}\left( {\mathcal{W}}_{ik}\right)$ is a dynamic threshold global performance-guaranteed (DTGPG) function similar to that in [12], defined as follows
+
+$$
+{\mathcal{I}}_{ik}\left( {\mathcal{W}}_{ik}\right) = \frac{\sqrt{{l}_{ik}}{\mathcal{W}}_{ik}}{\sqrt{1 - {\mathcal{W}}_{ik}^{2}}} \tag{4}
+$$
+
+with ${\mathcal{W}}_{ik} = 1/{\mathcal{P}}_{ik}$, where ${\mathcal{P}}_{ik}$ is a dynamic threshold finite-time prescribed function similar to that in [13]
+
+$$
+{\mathcal{P}}_{ik}\left( t\right) = \left\{ \begin{array}{ll} \left( {1 - {\Theta }_{{ik},\infty }}\right) \exp \left( {-{\varrho }_{ik}\frac{{T}_{{ik}, a}}{{T}_{{ik}, a} - t}}\right) + {\Theta }_{{ik},\infty }, & 0 \leq t < {T}_{{ik}, a} \\ {\Theta }_{{ik},\infty }\left( {1 - \frac{{\omega }_{ik}}{2} + \frac{{\omega }_{ik}}{2}\cos \left( {\frac{\pi }{{c}_{ik}}\left( {t - {T}_{{ik}, a}}\right) }\right) }\right), & {T}_{{ik}, a} \leq t < {T}_{{ik}, b} \\ {\Theta }_{{ik},\infty }\left( {1 - {\omega }_{ik}}\right), & t \geq {T}_{{ik}, b} \end{array}\right. \tag{5}
+$$
+
+where ${l}_{ik}$ and ${\omega }_{ik}$ are positive constants, ${\Theta }_{{ik},\infty }$ is the steady-state value reached at the end of the first phase, ${\varrho }_{ik} > 0$ represents the convergence rate, ${T}_{{ik}, a}$ is the settling time to reach steady state, and ${c}_{ik} = {T}_{{ik}, b} - {T}_{{ik}, a}$ is the duration of the dynamic adjustment.
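For intuition, the piecewise function (5) can be evaluated directly. The sketch below uses, as defaults, the values chosen later in the simulation section ($\Theta_{ik,\infty} = 0.9$, $\varrho_{ik} = 2$, $\omega_{ik} = 0.7$, $T_{ik,a} = 0.5$, $T_{ik,b} = 0.7$); it is an illustrative evaluation, not part of the controller itself:

```python
import math

def P(t, theta_inf=0.9, rho=2.0, Ta=0.5, Tb=0.7, omega=0.7):
    """Dynamic-threshold finite-time prescribed function of Eq. (5)."""
    c = Tb - Ta  # duration of the secondary (dynamic) adjustment
    if t < Ta:
        # Finite-time convergence phase: reaches theta_inf exactly at t = Ta.
        return (1.0 - theta_inf) * math.exp(-rho * Ta / (Ta - t)) + theta_inf
    if t < Tb:
        # Secondary adjustment: cosine blend from theta_inf down to theta_inf*(1-omega).
        return theta_inf * (1.0 - omega / 2.0
                            + (omega / 2.0) * math.cos(math.pi * (t - Ta) / c))
    # Final tightened steady-state threshold.
    return theta_inf * (1.0 - omega)
```

The three branches meet continuously at $T_{ik,a}$ and $T_{ik,b}$, which is what allows the secondary adjustment of the steady-state bound without a jump.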
+
+Then, we employ the following barrier function to implement the error constraint in (3)
+
+$$
+{\mathcal{Z}}_{ik} = \frac{{\mathcal{J}}_{ik}}{1 - {\mathcal{J}}_{ik}^{2}} \tag{6}
+$$
+
+where ${\mathcal{J}}_{ik} = {\mathcal{P}}_{ik}{\mathcal{H}}_{ik}$ with ${\mathcal{H}}_{ik} = {E}_{ik}/\sqrt{{E}_{ik}^{2} + {l}_{ik}}$ . The properties of the barrier function are described in [12].
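A minimal numerical sketch of the transform chain $E \mapsto \mathcal{H} \mapsto \mathcal{J} \mapsto \mathcal{Z}$ in (6), assuming the simulation value $l_{ik} = 10$:

```python
import math

def barrier(E, P, l=10.0):
    """Barrier mapping of Eq. (6): Z = J/(1 - J^2) with J = P*H, H = E/sqrt(E^2 + l)."""
    H = E / math.sqrt(E * E + l)   # H lies in (-1, 1) for any real error E
    J = P * H                      # |J| < 1 whenever 0 < P < 1
    return J / (1.0 - J * J)
```

Because $\mathcal{H}_{ik}$ is bounded for arbitrary $E_{ik}$, the mapping is well defined for any initial error, which is the mechanism behind the "global" (initial-condition-free) property.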
+
+## III. Controller Design and Analysis
+
+## A. Smooth Extended State Observer
+
+To facilitate the subsequent strategy design, define ${\mathbf{\Lambda }}_{i} = {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\mathcal{T}}}_{i} - {D}_{i\theta }{r}_{i}^{2}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{J}}_{i}{\mathbf{R}}_{i}^{-1}{\dot{\mathbf{\eta }}}_{i} - {\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\mathcal{F}}}_{i}{r}_{i}^{2}$ to denote the internal uncertainty and external disturbances suffered by the $i$th WMR. Then (1) can be reformulated as
+
+$$
+\left\{ \begin{array}{l} {\dot{\mathbf{\eta }}}_{i} = {\mathbf{R}}_{i}{\mathbf{\nu }}_{i} \\ {\dot{\mathbf{\nu }}}_{i} = {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} + {\mathbf{\Lambda }}_{i}. \end{array}\right. \tag{7}
+$$
+
+Assumption 2: For the multi-WMR system, the unknown total disturbance ${\mathbf{\Lambda }}_{i}$ is smooth and continuous.
+
+Then, we regard the total disturbance ${\mathbf{\Lambda }}_{i}$ as an extended state and, to avoid unnecessary waste of resources when approximating the disturbances, design an event-triggered ESO as in [5]
+
+$$
+\left\{ \begin{array}{l} {\widetilde{\mathbf{\nu }}}_{i}^{s} = {\widehat{\mathbf{\nu }}}_{i} - {\mathbf{\nu }}_{i}^{ \star } \\ {\dot{\widehat{\mathbf{\nu }}}}_{i} = - {\varepsilon }_{i1}{\widetilde{\mathbf{\nu }}}_{i}^{s} + {\widehat{\mathbf{\Lambda }}}_{i} + {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} \\ {\dot{\widehat{\mathbf{\Lambda }}}}_{i} = - {\varepsilon }_{i2}{\widetilde{\mathbf{\nu }}}_{i}^{s} \end{array}\right. \tag{8}
+$$
+
+where ${\varepsilon }_{i1}$ and ${\varepsilon }_{i2} \in {\mathbb{R}}^{3 \times 3}$ denote positive diagonal matrices. The variables ${\widehat{\mathbf{\nu }}}_{i} = {\left\lbrack {\widehat{u}}_{i},{\widehat{v}}_{i},{\widehat{w}}_{i}\right\rbrack }^{T} \in {\mathbb{R}}^{3}$ and ${\widehat{\mathbf{\Lambda }}}_{i} = {\left\lbrack {\widehat{\Lambda }}_{iu},{\widehat{\Lambda }}_{iv},{\widehat{\Lambda }}_{iw}\right\rbrack }^{T} \in$ ${\mathbb{R}}^{3}$ denote the estimates of ${\mathbf{\nu }}_{i}$ and ${\mathbf{\Lambda }}_{i}$ , respectively. ${\mathbf{\nu }}_{i}^{ \star } \in {\mathbb{R}}^{3}$ represents the aperiodic sampling of ${\mathbf{\nu }}_{i}$ . The event-triggered mechanism is defined as
+
+$$
+\left\{ \begin{array}{l} {\mathbf{\nu }}_{i}^{ \star }\left( t\right) = {\mathbf{\nu }}_{i}\left( {t}_{\varpi }^{{\nu }_{i}}\right) ,\forall t \in \left\lbrack {{t}_{\varpi }^{{\nu }_{i}},{t}_{\varpi + 1}^{{\nu }_{i}}}\right) ,{\widetilde{\mathbf{\nu }}}_{is}\left( t\right) = {\mathbf{\nu }}_{i}^{ \star }\left( t\right) - {\mathbf{\nu }}_{i}\left( t\right) \\ {t}_{\varpi + 1}^{{\nu }_{i}} = \inf \left\{ {t \in \mathbb{R} \mid \begin{Vmatrix}{{\widetilde{\mathbf{\nu }}}_{is}\left( t\right) }\end{Vmatrix} \geq {\mathcal{X}}_{i}}\right\} \end{array}\right. \tag{9}
+$$
+
+where ${\mathcal{X}}_{i} \in {\mathbb{R}}^{ + }$ denotes the event-triggering threshold and ${\widetilde{\mathbf{\nu }}}_{is}\left( t\right)$ denotes the aperiodic sampling error. When $\begin{Vmatrix}{{\widetilde{\mathbf{\nu }}}_{is}\left( t\right) }\end{Vmatrix} \geq {\mathcal{X}}_{i}$, ${\mathbf{\nu }}_{i}^{ \star }\left( t\right)$ is updated; otherwise, the last updated value is held.
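The hold-and-update logic of (9) can be sketched as a small sampler class; this is a simplified illustration of the mechanism, not the paper's implementation:

```python
import numpy as np

class EventSampler:
    """Aperiodic sampler of Eq. (9): hold the last sample nu_star until the
    sampling error ||nu_star - nu|| reaches the threshold X_i, then resample."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.nu_star = None   # last transmitted sample
        self.events = 0       # number of triggering events

    def sample(self, nu):
        nu = np.asarray(nu, dtype=float)
        if self.nu_star is None or np.linalg.norm(self.nu_star - nu) >= self.threshold:
            self.nu_star = nu.copy()   # trigger: update the held value
            self.events += 1
        return self.nu_star
```

Counting `events` against the number of calls reproduces the resource-saving comparison reported in the simulation section.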
+
+Remark 1: In addition to using ESO to estimate the external disturbances, the neural network [14] and the neural predictor [15] also achieve the same objective.
+
+Existing ESO based on event-triggered mechanism [5] suffers from unavoidable chattering when approximating the disturbances. To solve the chattering problem, we design the SESO as follows
+
+$$
+\left\{ \begin{array}{l} {\dot{\widehat{\mathbf{\nu }}}}_{i} = - {\varepsilon }_{i1}{\widetilde{\mathbf{\nu }}}_{i}^{s} + {\widehat{\mathbf{\Lambda }}}_{i} + {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} \\ {\dot{\widehat{\mathbf{\Lambda }}}}_{i} = - {\varepsilon }_{i2}\mathcal{B}\left( {\widetilde{\mathbf{\nu }}}_{i}^{s}\right) \end{array}\right. \tag{10}
+$$
+
+where $\mathcal{B}\left( {\widetilde{\mathbf{\nu }}}_{i}^{s}\right) = \operatorname{col}\left( {\mathcal{B}\left( {\widetilde{\nu }}_{i\Xi }^{s}\right) }\right) \in {\mathbb{R}}^{3}$ with $\Xi = u, v, w$ is the sigmoid-like function vector, defined as follows
+
+$$
+\mathcal{B}\left( {\widetilde{\nu }}_{i\Xi }^{s}\right) = \left\{ \begin{array}{ll} \frac{1 - \exp \left( {-\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }\right) }{1 + \exp \left( {-\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }\right) }\frac{{\widetilde{\nu }}_{i\Xi }^{s}}{\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }, & {\widetilde{\nu }}_{i\Xi }^{s} \neq 0 \\ {\widetilde{\nu }}_{i\Xi }^{s}, & {\widetilde{\nu }}_{i\Xi }^{s} = 0. \end{array}\right. \tag{11}
+$$
+
+Next, to facilitate the stability analysis of the SESO, define a positive-definite diagonal matrix ${\mathbf{V}}_{i} = \operatorname{diag}\left\{ {\mathcal{V}}_{i\Xi }\right\} \in {\mathbb{R}}^{3 \times 3}$, $\Xi = u, v, w$, with
+
+$$
+{\mathcal{V}}_{i\Xi } = \left\{ \begin{array}{ll} \frac{1 - \exp \left( {-\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }\right) }{1 + \exp \left( {-\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }\right) }\frac{1}{\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }, & {\widetilde{\nu }}_{i\Xi }^{s} \neq 0 \\ 1, & {\widetilde{\nu }}_{i\Xi }^{s} = 0. \end{array}\right. \tag{12}
+$$
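Both (11) and (12) are easy to check numerically; note that for nonzero argument the scalar map equals $\tanh(|\widetilde{\nu}^{s}_{i\Xi}|/2)\operatorname{sgn}(\widetilde{\nu}^{s}_{i\Xi})$, i.e. a smooth, saturating surrogate for the sign function:

```python
import math

def B(e):
    """Sigmoid-like smoothing of Eq. (11): sgn-shaped, bounded by 1, smooth at 0."""
    if e == 0.0:
        return 0.0
    a = math.exp(-abs(e))
    return (1.0 - a) / (1.0 + a) * (e / abs(e))

def V(e):
    """Equivalent gain of Eq. (12), chosen so that B(e) = V(e) * e."""
    if e == 0.0:
        return 1.0
    a = math.exp(-abs(e))
    return (1.0 - a) / (1.0 + a) / abs(e)
```

The identity $\mathcal{B}(e) = \mathcal{V}(e)\, e$ is exactly what lets (10) be rewritten in the linear-in-error form (13).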
+
+Then (10) can be rewritten as
+
+$$
+\left\{ \begin{array}{l} {\dot{\widehat{\mathbf{\nu }}}}_{i} = - {\mathbf{\varepsilon }}_{i1}{\widetilde{\mathbf{\nu }}}_{i} + {\mathbf{\varepsilon }}_{i1}{\widetilde{\mathbf{\nu }}}_{is} + {\widehat{\mathbf{\Lambda }}}_{i} + {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} \\ {\dot{\widehat{\mathbf{\Lambda }}}}_{i} = - {\mathbf{\varepsilon }}_{i2}{\mathbf{V}}_{i}{\widetilde{\mathbf{\nu }}}_{i} + {\mathbf{\varepsilon }}_{i2}{\mathbf{V}}_{i}{\widetilde{\mathbf{\nu }}}_{is} \end{array}\right. \tag{13}
+$$
+
+where ${\widetilde{\mathbf{\nu }}}_{i} = {\widehat{\mathbf{\nu }}}_{i} - {\mathbf{\nu }}_{i}$ and ${\widetilde{\mathbf{\Lambda }}}_{i} = {\widehat{\mathbf{\Lambda }}}_{i} - {\mathbf{\Lambda }}_{i}$. Defining ${\mathcal{N}}_{i1} = {\left\lbrack {\widetilde{\mathbf{\nu }}}_{i}^{T},{\widetilde{\mathbf{\Lambda }}}_{i}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{6}$, one has
+
+$$
+{\dot{\mathcal{N}}}_{i1} = {\mathbf{A}}_{i1}{\mathcal{N}}_{i1} + {\mathbf{B}}_{i1}{\widetilde{\mathbf{\nu }}}_{is} + {\mathbf{C}}_{i1}{\dot{\mathbf{\Lambda }}}_{i} \tag{14}
+$$
+
+where
+
+$$
+\left\{ {{\mathbf{A}}_{i1} = \left\lbrack \begin{matrix} - {\varepsilon }_{i1}{\mathbf{I}}_{3} & {\mathbf{I}}_{3} \\ - {\varepsilon }_{i2}{\mathbf{V}}_{i} & {\mathbf{O}}_{3} \end{matrix}\right\rbrack ,\;{\mathbf{B}}_{i1} = \left\lbrack \begin{matrix} {\varepsilon }_{i1}{\mathbf{I}}_{3} \\ {\varepsilon }_{i2}{\mathbf{V}}_{i} \end{matrix}\right\rbrack ,\;{\mathbf{C}}_{i1} = \left\lbrack \begin{matrix} {\mathbf{O}}_{3} \\ -{\mathbf{I}}_{3} \end{matrix}\right\rbrack .}\right.
+$$
+
+Note that the matrix ${\mathbf{A}}_{i1}$ is a Hurwitz matrix. There exists a positive-definite matrix ${\mathbf{P}}_{i1}$ satisfying the following inequality
+
+$$
+{\mathbf{A}}_{i1}^{T}{\mathbf{P}}_{i1} + {\mathbf{P}}_{i1}{\mathbf{A}}_{i1} \leq - {\jmath }_{i1}{\mathbf{I}}_{6}. \tag{15}
+$$
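Inequality (15) can be verified numerically for given gains. The sketch below assumes the gains used later in the simulation ($\varepsilon_{i1} = 2\mathbf{I}_3$, $\varepsilon_{i2} = 40\mathbf{I}_3$) and evaluates $\mathbf{V}_i$ at zero estimation error ($\mathbf{V}_i = \mathbf{I}_3$), solving $\mathbf{A}_{i1}^{T}\mathbf{P}_{i1} + \mathbf{P}_{i1}\mathbf{A}_{i1} = -\mathbf{I}_6$ by Kronecker vectorization:

```python
import numpy as np

def A_matrix(eps1=2.0, eps2=40.0, V=1.0, n=3):
    """Observer error matrix A_i1 of Eq. (14) for scalar gains eps1, eps2."""
    I, O = np.eye(n), np.zeros((n, n))
    return np.block([[-eps1 * I, I],
                     [-eps2 * V * I, O]])

def lyapunov_P(A, Q=None):
    """Solve A^T P + P A = -Q via (I (x) A^T + A^T (x) I) vec(P) = -vec(Q)."""
    m = A.shape[0]
    Q = np.eye(m) if Q is None else Q
    K = np.kron(np.eye(m), A.T) + np.kron(A.T, np.eye(m))
    vecP = np.linalg.solve(K, -Q.reshape(-1, order="F"))
    return vecP.reshape((m, m), order="F")

A = A_matrix()
assert np.all(np.linalg.eigvals(A).real < 0)   # A_i1 is Hurwitz for these gains
P = lyapunov_P(A)                              # positive-definite solution P_i1
```

With these gains each channel has characteristic polynomial $s^2 + \varepsilon_{i1}s + \varepsilon_{i2}\mathcal{V}_{i\Xi}$, which is Hurwitz for any $\mathcal{V}_{i\Xi} > 0$.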
+
+Lemma 1: The system (14) is ISS.
+
+Proof: Consider a Lyapunov function candidate as follows
+
+$$
+{V}_{1} = \frac{1}{2}\mathop{\sum }\limits_{{i = 1}}^{N}{\mathcal{N}}_{i1}^{T}{\mathbf{P}}_{i1}{\mathcal{N}}_{i1}. \tag{16}
+$$
+
+The time derivative of ${V}_{1}$ along (14), together with (15), satisfies
+
+$$
+{\dot{V}}_{1} \leq - \frac{{\jmath }_{1}}{2}{\begin{Vmatrix}{\mathcal{N}}_{1}\end{Vmatrix}}^{2} + \begin{Vmatrix}{\mathcal{N}}_{1}\end{Vmatrix}\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{B}}_{1}}\end{Vmatrix}\begin{Vmatrix}{\widetilde{\mathbf{\nu }}}_{s}\end{Vmatrix} + \begin{Vmatrix}{\mathcal{N}}_{1}\end{Vmatrix}\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{C}}_{1}}\end{Vmatrix}\parallel \dot{\mathbf{\Lambda }}\parallel \tag{17}
+$$
+
+where ${\jmath }_{1} = \mathop{\min }\limits_{{i = 1,\ldots , N}}\left( {\jmath }_{i1}\right)$, ${\mathcal{N}}_{1} = {\left\lbrack {\mathcal{N}}_{11}^{T},\ldots ,{\mathcal{N}}_{N1}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{6N}$, ${\widetilde{\mathbf{\nu }}}_{s} = {\left\lbrack {\widetilde{\mathbf{\nu }}}_{1s}^{T},\ldots ,{\widetilde{\mathbf{\nu }}}_{Ns}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$, $\dot{\mathbf{\Lambda }} = {\left\lbrack {\dot{\mathbf{\Lambda }}}_{1}^{T},\ldots ,{\dot{\mathbf{\Lambda }}}_{N}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$, ${\mathbf{P}}_{1} = \operatorname{diag}\left\{ {{\mathbf{P}}_{11},\ldots ,{\mathbf{P}}_{N1}}\right\} \in {\mathbb{R}}^{{6N} \times {6N}}$, ${\mathbf{B}}_{1} = \operatorname{diag}\left\{ {{\mathbf{B}}_{11},\ldots ,{\mathbf{B}}_{N1}}\right\} \in {\mathbb{R}}^{{6N} \times {3N}}$, and ${\mathbf{C}}_{1} = \operatorname{diag}\left\{ {{\mathbf{C}}_{11},\ldots ,{\mathbf{C}}_{N1}}\right\} \in {\mathbb{R}}^{{6N} \times {3N}}$. Since $\begin{Vmatrix}{\mathcal{N}}_{1}\end{Vmatrix} \geq 2\left( {\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{B}}_{1}}\end{Vmatrix}\begin{Vmatrix}{\widetilde{\mathbf{\nu }}}_{s}\end{Vmatrix} + \begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{C}}_{1}}\end{Vmatrix}\parallel \dot{\mathbf{\Lambda }}\parallel }\right) /{\jmath }_{1}{\sigma }_{1}$ with $0 < {\sigma }_{1} < 1$, one has ${\dot{V}}_{1} \leq - {\jmath }_{1}\left( {1 - {\sigma }_{1}}\right) {\begin{Vmatrix}{\mathcal{N}}_{1}\end{Vmatrix}}^{2}/2$. It follows that the subsystem (14) is ISS: there exist a $\mathcal{K}\mathcal{L}$ function ${\mathcal{Y}}_{1}\left( \cdot \right)$ and ${\mathcal{K}}_{\infty }$ functions ${\mathcal{C}}^{{\widetilde{\mathbf{\nu }}}_{s}}\left( \cdot \right)$ and ${\mathcal{C}}^{\dot{\mathbf{\Lambda }}}\left( \cdot \right)$ satisfying $\begin{Vmatrix}{{\mathcal{N}}_{1}\left( t\right) }\end{Vmatrix} \leq {\mathcal{Y}}_{1}\left( {\begin{Vmatrix}{{\mathcal{N}}_{1}\left( 0\right) }\end{Vmatrix}, t}\right) + {\mathcal{C}}^{{\widetilde{\mathbf{\nu }}}_{s}}\left( \begin{Vmatrix}{\widetilde{\mathbf{\nu }}}_{s}\end{Vmatrix}\right) + {\mathcal{C}}^{\dot{\mathbf{\Lambda }}}\left( {\parallel \dot{\mathbf{\Lambda }}\parallel }\right)$, where ${\mathcal{C}}^{{\widetilde{\mathbf{\nu }}}_{s}}\left( s\right) = \left( {2s}\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{B}}_{1}}\end{Vmatrix}\sqrt{{\lambda }_{\max }\left( {\mathbf{P}}_{1}\right) }\right) /\left( {{\jmath }_{1}{\sigma }_{1}\sqrt{{\lambda }_{\min }\left( {\mathbf{P}}_{1}\right) }}\right)$ and ${\mathcal{C}}^{\dot{\mathbf{\Lambda }}}\left( s\right) = \left( {2s}\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{C}}_{1}}\end{Vmatrix}\sqrt{{\lambda }_{\max }\left( {\mathbf{P}}_{1}\right) }\right) /\left( {{\jmath }_{1}{\sigma }_{1}\sqrt{{\lambda }_{\min }\left( {\mathbf{P}}_{1}\right) }}\right)$.
+
+## B. Design of Guidance Law and Control Law
+
+In this section, we design the DTGPG-based guidance law and the SESO-based control law. First, we design the guidance law. The time derivative of (6) is represented by
+
+$$
+{\dot{\mathcal{Z}}}_{ik} = {\mu }_{ik}{\mathcal{P}}_{ik}{\rho }_{ik}{\dot{E}}_{ik} + {\mu }_{ik}{\dot{\mathcal{P}}}_{ik}{\mathcal{H}}_{ik} \tag{18}
+$$
+
+where ${\mu }_{ik} = \left( {1 + {\mathcal{J}}_{ik}^{2}}\right) /{\left( 1 - {\mathcal{J}}_{ik}^{2}\right) }^{2}$ and ${\rho }_{ik} =$ ${l}_{ik}/\left( {\sqrt{{E}_{ik}^{2} + {l}_{ik}}\left( {{E}_{ik}^{2} + {l}_{ik}}\right) }\right)$ .
+
+Next, to simplify the design of the controller, we rewrite (18) in a vector form
+
+$$
+{\dot{\mathcal{Z}}}_{i} = {\mathbf{\mu }}_{i1}{\dot{\mathbf{E}}}_{i} + {\mathbf{\mu }}_{i2} \tag{19}
+$$
+
+where ${\mathcal{Z}}_{i} = {\left\lbrack {\mathcal{Z}}_{ix},{\mathcal{Z}}_{iy},{\mathcal{Z}}_{i\psi }\right\rbrack }^{T} \in {\mathbb{R}}^{3},{\mathbf{E}}_{i} = {\left\lbrack {E}_{ix},{E}_{iy},{E}_{i\psi }\right\rbrack }^{T} \in$ ${\mathbb{R}}^{3},{\mathbf{\mu }}_{i1} = \operatorname{diag}\left\{ {{\mu }_{ix}{\mathcal{P}}_{ix}{\rho }_{ix},{\mu }_{iy}{\mathcal{P}}_{iy}{\rho }_{iy},{\mu }_{i\psi }{\mathcal{P}}_{i\psi }{\rho }_{i\psi }}\right\} \in {\mathbb{R}}^{3 \times 3}$ , and ${\mathbf{\mu }}_{i2} = \operatorname{diag}\left\{ {{\mu }_{ix}{\dot{\mathcal{P}}}_{ix}{\mathcal{H}}_{ix},{\mu }_{iy}{\dot{\mathcal{P}}}_{iy}{\mathcal{H}}_{iy},{\mu }_{i\psi }{\dot{\mathcal{P}}}_{i\psi }{\mathcal{H}}_{i\psi }}\right\} \in {\mathbb{R}}^{3 \times 3}$ .
+
+Taking the time derivative of (2) along (1) yields
+
+$$
+{\dot{\mathbf{E}}}_{i} = {\iota }_{i}{\mathbf{R}}_{i}{\mathbf{\nu }}_{i} - \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\mathbf{R}}_{j}{\mathbf{\nu }}_{j} - \mathop{\sum }\limits_{{j = N + 1}}^{M}{a}_{ij}{\dot{\mathbf{\eta }}}_{jr} \tag{20}
+$$
+
+where ${\iota }_{i} = \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij} + \mathop{\sum }\limits_{{j = N + 1}}^{M}{a}_{ij}$ . Substituting (20) into (19) results in
+
+$$
+{\dot{\mathcal{Z}}}_{i} = {\mathbf{\mu }}_{i1}\left( {{\iota }_{i}{\mathbf{R}}_{i}{\mathbf{\nu }}_{i} - \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\mathbf{R}}_{j}{\mathbf{\nu }}_{j} - \mathop{\sum }\limits_{{j = N + 1}}^{M}{a}_{ij}{\dot{\mathbf{\eta }}}_{jr}}\right) + {\mathbf{\mu }}_{i2}. \tag{21}
+$$
+
+From (21), the DTGPG-based guidance law is chosen as
+
+$$
+{\mathbf{\alpha }}_{i} = \frac{1}{{\iota }_{i}}{\mathbf{R}}_{i}^{-1}\left( {\mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\mathbf{R}}_{j}{\widehat{\mathbf{\nu }}}_{j} + \mathop{\sum }\limits_{{j = N + 1}}^{M}{a}_{ij}{\dot{\mathbf{\eta }}}_{jr} - {\mathbf{\mu }}_{i1}^{-1}\left( {{\mathbf{\kappa }}_{i1}{\mathcal{Z}}_{i} + {\mathbf{\mu }}_{i2}}\right) }\right) . \tag{22}
+$$
+
+We substitute (22) into (21), and it follows that
+
+$$
+{\dot{\mathcal{Z}}}_{i} = {\mathbf{\mu }}_{i1}\mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\mathbf{R}}_{j}{\widetilde{\mathbf{\nu }}}_{j} - {\mathbf{\kappa }}_{i1}{\mathcal{Z}}_{i} \tag{23}
+$$
+
+with ${\kappa }_{i1} \in {\mathbb{R}}^{3 \times 3}$ being a positive diagonal matrix.
+
+Differing from the first-order low-pass filtering method in the traditional DSC, a second-order linear tracking differentiator (LTD) with respect to ${\mathbf{\alpha }}_{i}$ is introduced
+
+$$
+\left\{ \begin{array}{l} {\dot{\mathbf{\alpha }}}_{if} = {\mathbf{\alpha }}_{if}^{ * } \\ {\dot{\mathbf{\alpha }}}_{if}^{ * } = - {\gamma }_{i}^{2}\left( {\left( {{\mathbf{\alpha }}_{if} - {\mathbf{\alpha }}_{i}}\right) + 2\left( {{\mathbf{\alpha }}_{if}^{ * }/{\gamma }_{i}}\right) }\right) \end{array}\right. \tag{24}
+$$
+
+where ${\mathbf{\alpha }}_{if}^{ * } \in {\mathbb{R}}^{3}$ is the filtered value of ${\dot{\mathbf{\alpha }}}_{i}$ , and ${\gamma }_{i} \in {\mathbb{R}}^{ + }$ .
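A forward-Euler sketch of the LTD (24), tracking a hypothetical ramp signal to illustrate that $\alpha_{if}$ follows $\alpha_i$ while $\alpha_{if}^{*}$ recovers its derivative (the gain and step size here are illustrative choices, not the paper's):

```python
def ltd_step(af, afs, alpha, gamma, dt):
    """One forward-Euler step of the second-order LTD of Eq. (24):
    af tracks alpha, afs estimates alpha-dot (critically damped, poles at -gamma)."""
    daf = afs
    dafs = -gamma**2 * ((af - alpha) + 2.0 * afs / gamma)
    return af + dt * daf, afs + dt * dafs

# Track a ramp alpha(t) = t: afs should converge to the true derivative 1,
# and af should lag the ramp by the steady-state offset 2/gamma.
af, afs, gamma, dt = 0.0, 0.0, 20.0, 1e-3
for k in range(5000):
    af, afs = ltd_step(af, afs, k * dt, gamma, dt)
```

Unlike the first-order low-pass filter of traditional DSC, this second-order structure delivers the derivative estimate $\alpha_{if}^{*}$ directly, which is what the control law (26) consumes.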
+
+Second, we design the control law. Defining the velocity error ${\mathcal{Z}}_{ie} = {\mathbf{\nu }}_{i} - {\mathbf{\alpha }}_{i} \in {\mathbb{R}}^{3}$, the derivative ${\dot{\mathcal{Z}}}_{ie}$ along (7) satisfies
+
+$$
+{\dot{\mathcal{Z}}}_{ie} = {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} + {\mathbf{\Lambda }}_{i} - {\dot{\mathbf{\alpha }}}_{i}. \tag{25}
+$$
+
+Then, we design the SESO-based control law to stabilize (25)
+
+$$
+{\mathbf{\tau }}_{i} = \frac{{\mathbf{M}}_{i}{\mathbf{J}}_{i}}{{r}_{i}}\left( {{\mathbf{\alpha }}_{if}^{ * } - {\widehat{\mathbf{\Lambda }}}_{i} - {\mathbf{\kappa }}_{i2}{\mathbf{\mathcal{Z}}}_{ie}}\right) \tag{26}
+$$
+
+with ${\kappa }_{i2} \in {\mathbb{R}}^{3 \times 3}$ being a positive diagonal matrix.
+
+The dynamics of ${\mathcal{Z}}_{ie}$ is further obtained by substituting (26) into (25)
+
+$$
+{\dot{\mathcal{Z}}}_{ie} = {\widetilde{\alpha }}_{i}^{ * } - {\widetilde{\Lambda }}_{i} - {\kappa }_{i2}{\mathcal{Z}}_{ie} \tag{27}
+$$
+
+where ${\widetilde{\mathbf{\alpha }}}_{i}^{ * } = {\mathbf{\alpha }}_{if}^{ * } - {\dot{\mathbf{\alpha }}}_{i}$ .
+
+From (23) and (27), we can obtain the following subsystems
+
+$$
+\left\{ \begin{array}{l} {\dot{\mathcal{Z}}}_{i} = {\mathbf{\mu }}_{i1}\mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\mathbf{R}}_{j}{\widetilde{\mathbf{\nu }}}_{j} - {\mathbf{\kappa }}_{i1}{\mathcal{Z}}_{i} \\ {\dot{\mathcal{Z}}}_{ie} = {\widetilde{\mathbf{\alpha }}}_{i}^{ * } - {\widetilde{\mathbf{\Lambda }}}_{i} - {\mathbf{\kappa }}_{i2}{\mathcal{Z}}_{ie}. \end{array}\right. \tag{28}
+$$
+
+Lemma 2: The system (28) is ISS.
+
+Proof: Consider a Lyapunov function candidate as ${V}_{2} = \left( {1/2}\right) \mathop{\sum }\limits_{{i = 1}}^{N}\left( {{\mathcal{Z}}_{i}^{T}{\mathcal{Z}}_{i} + {\mathcal{Z}}_{ie}^{T}{\mathcal{Z}}_{ie}}\right)$. The time derivative of ${V}_{2}$ along (28) satisfies
+
+$$
+{\dot{V}}_{2} \leq - {n}_{1}\parallel \mathcal{Z}{\parallel }^{2} - {n}_{2}{\begin{Vmatrix}{\mathcal{Z}}_{e}\end{Vmatrix}}^{2} + {n}_{3}{n}^{ * }\parallel \mathcal{Z}\parallel \parallel \widetilde{\mathbf{\nu }}\parallel + \begin{Vmatrix}{\mathcal{Z}}_{e}\end{Vmatrix}\begin{Vmatrix}{\widetilde{\mathbf{\alpha }}}^{ * }\end{Vmatrix} + \begin{Vmatrix}{\mathcal{Z}}_{e}\end{Vmatrix}\parallel \widetilde{\mathbf{\Lambda }}\parallel \tag{29}
+$$
+
+where ${n}_{1} = {\lambda }_{\min }\left( {\mathbf{\kappa }}_{1}\right)$ with ${\mathbf{\kappa }}_{1} = \operatorname{diag}\left\{ {{\mathbf{\kappa }}_{11},\ldots ,{\mathbf{\kappa }}_{N1}}\right\} \in {\mathbb{R}}^{{3N} \times {3N}}$, ${n}_{2} = {\lambda }_{\min }\left( {\mathbf{\kappa }}_{2}\right)$ with ${\mathbf{\kappa }}_{2} = \operatorname{diag}\left\{ {{\mathbf{\kappa }}_{12},\ldots ,{\mathbf{\kappa }}_{N2}}\right\} \in {\mathbb{R}}^{{3N} \times {3N}}$, ${n}_{3} = \mathop{\max }\limits_{{i = 1,\ldots , N}}\left( {{\lambda }_{\max }\left( {\mathbf{\mu }}_{i1}\right) }\right)$, and ${n}^{ * } = \mathop{\max }\limits_{{i = 1,\ldots , N}}\left( {n}_{i}^{ * }\right)$ with ${n}_{i}^{ * } = \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ji}$. Here $\mathcal{Z} = {\left\lbrack {\mathcal{Z}}_{1}^{T},\ldots ,{\mathcal{Z}}_{N}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$, ${\mathcal{Z}}_{e} = {\left\lbrack {\mathcal{Z}}_{1e}^{T},\ldots ,{\mathcal{Z}}_{Ne}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$, $\widetilde{\mathbf{\nu }} = {\left\lbrack {\widetilde{\mathbf{\nu }}}_{1}^{T},\ldots ,{\widetilde{\mathbf{\nu }}}_{N}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$, ${\widetilde{\mathbf{\alpha }}}^{ * } = {\left\lbrack {\widetilde{\mathbf{\alpha }}}_{1}^{*T},\ldots ,{\widetilde{\mathbf{\alpha }}}_{N}^{*T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$, and $\widetilde{\mathbf{\Lambda }} = {\left\lbrack {\widetilde{\mathbf{\Lambda }}}_{1}^{T},\ldots ,{\widetilde{\mathbf{\Lambda }}}_{N}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$.
+
+Define $n = \min \left( {{n}_{1},{n}_{2}}\right)$ and ${\mathcal{N}}_{2} = {\left\lbrack \parallel \mathcal{Z}\parallel ,\begin{Vmatrix}{\mathcal{Z}}_{e}\end{Vmatrix}\right\rbrack }^{T} \in {\mathbb{R}}^{2}$. Then (29) can be further put into
+
+$$
+{\dot{V}}_{2} \leq - n{\begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix}}^{2} + {n}_{3}{n}^{ * }\begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix}\parallel \widetilde{\mathbf{\nu }}\parallel + \begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix}\begin{Vmatrix}{\widetilde{\mathbf{\alpha }}}^{ * }\end{Vmatrix} + \begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix}\parallel \widetilde{\mathbf{\Lambda }}\parallel . \tag{30}
+$$
+
+Since $\begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix} \geq 2\left( {{n}_{3}{n}^{ * }\parallel \widetilde{\mathbf{\nu }}\parallel + \begin{Vmatrix}{\widetilde{\mathbf{\alpha }}}^{ * }\end{Vmatrix} + \parallel \widetilde{\mathbf{\Lambda }}\parallel }\right) /n$, one has ${\dot{V}}_{2} \leq - n{\begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix}}^{2}/2$. It follows that the subsystem (28) is ISS: there exist a $\mathcal{K}\mathcal{L}$ function ${\mathcal{Y}}_{2}\left( \cdot \right)$ and ${\mathcal{K}}_{\infty }$ functions ${\mathcal{C}}^{\widetilde{\mathbf{\nu }}}\left( \cdot \right)$, ${\mathcal{C}}^{{\widetilde{\mathbf{\alpha }}}^{ * }}\left( \cdot \right)$, and ${\mathcal{C}}^{\widetilde{\mathbf{\Lambda }}}\left( \cdot \right)$ satisfying $\begin{Vmatrix}{{\mathcal{N}}_{2}\left( t\right) }\end{Vmatrix} \leq {\mathcal{Y}}_{2}\left( {\begin{Vmatrix}{{\mathcal{N}}_{2}\left( 0\right) }\end{Vmatrix}, t}\right) + {\mathcal{C}}^{\widetilde{\mathbf{\nu }}}\left( {\parallel \widetilde{\mathbf{\nu }}\parallel }\right) + {\mathcal{C}}^{{\widetilde{\mathbf{\alpha }}}^{ * }}\left( \begin{Vmatrix}{\widetilde{\mathbf{\alpha }}}^{ * }\end{Vmatrix}\right) + {\mathcal{C}}^{\widetilde{\mathbf{\Lambda }}}\left( {\parallel \widetilde{\mathbf{\Lambda }}\parallel }\right)$, where ${\mathcal{C}}^{\widetilde{\mathbf{\nu }}}\left( s\right) = 2{n}_{3}{n}^{ * }s/n$, ${\mathcal{C}}^{{\widetilde{\mathbf{\alpha }}}^{ * }}\left( s\right) = {2s}/n$, and ${\mathcal{C}}^{\widetilde{\mathbf{\Lambda }}}\left( s\right) = {2s}/n$.
+
+
+
+Fig. 1. Circular formation using the proposed method.
+
+Theorem 1: For the multi-WMR system (1) with arbitrary initial conditions, the closed-loop system consisting of the SESO (10), the DTGPG-based guidance law (22), and the SESO-based control law (26) is ISS. Moreover, Zeno behavior is avoided.
+
+Proof: The ISS properties of the subsystems (14) and (28) are proven in Lemma 1 and Lemma 2, respectively. The states of the subsystem (14), $\widetilde{\mathbf{\nu }}$ and $\widetilde{\mathbf{\Lambda }}$, are inputs of the subsystem (28). Under Assumptions 1-2, according to the cascade stability theorem, the closed-loop system is ISS. This yields the ultimate bound of $\begin{Vmatrix}{{\mathcal{N}}_{2}\left( t\right) }\end{Vmatrix}$ as $t \rightarrow \infty$
+
+$$
+{\begin{Vmatrix}{\mathcal{N}}_{2}\left( t\right) \end{Vmatrix}}_{t \rightarrow \infty } \leq \frac{2\begin{Vmatrix}{\widetilde{\mathbf{\alpha }}}^{ * }\end{Vmatrix}}{n} + {\mathcal{H}}^{ * }\left( {\begin{Vmatrix}{\widetilde{\mathbf{\nu }}}_{s}\end{Vmatrix}\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{B}}_{1}}\end{Vmatrix} + \parallel \dot{\mathbf{\Lambda }}\parallel \begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{C}}_{1}}\end{Vmatrix}}\right) \tag{31}
+$$
+
+with ${\mathcal{H}}^{ * } = \left( {4\left( {{n}_{3}{n}^{ * } + 1}\right) \sqrt{{\lambda }_{\max }\left( {\mathbf{P}}_{1}\right) }}\right) /\left( {n{\jmath }_{1}{\sigma }_{1}\sqrt{{\lambda }_{\min }\left( {\mathbf{P}}_{1}\right) }}\right)$. The detailed proof that Zeno behavior is excluded can be found in [5]. The proof of Theorem 1 is complete.
+
+## IV. Simulation Results
+
+As shown in Fig. 1, we consider a communication topology consisting of three followers ${n}_{1}$, ${n}_{2}$, and ${n}_{3}$ and two virtual leaders ${n}_{4}$ and ${n}_{5}$ to verify the effectiveness of the proposed controller. The physical parameters of the WMR can be found in [10], and the external disturbance is chosen similarly to [16]. The initial values of the three followers are chosen as ${\mathbf{\eta }}_{1}\left( 0\right) = {\left\lbrack 0,0,3\pi /2\right\rbrack }^{T}$, ${\mathbf{\eta }}_{2}\left( 0\right) = {\left\lbrack 2, - {10},\pi /2\right\rbrack }^{T}$, and ${\mathbf{\eta }}_{3}\left( 0\right) = {\left\lbrack 2, - {17},4\pi /3\right\rbrack }^{T}$. The trajectories of the two virtual leaders are chosen as
+
+$$
+\left\{ \begin{array}{l} {\mathbf{\eta }}_{4r} = {\left\lbrack -5\sin \left( {0.2}t\right) , - 5\cos \left( {0.2}t\right) ,\operatorname{atan}2\left( {\dot{\eta }}_{4y},{\dot{\eta }}_{4x}\right) \right\rbrack }^{T} \\ {\mathbf{\eta }}_{5r} = {\left\lbrack -{15}\sin \left( {0.2}t\right) , - {15}\cos \left( {0.2}t\right) ,\operatorname{atan}2\left( {\dot{\eta }}_{5y},{\dot{\eta }}_{5x}\right) \right\rbrack }^{T}. \end{array}\right.
+$$
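These leader trajectories are circles of radii 5 and 15 traversed at 0.2 rad/s, with the heading taken along the velocity direction; a small sketch:

```python
import math

def leader(t, radius):
    """Virtual-leader pose on a circle of the given radius (angular rate 0.2 rad/s),
    with heading obtained from the velocity components via atan2."""
    x = -radius * math.sin(0.2 * t)
    y = -radius * math.cos(0.2 * t)
    xdot = -0.2 * radius * math.cos(0.2 * t)   # analytic derivative of x
    ydot = 0.2 * radius * math.sin(0.2 * t)    # analytic derivative of y
    return x, y, math.atan2(ydot, xdot)
```

Sampling `leader(t, 5.0)` and `leader(t, 15.0)` reproduces $\mathbf{\eta}_{4r}$ and $\mathbf{\eta}_{5r}$, respectively.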
+
+The main design parameters are set as ${\kappa }_{11} = \operatorname{diag}\{ {12},7,{10}\}$, ${\kappa }_{21} = \operatorname{diag}\{ 7,7,{10}\}$, ${\kappa }_{31} = \operatorname{diag}\{ {12},9,{10}\}$, ${\kappa }_{i2} = \operatorname{diag}\{ {20},{20},{20}\}$, ${\varepsilon }_{i1} = \operatorname{diag}\{ 2,2,2\}$, ${\varepsilon }_{i2} = \operatorname{diag}\{ {40},{40},{40}\}$, ${T}_{{1x}, a} = {T}_{{1\psi }, a} = {T}_{{2x}, a} = {T}_{{2\psi }, a} = {T}_{{3x}, a} = {T}_{{3\psi }, a} = {0.5}$, ${T}_{{1y}, a} = {T}_{{2y}, a} = {T}_{{3y}, a} = 1$, ${T}_{{1x}, b} = {T}_{{2x}, b} = {T}_{{3x}, b} = {0.7}$, ${T}_{{1y}, b} = {T}_{{2y}, b} = {T}_{{3y}, b} = {1.2}$, ${T}_{{1\psi }, b} = {T}_{{2\psi }, b} = {T}_{{3\psi }, b} = {1.5}$, ${\omega }_{ik} = {0.7}$, ${\Theta }_{{ik},\infty } = {0.9}$, ${\varrho }_{ik} = 2$, ${l}_{ik} = {10}$, and ${\mathcal{X}}_{1} = {\mathcal{X}}_{2} = {\mathcal{X}}_{3} = {0.06}$.
+
+Fig. 2. Tracking errors using the DTGPG.
+
+Fig. 3. The estimated disturbances using the SESO.
+
+Fig. 4. The number of triggering events.
+
+Simulation results are depicted in Figs. 1-4. Fig. 1 shows the three WMRs forming a circular formation guided by the two virtual leaders. Fig. 2 shows that, under the proposed DTGPG control scheme, the tracking profile is not constrained by the initial values and the performance boundaries can be dynamically adjusted. Fig. 3 shows that the SESO not only estimates the internal uncertainties and external disturbances but also reduces chattering. Fig. 4 shows the number of triggering events: ${\nu }_{1}^{ \star }$, ${\nu }_{2}^{ \star }$, and ${\nu }_{3}^{ \star }$ are triggered 179, 213, and 211 times, respectively. Compared with periodic time-triggered sampling (2800 times), this effectively saves resources.
+
+## V. Conclusion
+
+In this paper, the dynamic threshold global prescribed performance formation control problem was investigated for WMRs in the presence of unknown total disturbances. A dynamic threshold global performance-guaranteed formation control method based on the SESO was proposed, which has three advantages: 1) it can perform a secondary adjustment of the steady-state performance boundary; 2) it removes the initial-value constraints present in standard PPC; and 3) it mitigates the chattering problem of the event-triggered ESO. The cascade system consisting of the SESO, the DTGPG-based guidance law, and the SESO-based control law was proved to be ISS. The main results were demonstrated by simulation examples.
+
+## REFERENCES
+
+[1] S.-L. Dai, S. He, X. Chen, and X. Jin, "Adaptive leader-follower formation control of nonholonomic mobile robots with prescribed transient and steady-state performance," IEEE Transactions on Industrial Informatics, vol. 16, no. 6, pp. 3662-3671, 2019.
+
+[2] S. Chang, Y. Wang, Z. Zuo, and H. Yang, "Fixed-time formation control for wheeled mobile robots with prescribed performance," IEEE Transactions on Control Systems Technology, vol. 30, no. 2, pp. 844-851, 2021.
+
+[3] S.-L. Dai, K. Lu, and X. Jin, "Fixed-time formation control of unicycle-type mobile robots with visibility and performance constraints," IEEE Transactions on Industrial Electronics, vol. 68, no. 12, pp. 12615-12625, 2020.
+
+[4] L. Liu, D. Wang, and Z. Peng, "State recovery and disturbance estimation of unmanned surface vehicles based on nonlinear extended state observers," Ocean Engineering, vol. 171, pp. 625-632, 2019.
+
+[5] C. Wang, D. Wang, and Z. Peng, "Distributed output-feedback control of unmanned container transporter platooning with uncertainties and disturbances using event-triggered mechanism," IEEE Transactions on Vehicular Technology, vol. 71, no. 1, pp. 162-170, 2021.
+
+[6] J. Li, J. Du, and C. P. Chen, "Command-filtered robust adaptive NN control with the prescribed performance for the 3-D trajectory tracking of underactuated AUVs," IEEE Transactions on Neural Networks and Learning Systems, vol. 33, no. 11, pp. 6545-6557, 2021.
+
+[7] W. Wu, R. Ji, W. Zhang, and Y. Zhang, "Transient-reinforced tunnel coordinated control of underactuated marine surface vehicles with actuator faults," IEEE Transactions on Intelligent Transportation Systems, vol. 25, no. 2, pp. 1872-1881, 2024.
+
+[8] D. Wu, Y. Zhang, W. Wu, E. Q. Wu, and W. Zhang, "Tunnel prescribed performance control for distributed path maneuvering of multi-UAV swarms via distributed neural predictor," IEEE Transactions on Circuits and Systems II: Express Briefs, 2024, doi:10.1109/TCSII.2024.3371981.
+
+[9] W. Wu, D. Wu, Y. Zhang, S. Chen, and W. Zhang, "Safety-critical trajectory tracking for mobile robots with guaranteed performance," IEEE/CAA Journal of Automatica Sinica, 2024, doi:10.1109/JAS.2023.123864.
+
+[10] D. Yu, C. P. Chen, and H. Xu, "Fuzzy swarm control based on sliding-mode strategy with self-organized omnidirectional mobile robots system," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 52, no. 4, pp. 2262-2274, 2021.
+
+[11] Z. Peng, J. Wang, and D. Wang, "Distributed containment maneuvering of multiple marine vessels via neurodynamics-based output feedback," IEEE Transactions on Industrial Electronics, vol. 64, no. 5, pp. 3831-3839, 2017.
+
+[12] K. Zhao, Y. Song, C. P. Chen, and L. Chen, "Adaptive asymptotic tracking with global performance for nonlinear systems with unknown control directions," IEEE Transactions on Automatic Control, vol. 67, no. 3, pp. 1566-1573, 2021.
+
+[13] X. Liu, H. Zhang, J. Sun, and X. Guo, "Dynamic threshold finite-time prescribed performance control for nonlinear systems with dead-zone output," IEEE Transactions on Cybernetics, vol. 54, no. 1, pp. 655-664, 2023.
+
+[14] T.-S. Li, D. Wang, G. Feng, and S.-C. Tong, "A DSC approach to robust adaptive NN tracking control for strict-feedback nonlinear systems," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 40, no. 3, pp. 915-927, 2009.
+
+[15] Y. Zhang, W. Wu, W. Chen, H. Lu, and W. Zhang, "Output-feedback consensus maneuvering of uncertain MIMO strict-feedback multiagent systems based on a high-order neural observer," IEEE Transactions on Cybernetics, 2024, doi:10.1109/TCYB.2024.3351476.
+
+[16] T. Zhao, X. Zou, and S. Dian, "Fixed-time observer-based adaptive fuzzy tracking control for Mecanum-wheel mobile robots with guaranteed transient performance," Nonlinear Dynamics, pp. 1-17, 2022.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/7LL9KbT9ro/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/7LL9KbT9ro/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..06671463302b17c142f04e7f56f6ad5d79677157
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/7LL9KbT9ro/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,380 @@
+§ DYNAMIC THRESHOLD GLOBAL PERFORMANCE-GUARANTEED FORMATION CONTROL FOR WHEELED MOBILE ROBOTS WITH SMOOTH EXTENDED STATE OBSERVER
+
+${1}^{\text{ st }}$ Minjing Wang
+
+School of Information and Communication Engineering
+
+Hainan University
+
+Haikou, China
+
+mjwang@hainanu.edu.cn
+
+${2}^{\text{ nd }}$ Di Wu
+
+School of Information and Communication Engineering
+
+Hainan University
+
+Haikou, China
+
+hainuwudi@hainanu.edu.cn
+
+${3}^{\text{ rd }}$ Yibo Zhang
+
+Department of Automation
+
+Shanghai Jiao Tong University
+
+Shanghai, China
+
+zhang297@sjtu.edu.cn
+
+${4}^{\text{ th }}$ Wenlong Feng
+
+School of Information and Communication Engineering
+
+Hainan University
+
+Haikou, China
+
+fwlfwl@163.com
+
+Abstract-In this paper, a dynamic threshold global performance-guaranteed formation control method is proposed for wheeled mobile robots (WMRs). Unlike existing prescribed performance formation control methods, which are constrained by initial values, we design a dynamic threshold global performance-guaranteed (DTGPG) function that removes the initial value constraints while allowing a secondary adjustment of the steady-state performance boundaries. Moreover, we design a smooth extended state observer (SESO) based on a sigmoid-like function to mitigate the chattering problem of the existing event-triggered ESO. A DTGPG-based guidance law and a SESO-based control law are then designed to implement the formation control. The proof shows that the total closed-loop system is input-to-state stable (ISS). Simulations confirm the benefits and validity of the proposed control methodology.
+
+Index Terms-WMRs, dynamic threshold global performance-guaranteed function, formation control, SESO
+
+§ I. INTRODUCTION
+
+Formation control of multiple wheeled mobile robots (WMRs) places extremely high demands on transient and steady-state performance. In the transient phase, small overshoots and fast convergence help avoid collisions between WMRs. In the steady-state phase, high-accuracy tracking significantly improves overall coordination and task execution efficiency. It is therefore crucial to prescribe the performance of the multi-WMR system. In [1], a collision avoidance prescribed performance control (PPC) method is proposed for WMR formations, which guarantees the performance of the multi-WMR system by adding communication limits and collision limits to the prescribed performance function. In [2], a fixed-time performance-guaranteed formation control problem for multi-WMRs is investigated, which achieves fixed-time convergence by introducing a segmented time-varying function into the performance function. In [3], a field-of-view constrained performance-guaranteed formation control method is proposed for multi-WMRs, which designs a guaranteed performance function that maintains leader-follower distances to avoid collisions. Although the above works [1]-[3] effectively improve the performance of multi-WMRs, two points still need improvement: 1) they are all subject to initial conditions, which increases human intervention in practical applications, i.e., the starting positions of the WMRs must be computed in advance; 2) standard PPC cannot perform a secondary adjustment of the performance boundaries after reaching the steady state.
+
+On the other hand, when performing tasks in complex environments, frozen and uneven road surfaces are usually encountered. These disturbances may affect the stability of WMR formations. Therefore, quickly and accurately estimating the external disturbances is also crucial. In [4], a nonlinear extended state observer (ESO) is proposed to estimate the external disturbance, which recovers the velocity and estimates the external disturbance through position and heading errors; a finite-time ESO is then designed to improve the estimation rate. In [5], an event-triggered ESO is designed to adjust the allocation of resources. Note that the event-triggered ESO [5] can save resources when estimating disturbances but inevitably suffers from chattering.
+
+Inspired by the aforementioned observations, we propose a dynamic threshold global performance-guaranteed (DTGPG) formation control method for WMRs with a smooth extended state observer (SESO). The key contributions of this work are as follows. Unlike the standard PPC methods in [6] and the tunnel prescribed performance methods in [7]-[9], the proposed DTGPG solves the initial value constraint problem and permits a secondary adjustment of the steady-state performance bounds. In contrast to the event-triggered ESO [5], we design the SESO to mitigate chattering by introducing a sigmoid-like function that smooths the estimation error. The total closed-loop system is proved to be input-to-state stable (ISS). Some of the symbols used in this paper are defined in Table I.
+
+This work was supported in part by the "South China Sea Rising Star" Education Platform Foundation of Hainan Province (JYNHXX2023-17G) and the Natural Science Foundation of Hainan Province (624MS036). (Corresponding author: Di Wu)
+
+TABLE I
+
+SYMBOL DEFINITION
+
+Symbol : Definition
+
+${\mathbb{R}}^{n}$ : $n$-dimensional Euclidean space
+
+${\mathbb{R}}^{ + }$ : positive real space
+
+$\parallel \cdot \parallel$ : Euclidean norm
+
+$\operatorname{diag}\{ \cdots \}$ : block-diagonal matrix
+
+${\lambda }_{\max }\left( \cdot \right)$ : maximum eigenvalue of a matrix
+
+${\lambda }_{\min }\left( \cdot \right)$ : minimum eigenvalue of a matrix
+
+$\operatorname{sgn}\left( \cdot \right)$ : sign function
+
+$\exp \left( \cdot \right)$ : exponential function
+
+$\operatorname{col}\left( \cdot \right)$ : column vector
+
+§ II. PRELIMINARIES AND PROBLEM STATEMENT
+
+§ A. GRAPH THEORY
+
+To describe the communication among the virtual leader and WMRs, a directed graph is described as $\mathcal{G} = \{ \mathcal{V},\mathcal{M}\}$ . $\mathcal{V} = \left\{ {{n}_{1},\ldots ,{n}_{M}}\right\}$ and $\mathcal{M} = \left\{ {\left( {{n}_{i},{n}_{j}}\right) \in \mathcal{V} \times \mathcal{V}}\right\}$ represent a vertex set and an edge set, respectively. An adjacency matrix associated with $\mathcal{G}$ is defined as $\mathcal{A} = \left\lbrack {a}_{ij}\right\rbrack \in {\mathbb{R}}^{M \times M}$ . Correspondingly, a degree matrix connected with $\mathcal{G}$ is characterized as $\mathcal{D} = \operatorname{diag}\left\{ {d}_{i}\right\} \in {\mathbb{R}}^{M \times M}$ with ${d}_{i} = \mathop{\sum }\limits_{{j = 1}}^{M}{a}_{ij}$ . Additionally, a Laplacian matrix associated with $\mathcal{G}$ is defined as $\mathcal{L} = \mathcal{D} - \mathcal{A}$ . Note that here $i = 1,\ldots ,M,j = 1,\ldots ,M$ .
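These graph matrices are mechanical to construct. A minimal sketch (the 3-agent topology below is assumed for illustration, not the paper's exact graph):

```python
import numpy as np

# a_ij = 1 if agent n_i receives information from agent n_j (assumed topology)
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])

D = np.diag(A.sum(axis=1))  # degree matrix, d_i = sum_j a_ij
L = D - A                   # Laplacian matrix

# Each row of L sums to zero by construction of D
print(L)
print("row sums:", L.sum(axis=1))
```

The zero row sums are what make the Laplacian suitable for consensus-type errors: a common state for all agents lies in its null space.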
+
+§ B. PROBLEM STATEMENT
+
+Suppose that there exist $N$ followers, labeled as agents ${n}_{1}$ to ${n}_{N}$ , and $M - N$ leaders, labeled as agents ${n}_{N + 1}$ to ${n}_{M}$ , under a communication topology graph. A group of followers consisting of $N$ wheeled mobile robots is modelled as follows
+
+$$
+\begin{cases} {\dot{\mathbf{\eta }}}_{i} & = {\mathbf{R}}_{i}{\mathbf{\nu }}_{i} \\ {\dot{\mathbf{\nu }}}_{i} & = {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} + {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\mathcal{T}}}_{i} \\ & - {D}_{i\theta }{r}_{i}^{2}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{J}}_{i}{\mathbf{R}}_{i}^{-1}{\dot{\mathbf{\eta }}}_{i} - {\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\mathcal{F}}}_{i}{r}_{i}^{2} \end{cases} \tag{1}
+$$
+
+where $i = 1,\ldots ,N$ . ${\mathbf{\eta }}_{i} = {\left\lbrack {x}_{i},{y}_{i},{\psi }_{i}\right\rbrack }^{T} \in {\mathbb{R}}^{3}$ denotes the position and yaw angle. ${\mathbf{\nu }}_{i} = {\left\lbrack {u}_{i},{v}_{i},{w}_{i}\right\rbrack }^{T} \in {\mathbb{R}}^{3}$ denotes the velocity vector. ${\mathbf{\tau }}_{i} = {\left\lbrack {\tau }_{i1},{\tau }_{i2},{\tau }_{i3},{\tau }_{i4}\right\rbrack }^{T} \in {\mathbb{R}}^{4}$ denotes the control input. ${\mathcal{T}}_{i} = {\left\lbrack {\mathcal{T}}_{i1},{\mathcal{T}}_{i2},{\mathcal{T}}_{i3},{\mathcal{T}}_{i4}\right\rbrack }^{T} \in {\mathbb{R}}^{4}$ denotes the external disturbance. The kinetic parameters and matrices of this WMR can be found in [10]. ${\mathbf{J}}_{i} \in {\mathbb{R}}^{4 \times 3}$ and ${\mathbf{J}}_{i}^{ + } \in {\mathbb{R}}^{3 \times 4}$ satisfy ${\mathbf{J}}_{i}^{ + }{\mathbf{J}}_{i} = {\mathbf{I}}_{3}$ .
+
+Assumption 1: The graph $\mathcal{G}$ contains a spanning tree with the virtual leader as the root node.
+
+§ C. DYNAMIC THRESHOLD GLOBAL PERFORMANCE-GUARANTEED AND BARRIER FUNCTION
+
+We define the distributed error as follows
+
+$$
+{\mathbf{E}}_{i} = \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}\left( {{\mathbf{\eta }}_{i} - {\mathbf{\eta }}_{j}}\right) + \mathop{\sum }\limits_{{j = N + 1}}^{M}{a}_{ij}\left( {{\mathbf{\eta }}_{i} - {\mathbf{\eta }}_{jr}}\right) \tag{2}
+$$
+
+where ${\mathbf{\eta }}_{jr} = {\left\lbrack {\eta }_{jx},{\eta }_{jy},{\eta }_{j\psi }\right\rbrack }^{T} \in {\mathbb{R}}^{3}$ represents the trajectory of the virtual leader. The coefficient ${a}_{ij}$ is defined in [11]. To ensure that the developed control is free from the influence of initial conditions and can dynamically adjust prescribed thresholds, the error is constrained within the following prescribed regions
+
+$$
+{\mathcal{I}}_{ik}\left( {-{\mathcal{W}}_{ik}}\right) \leq {E}_{ik} \leq {\mathcal{I}}_{ik}\left( {\mathcal{W}}_{ik}\right) ,\;k = x,y,\psi \tag{3}
+$$
+
+where ${\mathcal{I}}_{ik}\left( {\mathcal{W}}_{ik}\right)$ is a dynamic threshold global performance-guaranteed (DTGPG) function similar to that in [12], defined as follows
+
+$$
+{\mathcal{I}}_{ik}\left( {\mathcal{W}}_{ik}\right) = \frac{\sqrt{{l}_{ik}}{\mathcal{W}}_{ik}}{\sqrt{1 - {\mathcal{W}}_{ik}^{2}}} \tag{4}
+$$
+
+with ${\mathcal{W}}_{ik} = 1/{\mathcal{P}}_{ik}$ , where ${\mathcal{P}}_{ik}$ is a dynamic threshold finite-time prescribed function similar to that in [13]
+
+$$
+{\mathcal{P}}_{ik}\left( t\right) = \left\{ \begin{array}{ll} \left( {1 - {\Theta }_{{ik},\infty }}\right) \exp \left( {-{\varrho }_{ik}\frac{{T}_{{ik},a}}{{T}_{{ik},a} - t}}\right) + {\Theta }_{{ik},\infty },0 \leq t < {T}_{{ik},a} & \\ {\Theta }_{{ik},\infty }\left( {1 - \frac{{\omega }_{ik}}{2} + \frac{{\omega }_{ik}}{2}\cos \left( {\frac{\pi }{{c}_{ik}}\left( {t - {T}_{{ik},a}}\right) }\right) }\right) ,{T}_{{ik},a} \leq t < {T}_{{ik},b} & \\ {\Theta }_{{ik},\infty }\left( {1 - {\omega }_{ik}}\right) , & t \geq {T}_{{ik},b} \end{array}\right. \tag{5}
+$$
+
+where ${l}_{ik}$ and ${\omega }_{ik}$ are positive constants. ${\Theta }_{{ik},\infty }$ is the first steady-state value, reached at $t = {T}_{{ik},a}$ . ${\varrho }_{ik} > 0$ represents the convergence rate. ${T}_{{ik},a}$ is the settling time to reach the steady state, and ${c}_{ik} = {T}_{{ik},b} - {T}_{{ik},a}$ is the duration of the dynamic adjustment.
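A small numerical sketch of (5) (with illustrative parameter values only: $\Theta_{\infty} = 0.9$, $\varrho = 2$, $\omega = 0.7$, $T_a = 0.5$, $T_b = 1.2$) confirms that the three pieces join continuously at $T_a$ and $T_b$ and that the boundary is tightened a second time after the steady state is reached:

```python
import math

def P(t, Theta=0.9, rho=2.0, Ta=0.5, Tb=1.2, omega=0.7):
    """Dynamic threshold finite-time prescribed function of Eq. (5)."""
    c = Tb - Ta  # duration of the dynamic adjustment
    if t < Ta:    # finite-time decay toward Theta
        return (1 - Theta) * math.exp(-rho * Ta / (Ta - t)) + Theta
    elif t < Tb:  # secondary (cosine) adjustment of the boundary
        return Theta * (1 - omega / 2 + (omega / 2) * math.cos(math.pi * (t - Ta) / c))
    else:         # final, tightened boundary
        return Theta * (1 - omega)

for t in (0.0, 0.25, 0.5, 0.85, 1.2, 2.0):
    print(f"P({t:.2f}) = {P(t):.4f}")
```

Note that the first piece reaches exactly $\Theta_{\infty}$ as $t \to T_a$, and the cosine piece carries the boundary from $\Theta_{\infty}$ down to $\Theta_{\infty}(1-\omega)$, matching the third piece at $T_b$.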
+
+Then, we employ the following barrier function to implement the error constraint in (3)
+
+$$
+{\mathcal{Z}}_{ik} = \frac{{\mathcal{J}}_{ik}}{1 - {\mathcal{J}}_{ik}^{2}} \tag{6}
+$$
+
+where ${\mathcal{J}}_{ik} = {\mathcal{P}}_{ik}{\mathcal{H}}_{ik}$ with ${\mathcal{H}}_{ik} = {E}_{ik}/\sqrt{{E}_{ik}^{2} + {l}_{ik}}$ . The properties of this barrier function are described in [12].
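The transformation (6) can be checked numerically. Since $\left|\mathcal{H}_{ik}\right| < 1$ for any error, we have $\left|\mathcal{J}_{ik}\right| \leq \mathcal{P}_{ik} < 1$, so the mapping stays finite for arbitrarily large initial errors, which is precisely the "global" property. A minimal sketch with illustrative values $\mathcal{P}_{ik} = 0.9$ and $l_{ik} = 10$:

```python
import math

def barrier(E, P=0.9, l=10.0):
    """Barrier function of Eq. (6): Z = J/(1 - J^2),
    with H = E/sqrt(E^2 + l) and J = P*H."""
    H = E / math.sqrt(E**2 + l)
    J = P * H
    return J / (1 - J**2)

# Z is odd, increasing, zero at E = 0, and finite even for huge errors
for E in (0.0, 1.0, 5.0, 1e6):
    print(f"Z({E:g}) = {barrier(E):.4f}")
```

As $E \to \infty$ the transformed error saturates at $\mathcal{P}/(1-\mathcal{P}^{2})$, so no restriction on the initial error is needed, unlike standard PPC.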
+
+§ III. CONTROLLER DESIGN AND ANALYSIS
+
+§ A. SMOOTH EXTENDED STATE OBSERVER
+
+To facilitate the subsequent design, define ${\mathbf{\Lambda }}_{i} = {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathcal{T}}_{i} - {D}_{i\theta }{r}_{i}^{2}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{J}}_{i}{\mathbf{R}}_{i}^{-1}{\dot{\mathbf{\eta }}}_{i} - {\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathcal{F}}_{i}{r}_{i}^{2}$ as the internal uncertainty and external disturbances acting on the $i$ th WMR. Then (1) can be reformulated as
+
+$$
+\left\{ \begin{array}{l} {\dot{\mathbf{\eta }}}_{i} = {\mathbf{R}}_{i}{\mathbf{\nu }}_{i} \\ {\dot{\mathbf{\nu }}}_{i} = {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} + {\mathbf{\Lambda }}_{i}. \end{array}\right. \tag{7}
+$$
+
+Assumption 2: For the multi-WMR system, the unknown total disturbance ${\mathbf{\Lambda }}_{i}$ is smooth and continuous.
+
+Then, we regard the total disturbance ${\mathbf{\Lambda }}_{i}$ as an extended state. To avoid unnecessary waste of resources when approximating the disturbances, an event-triggered ESO is designed following [5]
+
+$$
+\left\{ \begin{array}{l} {\widetilde{\mathbf{\nu }}}_{i}^{s} = {\widehat{\mathbf{\nu }}}_{i} - {\mathbf{\nu }}_{i}^{ \star } \\ {\dot{\widehat{\mathbf{\nu }}}}_{i} = - {\varepsilon }_{i1}{\widetilde{\mathbf{\nu }}}_{i}^{s} + {\widehat{\mathbf{\Lambda }}}_{i} + {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} \\ {\dot{\widehat{\mathbf{\Lambda }}}}_{i} = - {\varepsilon }_{i2}{\widetilde{\mathbf{\nu }}}_{i}^{s} \end{array}\right. \tag{8}
+$$
+
+where ${\varepsilon }_{i1}$ and ${\varepsilon }_{i2} \in {\mathbb{R}}^{3 \times 3}$ denote positive diagonal matrices. The variables ${\widehat{\mathbf{\nu }}}_{i} = {\left\lbrack {\widehat{u}}_{i},{\widehat{v}}_{i},{\widehat{w}}_{i}\right\rbrack }^{T} \in {\mathbb{R}}^{3}$ and ${\widehat{\mathbf{\Lambda }}}_{i} = {\left\lbrack {\widehat{\Lambda }}_{iu},{\widehat{\Lambda }}_{iv},{\widehat{\Lambda }}_{iw}\right\rbrack }^{T} \in$ ${\mathbb{R}}^{3}$ denote the estimates of ${\mathbf{\nu }}_{i}$ and ${\mathbf{\Lambda }}_{i}$ , respectively. ${\mathbf{\nu }}_{i}^{ \star } \in {\mathbb{R}}^{3}$ represents the aperiodic sampling of ${\mathbf{\nu }}_{i}$ . The event-triggered mechanism is defined as
+
+$$
+\left\{ \begin{array}{l} {\mathbf{\nu }}_{i}^{ \star }\left( t\right) = {\mathbf{\nu }}_{i}\left( {t}_{\varpi }^{{\nu }_{i}}\right) ,\forall t \in \left\lbrack {{t}_{\varpi }^{{\nu }_{i}},{t}_{\varpi + 1}^{{\nu }_{i}}}\right) ,{\widetilde{\mathbf{\nu }}}_{is}\left( t\right) = {\mathbf{\nu }}_{i}^{ \star }\left( t\right) - {\mathbf{\nu }}_{i}\left( t\right) \\ {t}_{\varpi + 1}^{{\nu }_{i}} = \inf \left\{ {t \in \mathbb{R} \mid \begin{Vmatrix}{{\widetilde{\mathbf{\nu }}}_{is}\left( t\right) }\end{Vmatrix} \geq {\mathcal{X}}_{i}}\right\} \end{array}\right. \tag{9}
+$$
+
+where ${\mathcal{X}}_{i} \in {\mathbb{R}}^{ + }$ denotes the event triggering threshold, and ${\widetilde{\mathbf{\nu }}}_{is}\left( t\right)$ denotes the aperiodic sampling error. When $\begin{Vmatrix}{{\widetilde{\mathbf{\nu }}}_{is}\left( t\right) }\end{Vmatrix} \geq$ ${\mathcal{X}}_{i}$ , update ${\nu }_{i}^{ \star }\left( t\right)$ ; otherwise, maintain the last updated value.
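The trigger rule (9) is easy to replay offline. The sketch below (a synthetic velocity trace and the threshold value 0.06, both illustrative) shows that the held sample never deviates from the true signal by more than the threshold, while requiring far fewer updates than periodic sampling:

```python
import numpy as np

def event_triggered(nu, threshold):
    """Replay the rule of Eq. (9): hold nu_star until ||nu_star - nu|| >= threshold."""
    nu_star = nu[0].copy()
    held, triggers = [], 0
    for v in nu:
        if np.linalg.norm(nu_star - v) >= threshold:
            nu_star = v.copy()  # triggering event: update the sampled value
            triggers += 1
        held.append(nu_star.copy())
    return np.array(held), triggers

# Synthetic, slowly varying 3-D velocity trace (illustrative)
t = np.linspace(0.0, 10.0, 2800)
nu = np.stack([np.sin(0.5 * t), np.cos(0.5 * t), 0.1 * t], axis=1)

held, triggers = event_triggered(nu, threshold=0.06)
print(triggers, "triggers out of", len(t), "periodic samples")
```

By construction, the aperiodic sampling error is reset to zero at every triggering instant and stays below the threshold in between.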
+
+Remark 1: In addition to using ESO to estimate the external disturbances, the neural network [14] and the neural predictor [15] also achieve the same objective.
+
+Existing ESO based on event-triggered mechanism [5] suffers from unavoidable chattering when approximating the disturbances. To solve the chattering problem, we design the SESO as follows
+
+$$
+\left\{ \begin{array}{l} {\dot{\widehat{\mathbf{\nu }}}}_{i} = - {\varepsilon }_{i1}{\widetilde{\mathbf{\nu }}}_{i}^{s} + {\widehat{\mathbf{\Lambda }}}_{i} + {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} \\ {\dot{\widehat{\mathbf{\Lambda }}}}_{i} = - {\varepsilon }_{i2}\mathcal{B}\left( {\widetilde{\mathbf{\nu }}}_{i}^{s}\right) \end{array}\right. \tag{10}
+$$
+
+where $\mathcal{B}\left( {\widetilde{\mathbf{\nu }}}_{i}^{s}\right) = \operatorname{col}\left( {\mathcal{B}\left( {\widetilde{\nu }}_{i\Xi }^{s}\right) }\right) \in {\mathbb{R}}^{3}$ , $\Xi = u,v,w$ , is the sigmoid-like function vector, defined as follows
+
+$$
+\mathcal{B}\left( {\widetilde{\nu }}_{i\Xi }^{s}\right) = \left\{ \begin{array}{ll} \frac{1 - \exp \left( {-\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }\right) }{1 + \exp \left( {-\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }\right) }\frac{{\widetilde{\nu }}_{i\Xi }^{s}}{\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }, & {\widetilde{\nu }}_{i\Xi }^{s} \neq 0 \\ {\widetilde{\nu }}_{i\Xi }^{s}, & {\widetilde{\nu }}_{i\Xi }^{s} = 0. \end{array}\right. \tag{11}
+$$
+
+Next, to facilitate the stability analysis of the SESO, define a positive-definite diagonal matrix ${\mathcal{V}}_{i} = \operatorname{diag}\left\{ {\mathcal{V}}_{i\Xi }\right\} \in {\mathbb{R}}^{3 \times 3}$ with
+
+$$
+{\mathcal{V}}_{i\Xi } = \left\{ \begin{array}{ll} \frac{1 - \exp \left( {-\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }\right) }{1 + \exp \left( {-\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }\right) }\frac{1}{\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }, & {\widetilde{\nu }}_{i\Xi }^{s} \neq 0 \\ 1, & {\widetilde{\nu }}_{i\Xi }^{s} = 0. \end{array}\right. \tag{12}
+$$
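Both (11) and (12) are straightforward to verify numerically. For $x \neq 0$ the map $\mathcal{B}(x)$ equals $\tanh(x/2)$, a smooth, odd function bounded in $(-1, 1)$, in contrast to the discontinuous $\operatorname{sgn}(x)$ that causes chattering; the gain (12) satisfies $\mathcal{B}(x) = \mathcal{V}(x)\,x$ by construction. A minimal sketch:

```python
import math

def B(x):
    """Sigmoid-like function of Eq. (11); equals tanh(x/2) for x != 0."""
    if x == 0.0:
        return 0.0
    s = (1 - math.exp(-abs(x))) / (1 + math.exp(-abs(x)))
    return s * (x / abs(x))  # restore the sign of x

def V(x):
    """Gain of Eq. (12), chosen so that B(x) = V(x) * x."""
    if x == 0.0:
        return 1.0
    return (1 - math.exp(-abs(x))) / ((1 + math.exp(-abs(x))) * abs(x))

for x in (-2.0, -0.1, 0.0, 0.1, 2.0):
    print(f"B({x:+.1f}) = {B(x):+.4f}  (tanh(x/2) = {math.tanh(x / 2):+.4f})")
```

The identity $\mathcal{B}(x) = \mathcal{V}(x)\,x$ is what allows (10) to be rewritten in the linear-in-error form (13).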
+
+Then (10) can be rewritten as
+
+$$
+\left\{ \begin{array}{l} {\dot{\widehat{\mathbf{\nu }}}}_{i} = - {\mathbf{\varepsilon }}_{i1}{\widetilde{\mathbf{\nu }}}_{i} + {\mathbf{\varepsilon }}_{i1}{\widetilde{\mathbf{\nu }}}_{is} + {\widehat{\mathbf{\Lambda }}}_{i} + {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} \\ {\dot{\widehat{\mathbf{\Lambda }}}}_{i} = - {\mathbf{\varepsilon }}_{i2}{\mathbf{V}}_{i}{\widetilde{\mathbf{\nu }}}_{i} + {\mathbf{\varepsilon }}_{i2}{\mathbf{V}}_{i}{\widetilde{\mathbf{\nu }}}_{is} \end{array}\right. \tag{13}
+$$
+
+where ${\widetilde{\mathbf{\nu }}}_{i} = {\widehat{\mathbf{\nu }}}_{i} - {\mathbf{\nu }}_{i}$ and ${\widetilde{\mathbf{\Lambda }}}_{i} = {\widehat{\mathbf{\Lambda }}}_{i} - {\mathbf{\Lambda }}_{i}$ . Defining ${\mathcal{N}}_{i1} = {\left\lbrack {\widetilde{\mathbf{\nu }}}_{i}^{T},{\widetilde{\mathbf{\Lambda }}}_{i}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{6}$ , one has
+
+$$
+{\dot{\mathcal{N}}}_{i1} = {\mathbf{A}}_{i1}{\mathcal{N}}_{i1} + {\mathbf{B}}_{i1}{\widetilde{\mathbf{\nu }}}_{is} + {\mathbf{C}}_{i1}{\dot{\mathbf{\Lambda }}}_{i} \tag{14}
+$$
+
+where
+
+$$
+\left\{ {{\mathbf{A}}_{i1} = \left\lbrack \begin{matrix} - {\varepsilon }_{i1}{\mathbf{I}}_{3} & {\mathbf{I}}_{3} \\  - {\varepsilon }_{i2}{\mathbf{V}}_{i} & {\mathbf{O}}_{3} \end{matrix}\right\rbrack ,\;{\mathbf{B}}_{i1} = \left\lbrack \begin{matrix} {\varepsilon }_{i1}{\mathbf{I}}_{3} \\ {\varepsilon }_{i2}{\mathbf{V}}_{i} \end{matrix}\right\rbrack ,\;{\mathbf{C}}_{i1} = \left\lbrack \begin{matrix} {\mathbf{O}}_{3} \\ {\mathbf{I}}_{3} \end{matrix}\right\rbrack .}\right.
+$$
+
+Note that the matrix ${\mathbf{A}}_{i1}$ is a Hurwitz matrix. There exists a positive-definite matrix ${\mathbf{P}}_{i1}$ satisfying the following inequality
+
+$$
+{\mathbf{A}}_{i1}^{T}{\mathbf{P}}_{i1} + {\mathbf{P}}_{i1}{\mathbf{A}}_{i1} \leq - {\jmath }_{i1}{\mathbf{I}}_{6}. \tag{15}
+$$
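The existence of such a $\mathbf{P}_{i1}$ can be checked numerically by solving the Lyapunov equation $\mathbf{A}_{i1}^{T}\mathbf{P}_{i1} + \mathbf{P}_{i1}\mathbf{A}_{i1} = -\mathbf{Q}$ for some $\mathbf{Q} > 0$. A sketch (illustrative gains $\varepsilon_{i1} = 2\mathbf{I}_3$, $\varepsilon_{i2} = 40\mathbf{I}_3$, with $\mathbf{V}_i$ frozen at $\mathbf{I}_3$):

```python
import numpy as np

eps1, eps2 = 2.0, 40.0
I3, O3 = np.eye(3), np.zeros((3, 3))

# A_{i1} from Eq. (14), with V_i frozen at the identity (illustrative)
A = np.block([[-eps1 * I3, I3],
              [-eps2 * I3, O3]])
assert np.all(np.linalg.eigvals(A).real < 0)  # A is Hurwitz

# Solve A^T P + P A = -Q via vectorization: with row-major flattening,
# (kron(A^T, I) + kron(I, A^T)) vec(P) = -vec(Q)
Q = np.eye(6)
K = np.kron(A.T, np.eye(6)) + np.kron(np.eye(6), A.T)
P = np.linalg.solve(K, -Q.reshape(-1)).reshape(6, 6)

print("min eigenvalue of P:", np.linalg.eigvalsh((P + P.T) / 2).min())
```

Since the eigenvalues of this $\mathbf{A}_{i1}$ have real part $-1$, the solution $\mathbf{P}$ is positive definite, as the test confirms.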
+
+Lemma 1: The system (14) is ISS.
+
+Proof: Consider a Lyapunov function candidate as follows
+
+$$
+{V}_{1} = \frac{1}{2}\mathop{\sum }\limits_{{i = 1}}^{N}{\mathcal{N}}_{i1}^{T}{\mathbf{P}}_{i1}{\mathcal{N}}_{i1}. \tag{16}
+$$
+
+The time derivative of ${V}_{1}$ along (14), using (15), satisfies
+
+$$
+{\dot{V}}_{1} \leq  - \frac{{\jmath }_{1}}{2}{\begin{Vmatrix}{\mathcal{N}}_{1}\end{Vmatrix}}^{2} + \begin{Vmatrix}{\mathcal{N}}_{1}\end{Vmatrix}\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{B}}_{1}}\end{Vmatrix}\begin{Vmatrix}{\widetilde{\mathbf{\nu }}}_{s}\end{Vmatrix} + \begin{Vmatrix}{\mathcal{N}}_{1}\end{Vmatrix}\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{C}}_{1}}\end{Vmatrix}\parallel \dot{\mathbf{\Lambda }}\parallel  \tag{17}
+$$
+
+where ${\jmath }_{1} = \mathop{\min }\limits_{{i = 1,\ldots ,N}}\left( {\jmath }_{i1}\right)$ , ${\mathcal{N}}_{1} = {\left\lbrack {\mathcal{N}}_{11}^{T},\ldots ,{\mathcal{N}}_{N1}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{6N}$ , ${\widetilde{\mathbf{\nu }}}_{s} = {\left\lbrack {\widetilde{\mathbf{\nu }}}_{1s}^{T},\ldots ,{\widetilde{\mathbf{\nu }}}_{Ns}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$ , $\dot{\mathbf{\Lambda }} = {\left\lbrack {\dot{\mathbf{\Lambda }}}_{1}^{T},\ldots ,{\dot{\mathbf{\Lambda }}}_{N}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$ , ${\mathbf{P}}_{1} = \operatorname{diag}\left\{ {{\mathbf{P}}_{11},\ldots ,{\mathbf{P}}_{N1}}\right\} \in {\mathbb{R}}^{{6N} \times {6N}}$ , ${\mathbf{B}}_{1} = \operatorname{diag}\left\{ {{\mathbf{B}}_{11},\ldots ,{\mathbf{B}}_{N1}}\right\} \in {\mathbb{R}}^{{6N} \times {3N}}$ , and ${\mathbf{C}}_{1} = \operatorname{diag}\left\{ {{\mathbf{C}}_{11},\ldots ,{\mathbf{C}}_{N1}}\right\} \in {\mathbb{R}}^{{6N} \times {3N}}$ . Since $\begin{Vmatrix}{\mathcal{N}}_{1}\end{Vmatrix} \geq 2\left( {\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{B}}_{1}}\end{Vmatrix}\begin{Vmatrix}{\widetilde{\mathbf{\nu }}}_{s}\end{Vmatrix} + \begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{C}}_{1}}\end{Vmatrix}\parallel \dot{\mathbf{\Lambda }}\parallel }\right) /{\jmath }_{1}{\sigma }_{1}$ implies ${\dot{V}}_{1} \leq  - {\jmath }_{1}\left( {1 - {\sigma }_{1}}\right) {\begin{Vmatrix}{\mathcal{N}}_{1}\end{Vmatrix}}^{2}/2$ with $0 < {\sigma }_{1} < 1$ , the subsystem (14) is ISS. Hence there exist a $\mathcal{K}\mathcal{L}$ function ${\mathcal{Y}}_{1}\left( \cdot \right)$ and ${\mathcal{K}}_{\infty }$ functions ${\mathcal{C}}^{{\widetilde{\mathbf{\nu }}}_{s}}\left( \cdot \right)$ and ${\mathcal{C}}^{\dot{\mathbf{\Lambda }}}\left( \cdot \right)$ satisfying $\begin{Vmatrix}{{\mathcal{N}}_{1}\left( t\right) }\end{Vmatrix} \leq {\mathcal{Y}}_{1}\left( {\begin{Vmatrix}{{\mathcal{N}}_{1}\left( 0\right) }\end{Vmatrix},t}\right) + {\mathcal{C}}^{{\widetilde{\mathbf{\nu }}}_{s}}\left( \begin{Vmatrix}{\widetilde{\mathbf{\nu }}}_{s}\end{Vmatrix}\right) + {\mathcal{C}}^{\dot{\mathbf{\Lambda }}}\left( {\parallel \dot{\mathbf{\Lambda }}\parallel }\right)$ , where ${\mathcal{C}}^{{\widetilde{\mathbf{\nu }}}_{s}}\left( s\right) = \left( {{2s}\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{B}}_{1}}\end{Vmatrix}\sqrt{{\lambda }_{\max }\left( {\mathbf{P}}_{1}\right) }}\right) /\left( {{\jmath }_{1}{\sigma }_{1}\sqrt{{\lambda }_{\min }\left( {\mathbf{P}}_{1}\right) }}\right)$ and ${\mathcal{C}}^{\dot{\mathbf{\Lambda }}}\left( s\right) = \left( {{2s}\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{C}}_{1}}\end{Vmatrix}\sqrt{{\lambda }_{\max }\left( {\mathbf{P}}_{1}\right) }}\right) /\left( {{\jmath }_{1}{\sigma }_{1}\sqrt{{\lambda }_{\min }\left( {\mathbf{P}}_{1}\right) }}\right)$ .
+
+§ B. DESIGN OF GUIDANCE LAW AND CONTROL LAW
+
+In this section, we design the DTGPG-based guidance law and the SESO-based control law. First, we design the guidance law. The time derivative of (6) is represented by
+
+$$
+{\dot{\mathcal{Z}}}_{ik} = {\mu }_{ik}{\mathcal{P}}_{ik}{\rho }_{ik}{\dot{E}}_{ik} + {\mu }_{ik}{\dot{\mathcal{P}}}_{ik}{\mathcal{H}}_{ik} \tag{18}
+$$
+
+where ${\mu }_{ik} = \left( {1 + {\mathcal{J}}_{ik}^{2}}\right) /{\left( 1 - {\mathcal{J}}_{ik}^{2}\right) }^{2}$ and ${\rho }_{ik} =$ ${l}_{ik}/\left( {\sqrt{{E}_{ik}^{2} + {l}_{ik}}\left( {{E}_{ik}^{2} + {l}_{ik}}\right) }\right)$ .
+
+Next, to simplify the design of the controller, we rewrite (18) in a vector form
+
+$$
+{\dot{\mathbf{Z}}}_{i} = {\mathbf{\mu }}_{i1}{\dot{\mathbf{E}}}_{i} + {\mathbf{\mu }}_{i2} \tag{19}
+$$
+
+where ${\mathcal{Z}}_{i} = {\left\lbrack {\mathcal{Z}}_{ix},{\mathcal{Z}}_{iy},{\mathcal{Z}}_{i\psi }\right\rbrack }^{T} \in {\mathbb{R}}^{3},{\mathbf{E}}_{i} = {\left\lbrack {E}_{ix},{E}_{iy},{E}_{i\psi }\right\rbrack }^{T} \in$ ${\mathbb{R}}^{3},{\mathbf{\mu }}_{i1} = \operatorname{diag}\left\{ {{\mu }_{ix}{\mathcal{P}}_{ix}{\rho }_{ix},{\mu }_{iy}{\mathcal{P}}_{iy}{\rho }_{iy},{\mu }_{i\psi }{\mathcal{P}}_{i\psi }{\rho }_{i\psi }}\right\} \in {\mathbb{R}}^{3 \times 3}$ , and ${\mathbf{\mu }}_{i2} = \operatorname{diag}\left\{ {{\mu }_{ix}{\dot{\mathcal{P}}}_{ix}{\mathcal{H}}_{ix},{\mu }_{iy}{\dot{\mathcal{P}}}_{iy}{\mathcal{H}}_{iy},{\mu }_{i\psi }{\dot{\mathcal{P}}}_{i\psi }{\mathcal{H}}_{i\psi }}\right\} \in {\mathbb{R}}^{3 \times 3}$ .
+
+Taking the time derivative of (2) along (1) yields
+
+$$
+{\dot{\mathbf{E}}}_{i} = {\iota }_{i}{\mathbf{R}}_{i}{\mathbf{\nu }}_{i} - \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\mathbf{R}}_{j}{\mathbf{\nu }}_{j} - \mathop{\sum }\limits_{{j = N + 1}}^{M}{a}_{ij}{\dot{\mathbf{\eta }}}_{jr} \tag{20}
+$$
+
+where ${\iota }_{i} = \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij} + \mathop{\sum }\limits_{{j = N + 1}}^{M}{a}_{ij}$ . Substituting (20) into (19) results in
+
+$$
+{\dot{\mathcal{Z}}}_{i} = {\mathbf{\mu }}_{i1}\left( {{\iota }_{i}{\mathbf{R}}_{i}{\mathbf{\nu }}_{i} - \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\mathbf{R}}_{j}{\mathbf{\nu }}_{j} - \mathop{\sum }\limits_{{j = N + 1}}^{M}{a}_{ij}{\dot{\mathbf{\eta }}}_{jr}}\right) + {\mathbf{\mu }}_{i2}. \tag{21}
+$$
+
+From (21), the DTGPG-based guidance law is chosen as
+
+$$
+{\mathbf{\alpha }}_{i} = \frac{1}{{\iota }_{i}}{\mathbf{R}}_{i}^{-1}\left( {\mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\mathbf{R}}_{j}{\widehat{\mathbf{\nu }}}_{j} + \mathop{\sum }\limits_{{j = N + 1}}^{M}{a}_{ij}{\dot{\mathbf{\eta }}}_{jr} - {\mathbf{\mu }}_{i1}^{-1}\left( {{\mathbf{\kappa }}_{i1}{\mathcal{Z}}_{i} + {\mathbf{\mu }}_{i2}}\right) }\right) . \tag{22}
+$$
+
+We substitute (22) into (21), and it follows that
+
+$$
+{\dot{\mathcal{Z}}}_{i} = {\mathbf{\mu }}_{i1}\mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\mathbf{R}}_{j}{\widetilde{\mathbf{\nu }}}_{j} - {\mathbf{\kappa }}_{i1}{\mathcal{Z}}_{i} \tag{23}
+$$
+
+with ${\kappa }_{i1} \in {\mathbb{R}}^{3 \times 3}$ being a positive diagonal matrix.
+
+Differing from the first-order low-pass filter used in traditional dynamic surface control (DSC), a second-order linear tracking differentiator (LTD) with respect to ${\mathbf{\alpha }}_{i}$ is introduced
+
+$$
+\left\{ \begin{array}{l} {\dot{\mathbf{\alpha }}}_{if} = {\mathbf{\alpha }}_{if}^{ * } \\ {\dot{\mathbf{\alpha }}}_{if}^{ * } = - {\gamma }_{i}^{2}\left( {\left( {{\mathbf{\alpha }}_{if} - {\mathbf{\alpha }}_{i}}\right) + 2\left( {{\mathbf{\alpha }}_{if}^{ * }/{\gamma }_{i}}\right) }\right) \end{array}\right. \tag{24}
+$$
+
+where ${\mathbf{\alpha }}_{if}^{ * } \in {\mathbb{R}}^{3}$ is the filtered estimate of ${\dot{\mathbf{\alpha }}}_{i}$ , and ${\gamma }_{i} \in {\mathbb{R}}^{ + }$ is the filter gain.
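The LTD (24) is a critically damped second-order filter with bandwidth $\gamma_i$: $\boldsymbol{\alpha}_{if}$ tracks $\boldsymbol{\alpha}_i$ and $\boldsymbol{\alpha}_{if}^{*}$ approximates $\dot{\boldsymbol{\alpha}}_i$ without explicit differentiation. A minimal scalar simulation (Euler integration, illustrative gain $\gamma = 20$, test signal $\alpha(t) = \sin t$):

```python
import math

def ltd(alpha, dt=1e-3, gamma=20.0, t_end=5.0):
    """Euler simulation of the second-order LTD in Eq. (24), scalar case."""
    af, afs = alpha(0.0), 0.0  # alpha_f and its derivative estimate alpha_f*
    t = 0.0
    for _ in range(int(t_end / dt)):
        daf = afs
        dafs = -gamma**2 * ((af - alpha(t)) + 2.0 * afs / gamma)
        af, afs = af + dt * daf, afs + dt * dafs
        t += dt
    return t, af, afs

t, af, afs = ltd(math.sin)
print(f"alpha_f  = {af:.4f} vs sin(t) = {math.sin(t):.4f}")
print(f"alpha_f* = {afs:.4f} vs cos(t) = {math.cos(t):.4f}")
```

After the transient, the derivative estimate follows $\cos t$ up to a small phase lag of order $2/\gamma$, which shrinks as the gain increases.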
+
+Second, we design the control law. Defining the velocity error ${\mathcal{Z}}_{ie} = {\mathbf{\nu }}_{i} - {\mathbf{\alpha }}_{i} \in {\mathbb{R}}^{3}$ , the dynamics of ${\mathcal{Z}}_{ie}$ along (7) satisfy
+
+$$
+{\dot{\mathbf{Z}}}_{ie} = {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} + {\mathbf{\Lambda }}_{i} - {\dot{\mathbf{\alpha }}}_{i}. \tag{25}
+$$
+
+Then, we design the SESO-based control law to stabilize (25)
+
+$$
+{\mathbf{\tau }}_{i} = \frac{{\mathbf{M}}_{i}{\mathbf{J}}_{i}}{{r}_{i}}\left( {{\mathbf{\alpha }}_{if}^{ * } - {\widehat{\mathbf{\Lambda }}}_{i} - {\mathbf{\kappa }}_{i2}{\mathbf{\mathcal{Z}}}_{ie}}\right) \tag{26}
+$$
+
+with ${\kappa }_{i2} \in {\mathbb{R}}^{3 \times 3}$ being a positive diagonal matrix.
+
+The dynamics of ${\mathcal{Z}}_{ie}$ is further obtained by substituting (26) into (25)
+
+$$
+{\dot{\mathcal{Z}}}_{ie} = {\widetilde{\alpha }}_{i}^{ * } - {\widetilde{\Lambda }}_{i} - {\kappa }_{i2}{\mathcal{Z}}_{ie} \tag{27}
+$$
+
+where ${\widetilde{\mathbf{\alpha }}}_{i}^{ * } = {\mathbf{\alpha }}_{if}^{ * } - {\dot{\mathbf{\alpha }}}_{i}$ .
+
+From (23) and (27), we can obtain the following subsystems
+
+$$
+\left\{ \begin{array}{l} {\dot{\mathcal{Z}}}_{i} = {\mathbf{\mu }}_{i1}\mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\mathbf{R}}_{j}{\widetilde{\mathbf{\nu }}}_{j} - {\mathbf{\kappa }}_{i1}{\mathcal{Z}}_{i} \\ {\dot{\mathcal{Z}}}_{ie} = {\widetilde{\mathbf{\alpha }}}_{i}^{ * } - {\widetilde{\mathbf{\Lambda }}}_{i} - {\mathbf{\kappa }}_{i2}{\mathcal{Z}}_{ie}. \end{array}\right. \tag{28}
+$$
+
+Lemma 2: The system (28) is ISS.
+
+Proof: Consider the Lyapunov function candidate ${V}_{2} = \left( {1/2}\right) \mathop{\sum }\limits_{{i = 1}}^{N}\left( {{\mathcal{Z}}_{i}^{T}{\mathcal{Z}}_{i} + {\mathcal{Z}}_{ie}^{T}{\mathcal{Z}}_{ie}}\right)$ . The time derivative of ${V}_{2}$ along (28) satisfies
+
+$$
+{\dot{V}}_{2} \leq - {n}_{1}\parallel \mathcal{Z}{\parallel }^{2} - {n}_{2}{\begin{Vmatrix}{\mathcal{Z}}_{e}\end{Vmatrix}}^{2} + {n}_{3}{n}^{ * }\parallel \mathcal{Z}\parallel \parallel \widetilde{\nu }\parallel \tag{29}
+$$
+
+$$
++ \begin{Vmatrix}{\mathbf{Z}}_{e}\end{Vmatrix}\begin{Vmatrix}{\widetilde{\mathbf{\alpha }}}^{ * }\end{Vmatrix} + \begin{Vmatrix}{\mathbf{Z}}_{e}\end{Vmatrix}\parallel \widetilde{\mathbf{\Lambda }}\parallel
+$$
+
+where ${n}_{1} = {\lambda }_{\min }\left( {\mathbf{\kappa }}_{1}\right)$ with ${\mathbf{\kappa }}_{1} = \operatorname{diag}\left\{ {{\mathbf{\kappa }}_{11},\ldots ,{\mathbf{\kappa }}_{N1}}\right\} \in {\mathbb{R}}^{{3N} \times {3N}}$ , ${n}_{2} = {\lambda }_{\min }\left( {\mathbf{\kappa }}_{2}\right)$ with ${\mathbf{\kappa }}_{2} = \operatorname{diag}\left\{ {{\mathbf{\kappa }}_{12},\ldots ,{\mathbf{\kappa }}_{N2}}\right\} \in {\mathbb{R}}^{{3N} \times {3N}}$ , ${n}_{3} = \mathop{\max }\limits_{{i = 1,\ldots ,N}}\left( {{\lambda }_{\max }\left( {\mathbf{\mu }}_{i1}\right) }\right)$ , and ${n}^{ * } = \mathop{\max }\limits_{{i = 1,\ldots ,N}}\left( {n}_{i}^{ * }\right)$ with ${n}_{i}^{ * } = \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ji}$ . The stacked vectors are $\mathcal{Z} = {\left\lbrack {\mathcal{Z}}_{1}^{T},\ldots ,{\mathcal{Z}}_{N}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$ , ${\mathcal{Z}}_{e} = {\left\lbrack {\mathcal{Z}}_{1e}^{T},\ldots ,{\mathcal{Z}}_{Ne}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$ , $\widetilde{\mathbf{\nu }} = {\left\lbrack {\widetilde{\mathbf{\nu }}}_{1}^{T},\ldots ,{\widetilde{\mathbf{\nu }}}_{N}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$ , ${\widetilde{\mathbf{\alpha }}}^{ * } = {\left\lbrack {\widetilde{\mathbf{\alpha }}}_{1}^{*T},\ldots ,{\widetilde{\mathbf{\alpha }}}_{N}^{*T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$ , and $\widetilde{\mathbf{\Lambda }} = {\left\lbrack {\widetilde{\mathbf{\Lambda }}}_{1}^{T},\ldots ,{\widetilde{\mathbf{\Lambda }}}_{N}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$ .
+
+Define $n = \min \left( {{n}_{1},{n}_{2}}\right)$ and ${\mathcal{N}}_{2} = {\left\lbrack \parallel \mathcal{Z}\parallel ,\begin{Vmatrix}{\mathcal{Z}}_{e}\end{Vmatrix}\right\rbrack }^{T} \in {\mathbb{R}}^{2}$ . Then, (29) can be rewritten as
+
+$$
+{\dot{V}}_{2} \leq - n{\begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix}}^{2} + {n}_{3}{n}^{ * }\begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix}\parallel \widetilde{\nu }\parallel \tag{30}
+$$
+
+$$
++ \begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix}\begin{Vmatrix}{\widetilde{\mathbf{\alpha }}}^{ * }\end{Vmatrix} + \begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix}\parallel \widetilde{\mathbf{\Lambda }}\parallel \text{ . }
+$$
+
+Since $\begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix} \geq 2\left( {{n}_{3}{n}^{ * }\parallel \widetilde{\mathbf{\nu }}\parallel + \begin{Vmatrix}{\widetilde{\mathbf{\alpha }}}^{ * }\end{Vmatrix} + \parallel \widetilde{\mathbf{\Lambda }}\parallel }\right) /n$ , one has ${\dot{V}}_{2} \leq$ $- n{\begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix}}^{2}/2$ . It follows that the subsystem (28) is ISS. There exists a $\mathcal{K}\mathcal{L}$ function ${\mathcal{Y}}_{2}\left( \cdot \right)$ and ${\mathcal{K}}_{\infty }$ function ${\mathcal{C}}^{\widetilde{\nu }}\left( \cdot \right) ,{\mathcal{C}}^{{\widetilde{\alpha }}^{ * }}\left( \cdot \right)$ , and ${\mathcal{C}}^{\widetilde{\mathbf{\Lambda }}}\left( \cdot \right)$ satisfying $\begin{Vmatrix}{{\mathcal{N}}_{2}\left( t\right) }\end{Vmatrix} \leq {\mathcal{Y}}_{2}\left( {\begin{Vmatrix}{{\mathcal{N}}_{2}\left( 0\right) }\end{Vmatrix},t}\right) + {\mathcal{C}}^{\widetilde{\mathbf{\nu }}}\left( {\parallel \widetilde{\mathbf{\nu }}\parallel }\right) +$ ${\mathcal{C}}^{{\widetilde{\mathbf{\alpha }}}^{ * }}\left( \begin{Vmatrix}{\widetilde{\mathbf{\alpha }}}^{ * }\end{Vmatrix}\right) + {\mathcal{C}}^{\widetilde{\mathbf{\Lambda }}}\left( {\parallel \widetilde{\mathbf{\Lambda }}\parallel }\right)$ , where ${\mathcal{C}}^{\widetilde{\mathbf{\nu }}}\left( s\right) = 2{n}_{3}{n}^{ * }s/n,{\mathcal{C}}^{{\widetilde{\mathbf{\alpha }}}^{ * }}\left( s\right) =$ ${2s}/n$ , and ${\mathcal{C}}^{\widetilde{\mathbf{\Lambda }}}\left( s\right) = {2s}/n$ .
+
+Fig. 1. Circular formation using the proposed method.
+
+Theorem 1: For the multi-WMRs (1) with given initial conditions, the closed-loop system consisting of the SESO (10), the DTGPG-based guidance law (22), and the SESO-based control law (26) is ISS. Moreover, Zeno behavior is avoided.
+
+Proof: The ISS properties of subsystems (14) and (28) are proven in Lemma 1 and Lemma 2, respectively. The state of subsystem (14), $\widetilde{\mathbf{\nu }}$ , together with $\widetilde{\mathbf{\Lambda }}$ , serves as the input of subsystem (28). Under Assumptions 1-2 and by the cascade stability theorem, the closed-loop system is ISS, which yields the ultimate boundedness of $\begin{Vmatrix}{{\mathcal{N}}_{2}\left( t\right) }\end{Vmatrix}$ as $t \rightarrow \infty$ :
+
+$$
+{\begin{Vmatrix}{\mathcal{N}}_{2}\left( t\right) \end{Vmatrix}}_{t \rightarrow \infty } \leq \frac{2\begin{Vmatrix}{\widetilde{\mathbf{\alpha }}}^{ * }\end{Vmatrix}}{n} + {\mathcal{H}}^{ * }\left( {\begin{Vmatrix}{\widetilde{\mathbf{\nu }}}_{s}\end{Vmatrix}\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{B}}_{1}}\end{Vmatrix} + \parallel \dot{\mathbf{\Lambda }}\parallel \begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{C}}_{1}}\end{Vmatrix}}\right) \tag{31}
+$$
+
+with ${\mathcal{H}}^{ * } = \left( {4\left( {{n}_{3}{n}^{ * } + 1}\right) \sqrt{{\lambda }_{\max }\left( {\mathbf{P}}_{1}\right) }}\right) /\left( {n{\jmath }_{1}{\sigma }_{1}\sqrt{{\lambda }_{\min }\left( {\mathbf{P}}_{1}\right) }}\right)$ . The detailed proof that Zeno behavior is excluded follows [5]. The proof of Theorem 1 is complete.
+
+§ IV. SIMULATION RESULTS
+
+To verify the effectiveness of the proposed controller, we consider the communication topology shown in Fig. 1, consisting of three followers ${n}_{1},{n}_{2}$ , and ${n}_{3}$ and two virtual leaders ${n}_{4}$ and ${n}_{5}$ . The physical parameters of the WMR are taken from [10], and the external disturbance is the same as in [16]. The initial values of the three followers are chosen as ${\mathbf{\eta }}_{1}\left( 0\right) = {\left\lbrack 0,0,3\pi /2\right\rbrack }^{T},{\mathbf{\eta }}_{2}\left( 0\right) = {\left\lbrack 2, - {10},\pi /2\right\rbrack }^{T}$ , and ${\mathbf{\eta }}_{3}\left( 0\right) = {\left\lbrack 2, - {17},4\pi /3\right\rbrack }^{T}$ . The trajectories of the two virtual leaders are chosen as
+
+$$
+\left\{ \begin{array}{l} {\mathbf{\eta }}_{4r} = {\left\lbrack -5\sin \left( {0.2}t\right) , - 5\cos \left( {0.2}t\right) ,\operatorname{atan}2\left( {\dot{\eta }}_{4y},{\dot{\eta }}_{4x}\right) \right\rbrack }^{T} \\ {\mathbf{\eta }}_{5r} = {\left\lbrack -{15}\sin \left( {0.2}t\right) , - {15}\cos \left( {0.2}t\right) ,\operatorname{atan}2\left( {\dot{\eta }}_{5y},{\dot{\eta }}_{5x}\right) \right\rbrack }^{T}. \end{array}\right.
+$$
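+As a cross-check, the two leader references can be computed directly from these expressions; a minimal sketch (the helper name is ours, the 0.2 rad/s rate and radii 5 and 15 come from the formulas above):
+
+```python
+import numpy as np
+
+def leader_reference(t, radius):
+    """Pose [x, y, psi] of a virtual leader on a circle of the given
+    radius at angular rate 0.2 rad/s (leaders n4 and n5 use radius 5
+    and 15, respectively)."""
+    x = -radius * np.sin(0.2 * t)
+    y = -radius * np.cos(0.2 * t)
+    # heading from the velocity direction, matching the atan2 terms above
+    x_dot = -0.2 * radius * np.cos(0.2 * t)
+    y_dot = 0.2 * radius * np.sin(0.2 * t)
+    psi = np.arctan2(y_dot, x_dot)
+    return np.array([x, y, psi])
+
+eta4 = leader_reference(0.0, 5.0)    # leader n4 at t = 0
+eta5 = leader_reference(0.0, 15.0)   # leader n5 at t = 0
+```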
+
+The main design parameters are set as ${\kappa }_{11} = \operatorname{diag}\{ {12},7,{10}\}$ , ${\kappa }_{21} = \operatorname{diag}\{ 7,7,{10}\}$ , ${\kappa }_{31} = \operatorname{diag}\{ {12},9,{10}\}$ , ${\kappa }_{i2} = \operatorname{diag}\{ {20},{20},{20}\}$ , ${\varepsilon }_{i1} = \operatorname{diag}\{ 2,2,2\}$ , ${\varepsilon }_{i2} = \operatorname{diag}\{ {40},{40},{40}\}$ , ${T}_{{1x},a} = {T}_{{1\psi },a} = {T}_{{2x},a} = {T}_{{2\psi },a} = {T}_{{3x},a} = {T}_{{3\psi },a} = {0.5}$ , ${T}_{{1y},a} = {T}_{{2y},a} = {T}_{{3y},a} = 1$ , ${T}_{{1x},b} = {T}_{{2x},b} = {T}_{{3x},b} = {0.7}$ , ${T}_{{1y},b} = {T}_{{2y},b} = {T}_{{3y},b} = {1.2}$ , ${T}_{{1\psi },b} = {T}_{{2\psi },b} = {T}_{{3\psi },b} = {1.5}$ , ${\omega }_{ik} = {0.7}$ , ${\Theta }_{{ik},\infty } = {0.9}$ , ${\varrho }_{ik} = 2$ , ${l}_{ik} = {10}$ , ${\mathcal{X}}_{1} = {\mathcal{X}}_{2} = {\mathcal{X}}_{3} = {0.06}$ .
+
+Fig. 2. Tracking errors using the DTGPG.
+
+Fig. 3. The estimated disturbances using the SESO.
+
+Fig. 4. The number of triggering events.
+
+Simulation results are depicted in Figs. 1-4. Fig. 1 shows the three vehicles forming a circular formation guided by the two virtual leaders. Fig. 2 shows that, with the proposed DTGPG control scheme, the tracking profile is not constrained by the initial values and the performance boundaries are adjusted dynamically. Fig. 3 shows that the SESO not only estimates the internal uncertainties and external disturbances but also reduces chattering. Fig. 4 shows the number of triggering events: ${\nu }_{1}^{ \star },{\nu }_{2}^{ \star }$ , and ${\nu }_{3}^{ \star }$ are triggered 179, 213, and 211 times, respectively, compared with 2800 updates under time triggering, which effectively saves resources.
+
+§ V. CONCLUSION
+
+In this paper, the dynamic threshold global prescribed performance formation control problem was investigated for WMRs in the presence of unknown total disturbances. A dynamic threshold global performance-guaranteed formation control method based on the SESO was proposed, with three advantages: 1) it can adjust the steady-state performance boundary twice; 2) it removes the initial-value constraints present in standard PPC; and 3) it mitigates the chattering problem in event-triggered ESO. The cascade system consisting of the SESO, the DTGPG-based guidance law, and the SESO-based control law was proved to be ISS. The main results were demonstrated by simulation examples.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/8haaEllsjL/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/8haaEllsjL/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..59bbb2af327abe9bb47d313b8c317e5a6509f26a
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/8haaEllsjL/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,653 @@
+# Event-Triggered Optimal Tracking Control for Uncertain Nonlinear System Based on Reinforcement Learning
+
+Yuanhao Wang
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+wangyuanhao2024@163.com
+
+Weiwei Bai
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+baiweiwei_dl@163.com
+
+Abstract-In this paper, an event-triggered optimal tracking control problem is studied for uncertain nonlinear systems based on reinforcement learning (RL). Firstly, a class of nonlinear dynamic systems with general uncertainty is considered, and an augmented system comprising the tracking error and the reference signal is constructed. Secondly, an improved adaptive dynamic programming (ADP) technique, involving an actor-critic algorithm and fuzzy logic systems, is developed to solve the Hamilton-Jacobi-Bellman (HJB) equation for the nominal augmented system. Thirdly, in order to reduce the mechanical wear of the actuator and the energy consumption, an event-triggered mechanism is adopted for updating the controller. Finally, a stability analysis proves via Lyapunov theory that all signals in the closed-loop system are uniformly ultimately bounded (UUB). Simulation results verify the feasibility of the proposed scheme.
+
+Index Terms-ADP, event-triggered, reinforcement learning, nonlinear, fuzzy logic systems, tracking control.
+
+## I. INTRODUCTION
+
+Reinforcement learning (RL) is an effective technique for facilitating adaptive optimization strategies [1], [2]. Generally, the optimization is implemented by minimizing or maximizing a cost function, which requires solving the Hamilton-Jacobi-Bellman (HJB) equation [3]. However, acquiring an analytic solution of the HJB equation directly is challenging for nonlinear dynamic systems [4]. Therefore, many researchers have proposed numerical solutions of the HJB equation [5]. Adaptive dynamic programming (ADP), an advanced numerical solution method, has been widely applied to achieve optimal tracking control of nonlinear systems.
+
+In contrast to traditional dynamic programming, ADP designs the optimal controller forward in time, which effectively avoids the "curse of dimensionality" [6], [7]. In addition, an improved ADP framework consisting of an actor-critic algorithm and fuzzy logic systems is constructed. Many scholars have devoted themselves to developing ADP techniques [8]-[10]. In [11], the ADP method was implemented to solve a new neuro-optimal control problem for nonlinear dynamic systems by employing one critic and two actor networks. In [12], a neural-network-based ADP method was developed to solve the optimal tracking control problem for a class of nonlinear systems with unmatched uncertainties. In [13], a linear singularly perturbed system was studied by employing the ADP framework to achieve optimal control. These works concentrated on the application and development of ADP and RL, but they did not consider the mechanical wear of the actuator or the energy consumption. As a result, it is necessary to adopt an event-triggered mechanism in the control design to reduce mechanical wear and save energy in engineering practice [14].
+
+The key to an event-triggered control algorithm is its triggering threshold [14]. When a signal exceeds the triggering threshold, the control policy is updated [15], [16]. In this paper, an event-triggered optimal tracking control scheme for uncertain nonlinear systems based on RL is developed. There are two main contributions:
+
+(1) An improved ADP and RL algorithm involving an actor-critic structure and fuzzy logic systems is developed, which yields the optimal control strategy and effectively balances the tracking performance against the control cost.
+
+(2) An event-triggered mechanism is adopted in the controller design so that unnecessary control updates are avoided, reducing mechanical wear and saving energy in engineering practice.
+
+The rest of this paper is organized as follows. The system dynamics and fuzzy logic systems are stated in Section II. The optimal controller and the event-triggered controller are designed in Sections III and IV, respectively. The stability analysis, simulation, and conclusion are presented in Sections V, VI, and VII, respectively.
+
+## II. Problem formulation and preliminaries
+
+## A. System dynamic description
+
+Consider a class of continuous-time nonlinear dynamic systems which can be described by
+
+$$
+\dot{x}\left( t\right) = f\left( {x\left( t\right) }\right) + g\left( {x\left( t\right) }\right) u\left( t\right) + \mathcal{D}\left( {x\left( t\right) }\right) \tag{1}
+$$
+
+where $x\left( t\right) \in {\mathbb{R}}^{n}$ is the state variable, $u\left( t\right) \in {\mathbb{R}}^{m}$ is the control input, $f\left( {x\left( t\right) }\right) \in {\mathbb{R}}^{n}$ and $g\left( {x\left( t\right) }\right) \in {\mathbb{R}}^{n \times m}$ are an unknown smooth function and an unknown smooth function matrix, respectively, and $\mathcal{D}\left( {x\left( t\right) }\right)$ is an unknown disturbance satisfying $\parallel \mathcal{D}\left( {x\left( t\right) }\right) \parallel \leq {\lambda }_{\mathcal{D}}$ for a positive parameter ${\lambda }_{\mathcal{D}}$ .
+
+To achieve tracking control, a reference signal is given by
+
+$$
+\dot{r}\left( t\right) = \delta \left( {r\left( t\right) }\right) \tag{2}
+$$
+
+where $r\left( t\right) \in {\mathbb{R}}^{n}$ is a bounded desired trajectory and $\delta \left( {r\left( t\right) }\right)$ is a Lipschitz continuous function. Let the tracking error be
+
+$$
+e\left( t\right) = x\left( t\right) - r\left( t\right) \tag{3}
+$$
+
+Combining equations (1), (2), and (3) yields the following tracking-error dynamics
+
+$$
+\dot{e}\left( t\right) = f\left( {x\left( t\right) }\right) + g\left( {x\left( t\right) }\right) u\left( t\right) + \mathcal{D}\left( {x\left( t\right) }\right) - \delta \left( {r\left( t\right) }\right) \tag{4}
+$$
+
+Noting that $x\left( t\right) = e\left( t\right) + r\left( t\right)$ , equation (4) can be rewritten as
+
+$$
+\dot{e}\left( t\right) = f\left( {e\left( t\right) + r\left( t\right) }\right) - \delta \left( {r\left( t\right) }\right) + g\left( {e\left( t\right) + r\left( t\right) }\right) u\left( t\right) \tag{5}
+$$
+
+$$
++ \mathcal{D}\left( {e\left( t\right) + r\left( t\right) }\right)
+$$
+
+For ease of description, define $\xi \left( t\right) = {\left\lbrack {e}^{\mathrm{T}}\left( t\right) ,{r}^{\mathrm{T}}\left( t\right) \right\rbrack }^{\mathrm{T}} \in {\mathbb{R}}^{2n}$ ; then dynamic systems (2) and (5) can be augmented into the concise form
+
+$$
+\dot{\xi }\left( t\right) = F\left( {\xi \left( t\right) }\right) + G\left( {\xi \left( t\right) }\right) u\left( t\right) + \Delta \mathbb{D}\left( {\xi \left( t\right) }\right) \tag{6}
+$$
+
+where $F\left( {\xi \left( t\right) }\right)$ and $G\left( {\xi \left( t\right) }\right)$ are new matrices and $\Delta \mathbb{D}\left( {\xi \left( t\right) }\right)$ is a new uncertain term. In particular, $F\left( {\xi \left( t\right) }\right) = \left\lbrack \begin{matrix} f\left( {e\left( t\right) + r\left( t\right) }\right) - \delta \left( {r\left( t\right) }\right) \\ \delta \left( {r\left( t\right) }\right) \end{matrix}\right\rbrack$ , $G\left( {\xi \left( t\right) }\right) = \left\lbrack \begin{matrix} g\left( {e\left( t\right) + r\left( t\right) }\right) \\ {0}_{n \times m} \end{matrix}\right\rbrack$ , and $\Delta \mathbb{D}\left( {\xi \left( t\right) }\right) = \left\lbrack \begin{matrix} \mathcal{D}\left( {e\left( t\right) + r\left( t\right) }\right) \\ {0}_{n \times 1} \end{matrix}\right\rbrack$ .
+
+Undoubtedly, the new uncertain term $\Delta \mathbb{D}\left( {\xi \left( t\right) }\right)$ is still upper bounded since
+
+$$
+\parallel \Delta \mathbb{D}\left( {\xi \left( t\right) }\right) \parallel = \parallel \mathcal{D}\left( {e\left( t\right) + r\left( t\right) }\right) \parallel = \parallel \mathcal{D}\left( {x\left( t\right) }\right) \parallel \leq {\lambda }_{\mathcal{D}} \tag{7}
+$$
+
+To accomplish the tracking control of dynamic system (1) toward the reference signal (2), a feedback controller $u\left( \xi \right)$ will be constructed such that the closed-loop system is asymptotically stable despite the uncertain bounded term $\Delta \mathbb{D}\left( {\xi \left( t\right) }\right)$ . Therefore, the optimal control policy can be obtained by considering an appropriate cost function for the subsequent nominal system, in the same manner as [5].
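+To make the construction of the augmented system (6) concrete, a minimal sketch follows (the function name and the scalar demo are ours, not from the paper):
+
+```python
+import numpy as np
+
+def augmented_dynamics(xi, u, f, g, delta, d=None):
+    """Right-hand side of the augmented system (6), assembled from the
+    tracking-error and reference dynamics (2) and (5).
+
+    xi    : [e; r] in R^{2n}
+    u     : control input in R^m
+    f, g  : drift f(x) and input matrix g(x) of system (1)
+    delta : reference dynamics delta(r)
+    d     : optional disturbance D(x); omit for the nominal system (10)
+    """
+    n = xi.size // 2
+    e, r = xi[:n], xi[n:]
+    x = e + r                      # recover the original state
+    F = np.concatenate([f(x) - delta(r), delta(r)])
+    G = np.vstack([g(x), np.zeros((n, g(x).shape[1]))])
+    dD = np.concatenate([d(x), np.zeros(n)]) if d is not None else np.zeros(2 * n)
+    return F + G @ u + dD
+```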
+
+## B. Fuzzy logic systems
+
+For a nonlinear continuous function $P\left( x\right)$ defined over a compact set $\mathbb{U}$ and any constant $\varepsilon > 0$ , there exists a fuzzy logic system ${\omega }^{\mathrm{T}}\varphi \left( x\right)$ such that [17]
+
+$$
+\mathop{\sup }\limits_{{x \in \mathbb{U}}}\left| {P\left( x\right) - {\omega }^{\mathrm{T}}\varphi \left( x\right) }\right| \leq \varepsilon \tag{8}
+$$
+
+where $x = {\left\lbrack {x}_{1},\ldots ,{x}_{j}\right\rbrack }^{\mathrm{T}}$ is the input vector of the fuzzy logic system, $\omega = {\left\lbrack {\omega }_{1},{\omega }_{2},\ldots ,{\omega }_{L}\right\rbrack }^{\mathrm{T}} \in {\mathbb{R}}^{L}$ is the degree-of-membership vector with $L > 1$ the number of fuzzy rules, and $\varepsilon$ is the minimum fuzzy approximation error. $\varphi \left( x\right) = {\left\lbrack {\varphi }_{1}\left( x\right) ,{\varphi }_{2}\left( x\right) ,\ldots ,{\varphi }_{L}\left( x\right) \right\rbrack }^{\mathrm{T}}$ is the fuzzy basis function vector, where ${\varphi }_{l}\left( x\right)$ is selected as follows:
+
+$$
+{\varphi }_{l}\left( x\right) = \frac{\mathop{\prod }\limits_{{i = 1}}^{j}{\mu }_{{F}_{i}^{l}}\left( {x}_{i}\right) }{\mathop{\sum }\limits_{{p = 1}}^{N}\left( {\mathop{\prod }\limits_{{i = 1}}^{j}{\mu }_{{F}_{i}^{p}}\left( {x}_{i}\right) }\right) },\;\left( {l = 1,\ldots , L}\right) \tag{9}
+$$
+
+where ${F}_{i}^{l}\left( {i = 1,\ldots , j;l = 1,\ldots , N}\right)$ is the fuzzy set and ${\mu }_{{F}_{i}^{l}}\left( {x}_{i}\right)$ is the membership function.
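+A minimal sketch of the normalized basis functions in (9), assuming Gaussian membership functions (the paper does not fix ${\mu }_{{F}_{i}^{l}}$ ; the function name, centers, and width here are illustrative):
+
+```python
+import numpy as np
+
+def fuzzy_basis(x, centers, width):
+    """Normalized fuzzy basis functions of (9) with Gaussian memberships
+    mu(x_i) = exp(-(x_i - c)^2 / width^2).
+
+    x       : input vector, shape (j,)
+    centers : rule centers, shape (L, j) for L fuzzy rules
+    """
+    # firing strength of each rule: product of memberships over the j inputs
+    mu = np.exp(-((x[None, :] - centers) ** 2) / width**2)   # (L, j)
+    w = np.prod(mu, axis=1)                                  # (L,)
+    return w / np.sum(w)                                     # normalization
+
+phi = fuzzy_basis(np.array([0.0, 0.5]),
+                  centers=np.array([[-1.0, -1.0], [0.0, 0.0], [1.0, 1.0]]),
+                  width=1.0)
+```
+
+By construction the basis functions are positive and sum to one, so the rule nearest the input dominates.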
+
+## III. OPTIMAL CONTROL DESIGN
+
+In this section, ADP comprising an actor-critic algorithm and fuzzy logic systems will be employed to design the value function ${L}^{ * }\left( \xi \right)$ and the control policy ${u}^{ * }\left( \xi \right)$ , as well as the degree-of-membership update laws.
+
+In the actor-critic framework, the value function and the control policy are approximated by the critic and actor fuzzy systems, respectively. The optimal cost function (13) and the feedback controller (15) play the roles of the value function and the control policy for the optimal tracking control problem.
+
+Consider the nominal part of the augmented system (6), that is,
+
+$$
+\dot{\xi }\left( t\right) = F\left( {\xi \left( t\right) }\right) + G\left( {\xi \left( t\right) }\right) u\left( t\right) \tag{10}
+$$
+
+For the nominal system (10), the following cost function is considered
+
+$$
+L\left( \xi \right) = {\int }_{t}^{\infty }\left\lbrack {Q\left( {\xi \left( \tau \right) }\right) + u{\left( \tau \right) }^{\mathrm{T}}{Ru}\left( \tau \right) }\right\rbrack {d\tau } \tag{11}
+$$
+
+where $Q\left( \xi \right) = {\xi }^{\mathrm{T}}\mathcal{Q}\xi$ and $R = {R}^{\mathrm{T}}$ ; $\mathcal{Q}$ and $R$ are positive definite matrices.
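+As a small sanity check, the integrand of the cost functional (11) can be evaluated as follows (the function name and demo values are ours):
+
+```python
+import numpy as np
+
+def running_cost(xi, u, Q_mat, R):
+    """Integrand of (11): Q(xi) + u^T R u, with Q(xi) = xi^T Q_mat xi
+    and Q_mat, R positive definite."""
+    return float(xi @ Q_mat @ xi + u @ R @ u)
+
+# demo: xi^T Q xi = 1 and u^T R u = 4, so the running cost is 5
+c = running_cost(np.array([1.0, 0.0]), np.array([2.0]),
+                 np.eye(2), np.array([[1.0]]))
+```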
+
+Subsequently, one can define the Hamiltonian of the optimal problem
+
+$$
+H\left( {\xi , u\left( \xi \right) }\right) = Q\left( \xi \right) + u{\left( \xi \right) }^{\mathrm{T}}{Ru}\left( \xi \right) \tag{12}
+$$
+
+$$
++ {\nabla }^{\mathrm{T}}L\left( \xi \right) \left\lbrack {F\left( \xi \right) + G\left( \xi \right) u\left( \xi \right) }\right\rbrack
+$$
+
+where $\nabla L\left( \xi \right)$ denotes the gradient of $L\left( \xi \right)$ with respect to $\xi$ .
+
+Generally, only by finding the optimal cost function can we derive the optimal controller. The optimal cost function is defined as the minimum of the cost function (11), i.e.,
+
+$$
+{L}^{ * }\left( \xi \right) = \mathop{\min }\limits_{u}{\int }_{t}^{\infty }\left\lbrack {Q\left( {\xi \left( \tau \right) }\right) + u{\left( \tau \right) }^{\mathrm{T}}{Ru}\left( \tau \right) }\right\rbrack {d\tau } \tag{13}
+$$
+
+The optimal cost function is the solution of the HJB equation which satisfies
+
+$$
+H\left( {\xi ,{u}^{ * }\left( \xi \right) ,{L}^{ * }\left( \xi \right) }\right) = Q\left( \xi \right) + {u}^{ * }{\left( \xi \right) }^{\mathrm{T}}R{u}^{ * }\left( \xi \right)
+$$
+
+$$
++ {\nabla }^{\mathrm{T}}{L}^{ * }\left( \xi \right) \left\lbrack {F\left( \xi \right) + G\left( \xi \right) {u}^{ * }\left( \xi \right) }\right\rbrack = 0
+$$
+
+(14)
+
+Consequently, the optimal feedback controller is yielded
+
+$$
+{u}^{ * }\left( \xi \right) = - \frac{1}{2}{R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) \nabla {L}^{ * }\left( \xi \right) \tag{15}
+$$
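+For concreteness, (15) can be evaluated directly once $G\left( \xi \right)$ , $R$ , and $\nabla {L}^{ * }\left( \xi \right)$ are available; a sketch (the function name and demo values are ours):
+
+```python
+import numpy as np
+
+def optimal_feedback(G, R, grad_L):
+    """Optimal controller (15): u* = -(1/2) R^{-1} G^T(xi) grad L*(xi).
+
+    G      : input matrix G(xi), shape (2n, m)
+    R      : control weight, shape (m, m)
+    grad_L : gradient of the optimal cost, shape (2n,)
+    """
+    # solve R y = G^T grad_L instead of forming R^{-1} explicitly
+    return -0.5 * np.linalg.solve(R, G.T @ grad_L)
+
+u = optimal_feedback(np.array([[1.0], [0.0]]), np.eye(1), np.array([2.0, 3.0]))
+```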
+
+One needs to solve the HJB equation (14) to obtain the optimal controller (15) for the nominal system (10). However, the solution of the HJB equation (14) is difficult to obtain directly. Therefore, fuzzy logic systems and an adaptive actor-critic structure will be utilized to find an approximate solution.
+
+Fuzzy logic systems are employed to reconstruct the value function ${L}^{ * }\left( \xi \right)$
+
+$$
+{L}^{ * }\left( \xi \right) = {\omega }^{\mathrm{T}}\varphi \left( \xi \right) + \varepsilon \left( \xi \right) \tag{16}
+$$
+
+where $\omega$ is the degree of membership of fuzzy logic systems, $\varphi \left( \xi \right)$ is the fuzzy basis function and $\varepsilon \left( \xi \right)$ is the unknown fuzzy approximate error.
+
+Considering (15) and (16) yields the optimal controller described by fuzzy logic systems as
+
+$$
+{u}^{ * }\left( \xi \right) = - \frac{1}{2}{R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) \left\lbrack {{\nabla }^{\mathrm{T}}\varphi \left( \xi \right) \omega + \nabla \varepsilon \left( \xi \right) }\right\rbrack \tag{17}
+$$
+
+For clarity of analysis, define the positive semi-definite matrix
+
+$$
+A\left( \xi \right) = \nabla \varphi \left( \xi \right) G\left( \xi \right) {R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) {\nabla }^{\mathrm{T}}\varphi \left( \xi \right) \tag{18}
+$$
+
+Combining (16), (17), and (18), one can derive the HJB equation reconstructed by the fuzzy logic systems as
+
+$$
+H\left( {\xi ,{u}^{ * }\left( \xi \right) ,{L}^{ * }\left( \xi \right) }\right) = Q\left( \xi \right) + {\omega }^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right)
+$$
+
+$$
+- \frac{1}{4}{\omega }^{\mathrm{T}}A\left( \xi \right) \omega + {\varepsilon }_{HJB} = 0 \tag{19}
+$$
+
+and the residual error ${\varepsilon }_{HJB}$ is expressed as
+
+$$
+{\varepsilon }_{HJB} = {\nabla }^{\mathrm{T}}\varepsilon \left( \xi \right) \left( {F\left( \xi \right) + G\left( \xi \right) {u}^{ * }\left( \xi \right) }\right)
+$$
+
+$$
++ \frac{1}{4}{\nabla }^{\mathrm{T}}\varepsilon \left( \xi \right) G\left( \xi \right) {R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) \nabla \varepsilon \left( \xi \right) \tag{20}
+$$
+
+$$
++ \frac{1}{2}{\nabla }^{\mathrm{T}}\varepsilon \left( \xi \right) G\left( \xi \right) {R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) {\nabla }^{\mathrm{T}}\varphi \left( \xi \right) \omega
+$$
+
+The estimates of the value function ${L}^{ * }\left( \xi \right)$ and the control policy ${u}^{ * }\left( \xi \right)$ are constructed by the critic and actor fuzzy logic systems, respectively:
+
+$$
+{\widehat{L}}^{ * }\left( \xi \right) = {\widehat{\omega }}_{c}^{\mathrm{T}}\varphi \left( \xi \right) \tag{21}
+$$
+
+$$
+{\widehat{u}}^{ * }\left( \xi \right) = - \frac{1}{2}{R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) {\nabla }^{\mathrm{T}}\varphi \left( \xi \right) {\widehat{\omega }}_{a} \tag{22}
+$$
+
+where ${\widehat{\omega }}_{a}$ is the actor estimated degree of membership and ${\widehat{\omega }}_{c}$ is the critic estimated degree of membership.
+
+Noticing (21) and (22), one can derive the following estimated Hamiltonian
+
+$$
+\widehat{H}\left( {\xi ,{\widehat{u}}^{ * }\left( \xi \right) ,{\widehat{L}}^{ * }\left( \xi \right) }\right) = Q\left( \xi \right) + \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}
+$$
+
+$$
++ {\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}
+$$
+
+(23)
+
+To obtain the degree-of-membership update laws of the fuzzy logic systems, define the objective function ${E}_{c} = \frac{1}{2}{e}_{c}^{\mathrm{T}}{e}_{c}$ , where ${e}_{c} = \widehat{H}\left( {\xi ,{\widehat{u}}^{ * }\left( \xi \right) ,{\widehat{L}}^{ * }\left( \xi \right) }\right) - H\left( {\xi ,{u}^{ * }\left( \xi \right) ,{L}^{ * }\left( \xi \right) }\right)$ is the Bellman error. To overcome the difficulty of searching for the controller and adaptive laws, the following assumption is made, and an additional term is constructed to improve the learning process.
+
+Assumption 1: [5] Let ${L}_{s}\left( \xi \right)$ be a continuously differentiable Lyapunov function candidate satisfying
+
+$$
+{\dot{L}}_{s}\left( \xi \right) = {\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \left( {F\left( \xi \right) + G\left( \xi \right) {u}^{ * }\left( \xi \right) }\right) < 0 \tag{24}
+$$
+
+and then, there exists a positive matrix $\mathfrak{K} \in {\mathbb{R}}^{{2n} \times {2n}}$ ensuring that
+
+$$
+{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \left( {F\left( \xi \right) + G\left( \xi \right) {u}^{ * }\left( \xi \right) }\right) = - {\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \mathfrak{K}\nabla {L}_{s}\left( \xi \right) \tag{25}
+$$
+
+$$
+\leq - {\lambda }_{\min }\left( \mathfrak{K}\right) {\begin{Vmatrix}\nabla {L}_{s}\left( \xi \right) \end{Vmatrix}}^{2}
+$$
+
+Based on gradient descent, the degree-of-membership update laws of the fuzzy logic systems are designed by considering the two Hamiltonians $H\left( {\xi ,{u}^{ * }\left( \xi \right) ,{L}^{ * }\left( \xi \right) }\right)$ and $\widehat{H}\left( {\xi ,{\widehat{u}}^{ * }\left( \xi \right) ,{\widehat{L}}^{ * }\left( \xi \right) }\right)$ ; one has
+
+$$
+{\dot{\widehat{\omega }}}_{a} = - {\alpha }_{a}\left( {\frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{a} - \frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{c}}\right)
+$$
+
+$$
+\times \left( {Q\left( \xi \right) + \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a} + {\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) }\right.
+$$
+
+$$
+\left. {-\frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) + \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right)
+$$
+
+(26)
+
+$$
+{\dot{\widehat{\omega }}}_{c} = - {\alpha }_{c}\left( {\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{a}}\right)
+$$
+
+$$
+\times \left( {Q\left( \xi \right) + \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a} + {\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) }\right.
+$$
+
+$$
+\left. {-\frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) + \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right)
+$$
+
+(27)
+
+where ${\alpha }_{a}$ and ${\alpha }_{c}$ are the basis learning parameters of actor and critic systems, respectively, and ${\alpha }_{s}$ is the adjustable parameter for the additional term.
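+A sketch of one evaluation of the update laws (26)-(27) with NumPy, assuming $A\left( \xi \right)$ , $\nabla \varphi \left( \xi \right) F\left( \xi \right)$ , $Q\left( \xi \right)$ , and the stabilizing term are precomputed (the function name and demo values are ours):
+
+```python
+import numpy as np
+
+def adp_update(w_a, w_c, A, gradphi_F, Q, stab, alpha_a, alpha_c, alpha_s):
+    """Evaluate the continuous-time update laws (26)-(27) once.
+
+    A          : matrix A(xi) from (18), shape (L, L)
+    gradphi_F  : grad phi(xi) @ F(xi), shape (L,)
+    Q          : running cost Q(xi), scalar
+    stab       : stabilizing term (1/2) grad phi G R^{-1} G^T grad L_s, shape (L,)
+    """
+    # common factor: the estimated Hamiltonian (23), i.e. the Bellman residual
+    e_c = Q + 0.25 * w_a @ A @ w_a + w_c @ gradphi_F - 0.5 * w_c @ A @ w_a
+    w_a_dot = -alpha_a * (0.5 * A @ w_a - 0.5 * A @ w_c) * e_c + alpha_s * stab
+    w_c_dot = -alpha_c * (gradphi_F - 0.5 * A @ w_a) * e_c + alpha_s * stab
+    return w_a_dot, w_c_dot
+
+wa_dot, wc_dot = adp_update(np.zeros(2), np.zeros(2), np.eye(2),
+                            np.array([1.0, 0.0]), 1.0, np.zeros(2),
+                            alpha_a=1.0, alpha_c=1.0, alpha_s=1.0)
+```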
+
+## IV. EVENT-TRIGGERED CONTROL IMPLEMENTATION
+
+The event triggering mechanism is defined as
+
+$$
+{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) = {u}^{ * }\left( {\xi \left( {t}_{d}\right) }\right) ,\forall t \in \left\lbrack {{t}_{d},{t}_{d + 1}}\right) \tag{28}
+$$
+
+$$
+{t}_{d + 1} = \inf \left\{ {t \in \mathbb{R} : \left| {\Gamma \left( t\right) }\right| \geq \Delta \left| {{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) }\right| + M}\right\} ,\;{t}_{1} = 0 \tag{29}
+$$
+
+where the event-triggered error is $\Gamma \left( t\right) = {u}^{ * }\left( {\xi \left( t\right) }\right) - {u}_{e}^{ * }\left( {\xi \left( t\right) }\right)$ , the controller update instants are ${t}_{d}, d \in {Z}^{ + }$ , and the design parameters satisfy $0 < \Delta < 1$ and $M > 0$ .
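+A minimal scalar sketch of the trigger test in (29) (the function name and demo values are ours; for vector-valued controls a norm would replace the absolute value, and the trigger error is taken as the gap between the current optimal control and the held control, consistent with (30)):
+
+```python
+def should_trigger(u_star, u_held, Delta, M):
+    """Trigger condition (29) for a scalar input: fire when the trigger
+    error |Gamma| = |u* - u_held| reaches the dynamic threshold
+    Delta*|u_held| + M, with 0 < Delta < 1 and M > 0."""
+    return abs(u_star - u_held) >= Delta * abs(u_held) + M
+
+fired = should_trigger(1.0, 0.5, Delta=0.5, M=0.1)   # |0.5| >= 0.35
+held = should_trigger(0.6, 0.5, Delta=0.5, M=0.1)    # |0.1| <  0.35
+```
+
+The constant $M$ keeps the threshold strictly positive near the origin, which is what rules out Zeno behavior.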
+
+When no event is triggered, the control policy is held at ${u}^{ * }\left( {\xi \left( {t}_{d}\right) }\right)$ ; otherwise, the control policy is updated to ${u}_{e}^{ * }\left( {\xi \left( {t}_{d + 1}\right) }\right)$ . Introduce two continuous time-varying parameters ${\rho }_{1}\left( t\right)$ and ${\rho }_{2}\left( t\right)$ with $\left| {{\rho }_{1}\left( t\right) }\right| \leq 1$ and $\left| {{\rho }_{2}\left( t\right) }\right| \leq 1$ such that ${u}^{ * }\left( {\xi \left( t\right) }\right) = \left( {1 + {\rho }_{1}\left( t\right) \Delta }\right) {u}_{e}^{ * }\left( {\xi \left( t\right) }\right) + {\rho }_{2}\left( t\right) M$ . Then, the event-triggered controller can be rewritten as
+
+$$
+{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) = \frac{{u}^{ * }\left( {\xi \left( t\right) }\right) - {\rho }_{2}\left( t\right) M}{1 + {\rho }_{1}\left( t\right) \Delta } \tag{30}
+$$
+
+Combining (17) and (30) yields
+
+$$
+{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) = - \frac{1}{2\rho }{R}^{-1}\left\lbrack {{G}^{\mathrm{T}}\left( {\xi \left( t\right) }\right) {\nabla }^{\mathrm{T}}\varphi \left( {\xi \left( t\right) }\right) \omega + {\varepsilon }_{e}\left( {\xi \left( t\right) }\right) }\right\rbrack
+$$
+
+(31)
+
+where $\rho = 1 + {\rho }_{1}\left( t\right) \Delta$ and ${\varepsilon }_{e}\left( {\xi \left( t\right) }\right) = \nabla \varepsilon \left( {\xi \left( t\right) }\right) + 2{\rho }_{2}\left( t\right) {RM}$ .
+
+Similarly, based on the actor fuzzy logic system, the estimated event-triggered controller is obtained as
+
+$$
+{\widehat{u}}_{e}^{ * }\left( {\xi \left( t\right) }\right) = - \frac{1}{2\rho }{R}^{-1}{G}^{\mathrm{T}}\left( {\xi \left( t\right) }\right) {\nabla }^{\mathrm{T}}\varphi \left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a} \tag{32}
+$$
+
+Considering the HJB equation (14), the value function (21) and the event-triggered controller (32), the Hamiltonian function can be obtained as
+
+$$
+\begin{aligned}
+{\widehat{H}}_{e}\left( {\xi \left( t\right) ,{\widehat{u}}_{e}^{ * }\left( {\xi \left( t\right) }\right) ,{\widehat{L}}^{ * }\left( {\xi \left( t\right) }\right) }\right) = {}& Q\left( {\xi \left( t\right) }\right) + \frac{1}{4{\rho }^{2}}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a} \\
+& + {\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( {\xi \left( t\right) }\right) F\left( {\xi \left( t\right) }\right) - \frac{1}{2\rho }{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}
+\end{aligned} \tag{33}
+$$
+
+Subsequently, the degree-of-membership update laws under the event-triggered mechanism can be constructed as
+
+$$
+\begin{aligned}
+{\dot{\widehat{\omega }}}_{ae} = {}& - {\alpha }_{a}\left( {\frac{1}{2{\rho }^{2}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a} - \frac{1}{2\rho }A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{c}}\right) \left( {Q\left( {\xi \left( t\right) }\right) + \frac{1}{4{\rho }^{2}}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}}\right. \\
+& \left. {+\,{\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( {\xi \left( t\right) }\right) F\left( {\xi \left( t\right) }\right) - \frac{1}{2\rho }{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}}\right) + \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( {\xi \left( t\right) }\right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( {\xi \left( t\right) }\right)
+\end{aligned} \tag{34}
+$$
+
+$$
+\begin{aligned}
+{\dot{\widehat{\omega }}}_{ce} = {}& - {\alpha }_{c}\left( {\nabla \varphi \left( {\xi \left( t\right) }\right) F\left( {\xi \left( t\right) }\right) - \frac{1}{2\rho }A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}}\right) \left( {Q\left( {\xi \left( t\right) }\right) + \frac{1}{4{\rho }^{2}}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}}\right. \\
+& \left. {+\,{\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( {\xi \left( t\right) }\right) F\left( {\xi \left( t\right) }\right) - \frac{1}{2\rho }{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}}\right) + \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( {\xi \left( t\right) }\right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( {\xi \left( t\right) }\right)
+\end{aligned} \tag{35}
+$$
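The update laws (34) and (35) share the same approximate-Hamiltonian residual, cf. (33). As a plain-code reading of that structure, the sketch below evaluates the residual and the two gradient directions; the stabilizing $\nabla L_s$ term is omitted, and the array shapes, function name and argument layout are assumptions for illustration only.

```python
import numpy as np

def actor_critic_event_updates(w_a, w_c, A, grad_phi_F, Q, rho, alpha_a, alpha_c):
    """Right-hand sides of the event-triggered update laws (34)-(35),
    without the stabilizing L_s term.

    w_a, w_c   : actor / critic degree-of-membership vectors, shape (L,)
    A          : the matrix A(xi), shape (L, L)
    grad_phi_F : the vector grad(phi(xi)) F(xi), shape (L,)
    Q, rho     : Q(xi) and rho = 1 + rho_1(t) * Delta, scalars
    """
    # Shared approximate-Hamiltonian residual, cf. (33)
    e_H = (Q + w_a @ A @ w_a / (4 * rho**2)
             + w_c @ grad_phi_F
             - w_c @ A @ w_a / (2 * rho))
    # Gradient-direction factors from (34) and (35), scaled by the residual
    w_a_dot = -alpha_a * (A @ w_a / (2 * rho**2) - A @ w_c / (2 * rho)) * e_H
    w_c_dot = -alpha_c * (grad_phi_F - A @ w_a / (2 * rho)) * e_H
    return w_a_dot, w_c_dot
```

Both laws descend along directions proportional to the residual, so they stall exactly when the approximate Hamiltonian is driven to zero.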
+
+Theorem 1: Consider the dynamic system (1) with the optimal feedback controller (22), the event-triggered controller (32) and the degree-of-membership update laws (26), (27), (34) and (35). Then, based on Lyapunov theory, all signals in the closed-loop system are uniformly ultimately bounded (UUB).
+
+To investigate the stability of the error dynamics and the closed-loop states, the following assumption is made.
+
+Assumption 2: On a compact set $\Omega$ , $G\left( \xi \right) ,\nabla \varphi \left( \xi \right) ,\nabla \varepsilon \left( \xi \right)$ , ${\xi }^{ * }$ and ${\varepsilon }_{HJB}$ are bounded, i.e., $\parallel G\left( \xi \right) \parallel \leq {\lambda }_{g},\parallel \nabla \varphi \left( \xi \right) \parallel \leq {\lambda }_{\varphi }$ , $\parallel \nabla \varepsilon \left( \xi \right) \parallel \leq {\lambda }_{\varepsilon },\begin{Vmatrix}{\xi }^{ * }\end{Vmatrix} \leq {\lambda }_{\xi }$ and $\begin{Vmatrix}{\varepsilon }_{HJB}\end{Vmatrix} \leq {\lambda }_{HJB}$ , where ${\lambda }_{g},{\lambda }_{\varphi },{\lambda }_{\varepsilon },{\lambda }_{\xi }$ and ${\lambda }_{HJB}$ are positive constants.
+
+## V. STABILITY ANALYSIS
+
+In this section, Lyapunov theory is employed to prove Theorem 1.
+
+Case 1: Events are not triggered. Consider the feedback controller (22) and the related degree-of-membership update laws (26) and (27). The HJB equation (19) can be transformed into
+
+$$
+Q\left( \xi \right) = - {\omega }^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) + \frac{1}{4}{\omega }^{\mathrm{T}}A\left( \xi \right) \omega - {\varepsilon }_{HJB} \tag{36}
+$$
+
+Considering the degree-of-membership update laws (26) and (27), and noting that ${\dot{\widetilde{\omega }}}_{a} = - {\dot{\widehat{\omega }}}_{a}$ and ${\dot{\widetilde{\omega }}}_{c} = - {\dot{\widehat{\omega }}}_{c}$ , one has
+
+$$
+\begin{aligned}
+{\dot{\widetilde{\omega }}}_{a} = {}& - {\alpha }_{a}\left( {-\frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{a} + \frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{c}}\right) \left( {Q\left( \xi \right) + \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a} + {\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \\
+& - \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right)
+\end{aligned} \tag{37}
+$$
+
+$$
+\begin{aligned}
+{\dot{\widetilde{\omega }}}_{c} = {}& - {\alpha }_{c}\left( {-\nabla \varphi \left( \xi \right) F\left( \xi \right) + \frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \left( {Q\left( \xi \right) + \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a} + {\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \\
+& - \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right)
+\end{aligned} \tag{38}
+$$
+
+Then the following Lyapunov function can be chosen as
+
+$$
+S\left( t\right) = \frac{1}{2{\alpha }_{a}}{\widetilde{\omega }}_{a}^{\mathrm{T}}{\widetilde{\omega }}_{a} + \frac{1}{2{\alpha }_{c}}{\widetilde{\omega }}_{c}^{\mathrm{T}}{\widetilde{\omega }}_{c} + \frac{{\alpha }_{s}}{{\alpha }_{a}}{L}_{s}\left( \xi \right) + \frac{{\alpha }_{s}}{{\alpha }_{c}}{L}_{s}\left( \xi \right) \tag{39}
+$$
+
+its derivative is
+
+$$
+\begin{aligned}
+\dot{S}\left( t\right) = {}& \frac{1}{{\alpha }_{a}}{\widetilde{\omega }}_{a}^{\mathrm{T}}{\dot{\widetilde{\omega }}}_{a} + \frac{1}{{\alpha }_{c}}{\widetilde{\omega }}_{c}^{\mathrm{T}}{\dot{\widetilde{\omega }}}_{c} + \frac{{\alpha }_{s}}{{\alpha }_{a}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi } + \frac{{\alpha }_{s}}{{\alpha }_{c}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi } \\
+= {}& \left( {{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{4}{\omega }^{\mathrm{T}}A\left( \xi \right) \omega - \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a} + {\varepsilon }_{HJB} + \frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \\
+& \times \left( {-{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) + \frac{1}{2}{\widetilde{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{c} + \frac{1}{2}{\widetilde{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a} - \frac{1}{2}{\widetilde{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \\
+& - \frac{{\alpha }_{s}}{2{\alpha }_{a}}{\widetilde{\omega }}_{a}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right) - \frac{{\alpha }_{s}}{2{\alpha }_{c}}{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right) \\
+& + \frac{{\alpha }_{s}}{{\alpha }_{a}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi } + \frac{{\alpha }_{s}}{{\alpha }_{c}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi }
+\end{aligned} \tag{40}
+$$
+
+Substituting (22) into (10) and observing the dynamic system ${\dot{\xi }}^{ * } = F\left( \xi \right) + G\left( \xi \right) {u}^{ * }\left( \xi \right)$ with the optimal controller ${u}^{ * }\left( \xi \right)$ , one obtains
+
+$$
+\nabla \varphi \left( \xi \right) F\left( \xi \right) = \nabla \varphi \left( \xi \right) \dot{\xi } + \frac{1}{2}\nabla \varphi \left( \xi \right) {R}^{-1}{\nabla }^{\mathrm{T}}\varphi \left( \xi \right) {\widehat{\omega }}_{a} \tag{41}
+$$
+
+$$
+\dot{\xi } = {\dot{\xi }}^{ * } + \frac{1}{2}G{R}^{-1}{G}^{\mathrm{T}}\left( {{\nabla }^{\mathrm{T}}\varphi \left( \xi \right) {\widetilde{\omega }}_{a} + \nabla \varepsilon \left( \xi \right) }\right) \tag{42}
+$$
+
+Combining the above formulations, one can further derive that
+
+$$
+\begin{aligned}
+\dot{S}\left( t\right) = {}& \left( {{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) {\dot{\xi }}^{ * } + \frac{1}{2}{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla \varepsilon \left( \xi \right) + \frac{1}{2}{\widetilde{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widetilde{\omega }}_{a} - \frac{1}{2}{\widetilde{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) \omega + \frac{1}{4}{\widetilde{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widetilde{\omega }}_{a} + {\varepsilon }_{HJB}}\right) \\
+& \times \left( {-{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) {\dot{\xi }}^{ * } - \frac{1}{2}{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla \varepsilon \left( \xi \right) - {\widetilde{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widetilde{\omega }}_{a} - \frac{1}{2}{\widetilde{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widetilde{\omega }}_{a}}\right) \\
+& - \frac{{\alpha }_{s}}{2{\alpha }_{a}}{\widetilde{\omega }}_{a}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right) - \frac{{\alpha }_{s}}{2{\alpha }_{c}}{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right) \\
+& + \frac{{\alpha }_{s}}{{\alpha }_{a}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi } + \frac{{\alpha }_{s}}{{\alpha }_{c}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi }
+\end{aligned} \tag{43}
+$$
+
+Next, equation (43) can be expanded based on Assumption 2, which yields
+
+$$
+\begin{aligned}
+\dot{S}\left( t\right) \leq {}& - {\lambda }_{1}{\left( \begin{Vmatrix}{\widetilde{\omega }}_{a}\end{Vmatrix}\right) }^{4} - {\lambda }_{2}{\left( \begin{Vmatrix}{\widetilde{\omega }}_{c}\end{Vmatrix}\right) }^{2} + {\lambda }_{3} \\
+& + \frac{{\alpha }_{s}}{2{\alpha }_{a}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla \varepsilon \left( \xi \right) + \frac{{\alpha }_{s}}{{\alpha }_{a}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \left( {F\left( \xi \right) + G{u}^{ * }\left( \xi \right) }\right) \\
+& + \frac{{\alpha }_{s}}{2{\alpha }_{c}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla \varepsilon \left( \xi \right) + \frac{{\alpha }_{s}}{{\alpha }_{c}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \left( {F\left( \xi \right) + G{u}^{ * }\left( \xi \right) }\right)
+\end{aligned} \tag{44}
+$$
+
+where ${\lambda }_{1},{\lambda }_{2}$ and ${\lambda }_{3}$ are positive constants.
+
+Considering Assumption 1 and equation (44), one can further derive that
+
+$$
+\begin{aligned}
+\dot{S}\left( t\right) \leq {}& - {\lambda }_{1}{\left( \begin{Vmatrix}{\widetilde{\omega }}_{a}\end{Vmatrix}\right) }^{4} - {\lambda }_{2}{\left( \begin{Vmatrix}{\widetilde{\omega }}_{c}\end{Vmatrix}\right) }^{2} + {\lambda }_{\partial } \\
+& - {\lambda }_{\min }\left( \mathfrak{K}\right) {\alpha }_{s}\left( {\frac{1}{{\alpha }_{a}} + \frac{1}{{\alpha }_{c}}}\right) {\left( \begin{Vmatrix}{\nabla {L}_{s}\left( \xi \right) }\end{Vmatrix} - \frac{{\lambda }_{g}^{2}{\lambda }_{\varepsilon }^{2}{\left( \begin{Vmatrix}{R}^{-1}\end{Vmatrix}\right) }^{2}}{4{\lambda }_{\min }\left( \mathfrak{K}\right) }\right) }^{2}
+\end{aligned} \tag{45}
+$$
+
+where ${\lambda }_{\partial } = {\lambda }_{3} + \frac{{\lambda }_{g}^{4}{\lambda }_{\varepsilon }^{4}{\left( \begin{Vmatrix}{R}^{-1}\end{Vmatrix}\right) }^{4}}{16{\lambda }_{\min }\left( \mathfrak{K}\right) }$ .
+
+As a result, if $\begin{Vmatrix}{\widetilde{\omega }}_{a}\end{Vmatrix} \geq \sqrt[4]{\frac{{\lambda }_{\partial }}{{\lambda }_{1}}}$ , $\begin{Vmatrix}{\widetilde{\omega }}_{c}\end{Vmatrix} \geq \sqrt{\frac{{\lambda }_{\partial }}{{\lambda }_{2}}}$ or $\begin{Vmatrix}{\nabla {L}_{s}\left( \xi \right) }\end{Vmatrix} \geq \sqrt{\frac{{\lambda }_{\partial }}{{\lambda }_{\min }\left( \mathfrak{K}\right) {\alpha }_{s}\left( {\frac{1}{{\alpha }_{a}} + \frac{1}{{\alpha }_{c}}}\right) }} + \frac{{\lambda }_{g}^{2}{\lambda }_{\varepsilon }^{2}{\left( \begin{Vmatrix}{R}^{-1}\end{Vmatrix}\right) }^{2}}{4{\lambda }_{\min }\left( \mathfrak{K}\right) }$ holds, then $\dot{S}\left( t\right) \leq 0$ is satisfied. Finally, one can conclude that all signals are UUB.
+
+Case 2: Events are triggered. Consider the event-triggered controller (32) and the degree-of-membership update laws (34) and (35).
+
+Choose the following Lyapunov function:
+
+$$
+{S}_{e}\left( t\right) = \frac{1}{2{\alpha }_{a}}{\widetilde{\omega }}_{ae}^{\mathrm{T}}{\widetilde{\omega }}_{ae} + \frac{1}{2{\alpha }_{c}}{\widetilde{\omega }}_{ce}^{\mathrm{T}}{\widetilde{\omega }}_{ce} + \frac{{\alpha }_{s}}{{\alpha }_{a}}{L}_{s}\left( \xi \right) + \frac{{\alpha }_{s}}{{\alpha }_{c}}{L}_{s}\left( \xi \right) \tag{46}
+$$
+
+By the same proof as in Case 1, all signals can be shown to be UUB.
+
+Motivated by [14], the derivative of the event-triggered error can be written as
+
+$$
+\frac{d}{dt}\left| {\Gamma \left( t\right) }\right| = \frac{d}{dt}{\left( \Gamma \left( t\right) \times \Gamma \left( t\right) \right) }^{\frac{1}{2}} = \operatorname{sgn}\left( {\Gamma \left( t\right) }\right) \dot{\Gamma }\left( t\right) \leq \left| {{\dot{u}}^{ * }\left( {\xi \left( t\right) }\right) }\right| \tag{47}
+$$
+
+Because all signals are UUB, there certainly exists a positive parameter $\kappa$ satisfying
+
+$$
+\left| {{\dot{u}}^{ * }\left( {\xi \left( t\right) }\right) }\right| \leq \kappa \tag{48}
+$$
+
+According to the event-triggered mechanism (28) and (29), one can derive that $\Gamma \left( {t}_{d}\right) = 0$ and $\mathop{\lim }\limits_{{t \rightarrow {t}_{d + 1}}}\Gamma \left( {t}_{d + 1}\right) = \Delta \left| {{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) }\right| + M$ . Combining equations (47) and (48) and performing some mathematical operations, the minimal inter-execution time ${t}^{ * } = {t}_{d + 1} - {t}_{d}$ satisfies ${t}^{ * } \geq \frac{\Delta \left| {{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) }\right| + M}{\kappa },\forall t \in \left\lbrack {{t}_{d},{t}_{d + 1}}\right)$ . Consequently, it is guaranteed that Zeno behavior does not occur.
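The "mathematical operations" behind the inter-execution bound amount to a one-line integration; a short reconstruction, using the trigger threshold $\Delta \left| {u}_{e}^{ * }\right| + M$ stated in the limit above:

$$
\left| {\Gamma \left( t\right) }\right| = \left| {{\int }_{{t}_{d}}^{t}\dot{\Gamma }\left( s\right) \mathrm{d}s}\right| \leq \kappa \left( {t - {t}_{d}}\right)
$$

so the threshold cannot be reached before $\kappa \left( {{t}_{d + 1} - {t}_{d}}\right) \geq \Delta \left| {{u}_{e}^{ * }}\right| + M$ , which gives a strictly positive lower bound on the time between consecutive triggers.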
+
+## VI. Simulation
+
+In this section, the ship YUKUN of Dalian Maritime University is used to verify the validity and flexibility of the event-triggered optimal control strategy. The parameters of YUKUN are as follows: length between perpendiculars ${105}\mathrm{\;m}$ , beam ${18}\mathrm{\;m}$ , rudder area ${11.46}\mathrm{\;}{\mathrm{m}}^{2}$ , loaded speed ${16.7}\mathrm{\;kn}$ , full amidships draft ${5.2}\mathrm{\;m}$ , full-load displacement ${5735.5}\mathrm{\;}{\mathrm{m}}^{3}$ , block coefficient 0.5595. The maritime environment is set as follows: wind direction ${\psi }_{\text{wind }} = {30}^{ \circ }$ , wind scale $\mathcal{S} = 6$ , current direction ${\psi }_{\text{current }} = {30}^{ \circ }$ , current velocity ${v}_{\text{current }} = 5\mathrm{\;kn}$ .
+
+Therefore, a continuous-time ship dynamic system can be considered
+
+$$
+\left\{ \begin{array}{l} {\dot{x}}_{1} = {x}_{2} \\ {\dot{x}}_{2} = - \frac{1}{T}\left( {{\alpha }_{s}{x}_{2} + {\beta }_{s}{x}_{2}{}^{3}}\right) + \frac{K}{T}\left( {u + {\delta }_{w}}\right) \\ y = {x}_{1} \end{array}\right. \tag{49}
+$$
+
+where ${x}_{1}$ and ${x}_{2} \in \mathbb{R}$ are state variables and $u \in \mathbb{R}$ is the control input. The reference signal is ${x}_{1d} = \sin \left( {{\pi t}/{25}}\right)$ ; the rudder gain is $K = {0.314}$ and the time constant is $T = {62.387}$ ; the model parameters are ${\alpha }_{s} = {100}$ and ${\beta }_{s} = {50}$ . The design parameters are chosen as ${\alpha }_{a} = {0.001},{\alpha }_{c} = 1,{\alpha }_{s} = {100000}, R = {0.067},\Delta = {0.39}, M = {0.001}$ . The initial state is set to ${x}_{0} = {\left\lbrack -{0.3},{2.1},{0.1},{0.03}\right\rbrack }^{\mathrm{T}}$ , and the initial degrees of membership are set to ${\omega }_{a0} = {\left\lbrack -{3.4}, - 4, - {3.5}, - {1.8}, - 2,0, - {1.4}, - {0.8}, - {1.8}, - 2\right\rbrack }^{\mathrm{T}}$ and ${\omega }_{c0} = {\left\lbrack 1,{1.3},{1.5},{1.3},0,0,{1.5},3,{3.3},3\right\rbrack }^{\mathrm{T}}$ .
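Before learning enters the picture, the plant (49) itself is easy to reproduce. The sketch below integrates the course dynamics with forward Euler under a simple PD course-keeping law standing in for the learned controller; the step size, horizon, PD gains and zero rudder disturbance $\delta_w$ are all assumptions, not values from the paper.

```python
import math

# Nomoto-type ship course dynamics (49):
#   x1' = x2,  x2' = -(a_s*x2 + b_s*x2**3)/T + (K/T)*(u + delta_w)
K, T = 0.314, 62.387          # rudder gain and time constant from the text
a_s, b_s = 100.0, 50.0        # nonlinearity coefficients alpha_s, beta_s

def simulate(controller, x1=0.0, x2=0.0, dt=0.01, t_end=50.0, delta_w=0.0):
    """Integrate (49) with forward Euler under a feedback controller(t, x1, x2)."""
    t, traj = 0.0, []
    while t < t_end:
        u = controller(t, x1, x2)
        x1_dot = x2
        x2_dot = -(a_s * x2 + b_s * x2**3) / T + (K / T) * (u + delta_w)
        x1, x2 = x1 + dt * x1_dot, x2 + dt * x2_dot
        t += dt
        traj.append((t, x1, x2))
    return traj

# Illustrative PD course-keeping law tracking x1d = sin(pi*t/25);
# the gains kp, kd are picked for a stable, well-damped response.
def pd_controller(t, x1, x2, kp=500.0, kd=800.0):
    x1d = math.sin(math.pi * t / 25)
    x1d_dot = (math.pi / 25) * math.cos(math.pi * t / 25)
    return kp * (x1d - x1) + kd * (x1d_dot - x2)
```

Swapping `pd_controller` for the learned event-triggered policy reproduces the setting of Figs. 1-3.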
+
+Simulation results are illustrated in Figs. 1-4. The tracking trajectory and error are shown in Fig. 1: the ship course rapidly tracks the reference course within 10 seconds, and the tracking error converges to a bounded compact set around zero under the designed event-triggered adaptive optimal controller. Fig. 2 compares the general control input with the event-triggered control input, illustrating that the event-triggered controller outperforms the general controller under the same conditions. The magnitudes of the event-triggered control input are smaller than those of the general controller, which verifies the effectiveness of the event-triggered mechanism in reducing mechanical wear and saving energy. Fig. 3 shows the corresponding inter-event times, highlighting the cost savings of the event-triggered controller. Finally, Fig. 4 shows the convergence of the value-function and policy-function degrees of membership, demonstrating that these signals rapidly converge to a bounded range.
+
+
+
+Fig. 1. Trajectories of the course tracking error, actual course and reference course.
+
+
+
+Fig. 2. Trajectories of control input and event-triggered control input.
+
+## VII. CONCLUSION
+
+In this article, an event-triggered optimal tracking control scheme has been proposed for uncertain nonlinear systems based on RL. An improved ADP technique combining an actor-critic algorithm with fuzzy logic systems has been implemented to solve the HJB equation of the nominal system. To reduce the mechanical wear of the actuator and save energy, an event-triggered mechanism has been adopted to update the controller. All signals have been shown to be UUB via a Lyapunov-based demonstration, and simulations verify the feasibility of the proposed scheme. In the future, we will study tracking control based on deep reinforcement learning; multi-agent systems are also an interesting direction.
+
+
+
+Fig. 3. Inter-event times of ${u}_{e}$ .
+
+
+
+Fig. 4. Convergence situations of policy function degree of memberships ${\widehat{\omega }}_{a}$ and value function degree of memberships ${\widehat{\omega }}_{c}$ .
+
+## ACKNOWLEDGMENT
+
+This work was supported in part by the Central Guidance on Local Science and Technology Development Fund of Liaoning Province (Grant No. 2023JH6/100100055); in part by the National Natural Science Foundation of China (Grant Nos. 52271360); in part by the Dalian Outstanding Young Scientific and Technological Talents Project (Grant No. 2023RY031); in part by the Basic Scientific Research Project of Liaoning Education Department (Grant No. JYTMS20230164); and in part by the Fundamental Research Funds for the Central Universities (Grant No. 3132024125).
+
+## REFERENCES
+
+[1] D. Wang, N. Gao, D. Liu, J. Li, and F. L. Lewis, "Recent progress in reinforcement learning and adaptive dynamic programming for advanced control applications," IEEE/CAA Journal of Automatica Sinica, vol. 11, no. 1, pp. 18-36, Jan. 2024.
+
+[2] W. Bai, "Introduction to discrete-time reinforcement learning control in complex engineering systems," Complex Engineering Systems, vol. 4, no. 2, pp. 8, Apr. 2024.
+
+[3] W. Gao, M. Mynuddin, D. Wunsch, and Z. Jiang, "Reinforcement learning-based cooperative optimal output regulation via distributed adaptive internal model," IEEE Transactions on Neural Networks and Learning Systems, vol. 33, no. 10, pp. 5229-5240, Oct. 2022.
+
+[4] D. M. Le, M. L. Greene, W. A. Makumi, and W. E. Dixon, "Real-time modular deep neural network-based adaptive control of nonlinear systems," IEEE Control Systems Letters, vol. 6, pp. 476-481, 2022.
+
+[5] D. Wang, and C. Mu, "Adaptive-critic-based robust trajectory tracking of uncertain dynamics and its application to a spring-mass-damper system," IEEE Transactions on Industrial Electronics, vol. 65, no. 1, pp. 654-663, Jan. 2018.
+
+[6] D. Wang, J. Qiao, and L. Cheng, "An approximate neuro-optimal solution of discounted guaranteed cost control design," IEEE Transactions on Cybernetics, vol. 52, no. 1, pp. 77-86, Jan. 2022
+
+[7] K. G. Vamvoudakis, and F. L. Lewis, "Online actor-critic algorithm to solve the continuous-time infinite horizon optimal control problem," Automatica, vol. 46, no. 5, pp. 878-888, May. 2010.
+
+[8] X. Li, J. Ren, and D. Wang, "Multi-step policy evaluation for adaptive-critic-based tracking control towards nonlinear systems," Complex Engineering Systems, vol. 3, no. 4, pp. 20, Nov. 2023.
+
+[9] J. Li, G. Zhang, Q. Shan, and W. Zhang, "A novel cooperative design for USV-UAV systems: 3D mapping guidance and adaptive fuzzy control," IEEE Transactions on Control of Network Systems, vol. 10, no. 2, pp. 564-574, Jun. 2023.
+
+[10] H. Yue, and J. Xia, "Reinforcement learning-based optimal adaptive fuzzy control for nonlinear multi-agent systems with prescribed performance," Complex Engineering Systems, vol. 3, no. 4, pp. 19, Nov. 2023.
+
+[11] Q. Wei, R. Song, and P. Yan, "Data-driven zero-sum neuro-optimal control for a class of continuous-time unknown nonlinear systems with disturbance using ADP," IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 2, pp. 444-458, Feb. 2016.
+
+[12] C. Mu, Y. Zhang, Z. Gao and C. Sun, "ADP-Based Robust Tracking Control for a Class of Nonlinear Systems With Unmatched Uncertainties," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 50, no. 11, pp. 4056-4067, Nov. 2020.
+
+[13] J. Zhao, C. Yang, W. Gao, and J. H. Park, "ADP-based optimal control of linear singularly perturbed systems with uncertain dynamics: A two-stage value iteration method," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 70, no. 12, pp. 4399-4403, Dec. 2023.
+
+[14] X. Yang, H. He, and D. Liu, "Event-triggered optimal neuro-controller design with reinforcement learning for unknown nonlinear systems," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 49, no. 9, pp. 1866-1878, Sept. 2019.
+
+[15] Y. Zhang, Sun, J. Zhang, H. Liang, and H. Li, "Event-triggered adaptive tracking control for multiagent systems with unknown disturbances," IEEE Transactions on Cybernetics, vol. 50, no. 3, pp. 890-901, Mar. 2020.
+
+[16] Q. Zhang, D. Zhao, and Y. Zhu, "Event-triggered ${H}_{\infty }$ control for continuous-time nonlinear system via concurrent learning," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 47, no. 7, pp. 1071-1081, Jul. 2017.
+
+[17] Z. Liu, F. Wang, Y. Zhang, X. Chen, and C. L. P. Chen, "Adaptive tracking control for a class of nonlinear systems with a fuzzy dead-zone input," IEEE Transactions on Fuzzy Systems, vol. 23, no. 1, pp. 193-204, Feb. 2015.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/8haaEllsjL/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/8haaEllsjL/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..7c6d439236b3184d8baaf134f8098bb1d555a2f9
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/8haaEllsjL/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,615 @@
+§ EVENT-TRIGGERED OPTIMAL TRACKING CONTROL FOR UNCERTAIN NONLINEAR SYSTEM BASED ON REINFORCEMENT LEARNING
+
+Yuanhao Wang
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+wangyuanhao2024@163.com
+
+Weiwei Bai
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+baiweiwei_dl@163.com
+
+Abstract-In this paper, an event-triggered optimal tracking control problem is studied for uncertain nonlinear systems based on reinforcement learning (RL). Firstly, a class of nonlinear dynamic systems with general uncertainty is considered, and an augmented system comprising the tracking error and the reference signal is constructed. Secondly, an improved adaptive dynamic programming (ADP) technique, involving an actor-critic algorithm and fuzzy logic systems, is developed to solve the Hamilton-Jacobi-Bellman (HJB) equation of the nominal augmented system. Thirdly, in order to reduce the mechanical wear of the actuator and the energy consumption, an event-triggered mechanism is applied to the controller updates. Finally, the stability analysis proves via Lyapunov theory that all signals in the closed-loop system are uniformly ultimately bounded (UUB). Simulation results verify the feasibility of the proposed scheme.
+
+Index Terms-ADP, event-triggered, reinforcement learning, nonlinear, fuzzy logic systems, tracking control.
+
+§ I. INTRODUCTION
+
+Reinforcement learning (RL) is an effective technique for facilitating adaptive optimization strategies [1], [2]. Generally, optimization is implemented by seeking the cost function minimizer or maximizer that solves the Hamilton-Jacobi-Bellman (HJB) equation [3]. However, obtaining an analytic solution of the HJB equation directly is challenging for nonlinear dynamic systems [4]. Therefore, many researchers have proposed numerical solutions of the HJB equation [5]. Adaptive dynamic programming (ADP), an advanced numerical solution method, has been widely applied to achieve optimal tracking control of nonlinear systems.
+
+In contrast to traditional dynamic programming, ADP can be utilized to design optimal controllers forward in time, which effectively avoids the "curse of dimensionality" [6], [7]. In addition, an improved ADP framework consisting of an actor-critic algorithm and fuzzy logic systems is constructed here. So far, many scholars have devoted themselves to developing ADP techniques [8]-[10]. In [11], an ADP method was implemented to solve a neuro-optimal control problem of nonlinear dynamic systems by employing one critic and two actor networks. In [12], a neural-network-based ADP method was developed to solve the optimal tracking control problem of a class of nonlinear systems with unmatched uncertainties. In [13], linear singularly perturbed systems were studied via an ADP framework to achieve optimal control. These works concentrated on the application and development of ADP and RL, but they did not consider the mechanical wear of the actuator or the energy consumption. As a result, it is necessary to adopt an event-triggered mechanism in the control design to reduce mechanical wear and save energy in actual engineering practice [14].
+
+The key to an event-triggered control algorithm is the triggering threshold [14]. When the triggering signal exceeds the threshold, the control policy is updated [15], [16]. In this paper, an event-triggered optimal tracking control scheme for uncertain nonlinear systems based on RL is developed. There are two main contributions:
+
+(1) An improved ADP and RL algorithm involving an actor-critic structure and fuzzy logic systems is developed, which yields the optimal control strategy and effectively balances tracking performance against control cost.
+
+(2) An event-triggered mechanism is incorporated into the controller design, so that unnecessary control updates are avoided, reducing mechanical wear and saving energy in engineering practice.
+
+The organization of this paper is as follows. The system dynamics and fuzzy logic systems are stated in Section II. The optimal controller and the event-triggered controller are designed in Sections III and IV, respectively. The stability analysis, simulation and conclusion are presented in Sections V, VI and VII, respectively.
+
+§ II. PROBLEM FORMULATION AND PRELIMINARIES
+
+§ A. SYSTEM DYNAMIC DESCRIPTION
+
+Consider a class of continuous-time nonlinear dynamic systems which can be described by
+
+$$
+\dot{x}\left( t\right) = f\left( {x\left( t\right) }\right) + g\left( {x\left( t\right) }\right) u\left( t\right) + \mathcal{D}\left( {x\left( t\right) }\right) \tag{1}
+$$
+
+where $x\left( t\right) \in {\mathbb{R}}^{n}$ is the state variable, $u\left( t\right) \in {\mathbb{R}}^{m}$ is the control input, $f\left( {x\left( t\right) }\right) \in {\mathbb{R}}^{n}$ and $g\left( {x\left( t\right) }\right) \in {\mathbb{R}}^{n \times m}$ are an unknown smooth function and an unknown smooth function matrix, respectively, and $\mathcal{D}\left( {x\left( t\right) }\right)$ is an unknown disturbance satisfying $\parallel \mathcal{D}\left( {x\left( t\right) }\right) \parallel \leq {\lambda }_{\mathcal{D}}$ for a positive constant ${\lambda }_{\mathcal{D}}$ .
+
+To achieve tracking control, a reference signal is given by
+
+$$
+\dot{r}\left( t\right) = \delta \left( {r\left( t\right) }\right) \tag{2}
+$$
+
+where $r\left( t\right) \in {\mathbb{R}}^{n}$ is a bounded desired trajectory and $\delta \left( {r\left( t\right) }\right)$ is a Lipschitz continuous function. Let the tracking error be
+
+$$
+e\left( t\right) = x\left( t\right) - r\left( t\right) \tag{3}
+$$
+
+Combining equations (1), (2) and (3) yields the following tracking-error dynamics
+
+$$
+\dot{e}\left( t\right) = f\left( {x\left( t\right) }\right) + g\left( {x\left( t\right) }\right) u\left( t\right) + \mathcal{D}\left( {x\left( t\right) }\right) - \delta \left( {r\left( t\right) }\right) \tag{4}
+$$
+
+Noting that $x\left( t\right) = e\left( t\right) + r\left( t\right)$ , equation (4) can be rewritten as
+
+$$
+\dot{e}\left( t\right) = f\left( {e\left( t\right) + r\left( t\right) }\right) - \delta \left( {r\left( t\right) }\right) + g\left( {e\left( t\right) + r\left( t\right) }\right) u\left( t\right) + \mathcal{D}\left( {e\left( t\right) + r\left( t\right) }\right) \tag{5}
+$$
+
+For the sake of facilitating the description, define $\xi \left( t\right) = {\left\lbrack {e}^{\mathrm{T}}\left( t\right) ,{r}^{\mathrm{T}}\left( t\right) \right\rbrack }^{\mathrm{T}} \in {\mathbb{R}}^{2n}$ ; then the dynamic systems (2) and (5) can be augmented into the concise form
+
+$$
+\dot{\xi }\left( t\right) = F\left( {\xi \left( t\right) }\right) + G\left( {\xi \left( t\right) }\right) u\left( t\right) + \Delta \mathbb{D}\left( {\xi \left( t\right) }\right) \tag{6}
+$$
+
+where $F\left( {\xi \left( t\right) }\right)$ and $G\left( {\xi \left( t\right) }\right)$ are new matrices and $\Delta \mathbb{D}\left( {\xi \left( t\right) }\right)$ can be still regarded as a new uncertain term. In particular, $F\left( {\xi \left( t\right) }\right) = \left\lbrack \begin{matrix} f\left( {e\left( t\right) + r\left( t\right) }\right) - \delta \left( t\right) \\ \delta \left( t\right) \end{matrix}\right\rbrack ,G\left( {\xi \left( t\right) }\right) =$ $\left\lbrack \begin{matrix} g\left( {e\left( t\right) + r\left( t\right) }\right) \\ {0}_{n \times m} \end{matrix}\right\rbrack$ and $\Delta \mathbb{D}\left( {\xi \left( t\right) }\right) = \left\lbrack \begin{matrix} \mathcal{D}\left( {e\left( t\right) + r\left( t\right) }\right) \\ {0}_{n \times 1} \end{matrix}\right\rbrack$ .
+
+Undoubtedly, the new uncertain term $\Delta \mathbb{D}\left( {\xi \left( t\right) }\right)$ is still upper bounded since
+
+$$
+\parallel \Delta \mathbb{D}\left( {\xi \left( t\right) }\right) \parallel = \parallel \mathcal{D}\left( {e\left( t\right) + r\left( t\right) }\right) \parallel = \parallel \mathcal{D}\left( {x\left( t\right) }\right) \parallel \leq {\lambda }_{\mathcal{D}} \tag{7}
+$$
+
+To achieve tracking of the reference signal (2) by the dynamic system (1), a feedback controller $u\left( \xi \right)$ will be constructed such that the closed-loop system is asymptotically stable despite the uncertain and bounded term $\Delta \mathbb{D}\left( {\xi \left( t\right) }\right)$ . The optimal control policy can then be obtained by considering an appropriate cost function for the resulting nominal system, as in [5].
+
+§ B. FUZZY LOGIC SYSTEMS
+
+For a nonlinear continuous function $P\left( x\right)$ defined over a compact set $\mathbb{U}$ and any constant $\varepsilon > 0$ , there exists a fuzzy logic system ${\omega }^{\mathrm{T}}\varphi \left( x\right)$ such that [17]
+
+$$
+\mathop{\sup }\limits_{{x \in \mathbb{U}}}\left| {P\left( x\right) - {\omega }^{\mathrm{T}}\varphi \left( x\right) }\right| \leq \varepsilon \tag{8}
+$$
+
+where $x = {\left\lbrack {x}_{1},\ldots ,{x}_{j}\right\rbrack }^{\mathrm{T}}$ is the input vector of the fuzzy logic system, $\omega = {\left\lbrack {\omega }_{1},{\omega }_{2},\ldots ,{\omega }_{L}\right\rbrack }^{\mathrm{T}} \in {\mathbb{R}}^{L}$ is the membership-degree vector, $L > 1$ is the number of fuzzy rules, and $\varepsilon$ is the minimum fuzzy approximation error. $\varphi \left( x\right) =$ ${\left\lbrack {\varphi }_{1}\left( x\right) ,{\varphi }_{2}\left( x\right) ,\ldots ,{\varphi }_{L}\left( x\right) \right\rbrack }^{\mathrm{T}}$ is the fuzzy basis function vector, whose components are selected as follows:
+
+$$
+{\varphi }_{l}\left( x\right) = \frac{\mathop{\prod }\limits_{{i = 1}}^{j}{\mu }_{{F}_{i}^{l}}\left( {x}_{i}\right) }{\mathop{\sum }\limits_{{l = 1}}^{N}\left( {\mathop{\prod }\limits_{{i = 1}}^{j}{\mu }_{{F}_{i}^{l}}\left( {x}_{i}\right) }\right) },\quad \left( {l = 1,\ldots ,N}\right) \tag{9}
+$$
+
+where ${F}_{i}^{l}\left( {i = 1,\ldots ,j;l = 1,\ldots ,N}\right)$ is the fuzzy set and ${\mu }_{{F}_{i}^{l}}\left( {x}_{i}\right)$ is the membership function.
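+
+As a numerical illustration of (9), the sketch below evaluates normalized fuzzy basis functions; the Gaussian membership functions, their centers, and the shared width are hypothetical choices, not values from this paper.
+
+```python
+import numpy as np
+
+def gaussian_membership(x, centers, width):
+    """Gaussian membership values mu_{F_i^l}(x_i), one row per rule."""
+    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))
+
+def fuzzy_basis(x, centers, width=1.0):
+    """Normalized fuzzy basis functions of equation (9).
+
+    x       : input vector of dimension j
+    centers : (N, j) array of membership-function centers, one row per rule
+    Returns an N-vector that is non-negative and sums to one.
+    """
+    # Product of the j membership values for each rule (numerator of (9))
+    firing = np.prod(gaussian_membership(x, centers, width), axis=1)
+    # Normalize by the sum over all rules (denominator of (9))
+    return firing / np.sum(firing)
+
+phi = fuzzy_basis(np.array([0.2, -0.5]),
+                  np.array([[-1.0, -1.0], [0.0, 0.0], [1.0, 1.0]]))
+```
+
+Because the basis vector is a partition of unity, each component can be read as the normalized firing strength of one rule.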
+
+§ III. OPTIMAL CONTROL DESIGN
+
+In this section, ADP, comprising an actor-critic algorithm and fuzzy logic systems, is employed to design the value function ${L}^{ * }\left( \xi \right)$ and control policy ${u}^{ * }\left( \xi \right)$ , and to construct the membership-degree update laws.
+
+In the actor-critic framework, the value function and the control policy are approximated by the critic and actor fuzzy systems, respectively. The optimal cost function (13) and the feedback controller (15) represent the value function and the control policy of the optimal tracking control problem.
+
+Consider the nominal part of the augmented system (6), that is
+
+$$
+\dot{\xi }\left( t\right) = F\left( {\xi \left( t\right) }\right) + G\left( {\xi \left( t\right) }\right) u\left( t\right) \tag{10}
+$$
+
+For the nominal system (10), the following cost function is considered
+
+$$
+L\left( \xi \right) = {\int }_{t}^{\infty }\left\lbrack {Q\left( {\xi \left( \tau \right) }\right) + {u}^{\mathrm{T}}\left( \tau \right) {Ru}\left( \tau \right) }\right\rbrack {d\tau } \tag{11}
+$$
+
+where $Q\left( \xi \right) = {\xi }^{\mathrm{T}}\mathcal{Q}\xi$ and $R = {R}^{\mathrm{T}}$ ; $\mathcal{Q}$ and $R$ are positive definite matrices.
+
+Subsequently, one can define the Hamiltonian of the optimal problem
+
+$$
+H\left( {\xi ,u\left( \xi \right) }\right) = Q\left( \xi \right) + {u}^{\mathrm{T}}\left( \xi \right) {Ru}\left( \xi \right) + {\nabla }^{\mathrm{T}}L\left( \xi \right) \left\lbrack {F\left( \xi \right) + G\left( \xi \right) u\left( \xi \right) }\right\rbrack \tag{12}
+$$
+
+where $\nabla L\left( \xi \right)$ denotes the gradient of $L\left( \xi \right)$ with respect to $\xi$ .
+
+Generally, the optimal controller can be derived once the optimal cost function is found. The optimal cost function is defined as the minimum of the cost function (11):
+
+$$
+{L}^{ * }\left( \xi \right) = \mathop{\min }\limits_{u}{\int }_{t}^{\infty }\left\lbrack {Q\left( {\xi \left( \tau \right) }\right) + {u}^{\mathrm{T}}\left( \tau \right) {Ru}\left( \tau \right) }\right\rbrack {d\tau } \tag{13}
+$$
+
+The optimal cost function is the solution of the HJB equation which satisfies
+
+$$
+H\left( {\xi ,{u}^{ * }\left( \xi \right) ,{L}^{ * }\left( \xi \right) }\right) = Q\left( \xi \right) + {u}^{ * \mathrm{T}}\left( \xi \right) R{u}^{ * }\left( \xi \right) + {\nabla }^{\mathrm{T}}{L}^{ * }\left( \xi \right) \left\lbrack {F\left( \xi \right) + G\left( \xi \right) {u}^{ * }\left( \xi \right) }\right\rbrack = 0 \tag{14}
+$$
+
+Consequently, the optimal feedback controller is obtained as
+
+$$
+{u}^{ * }\left( \xi \right) = - \frac{1}{2}{R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) \nabla {L}^{ * }\left( \xi \right) \tag{15}
+$$
+
+The HJB equation (14) must be solved to obtain the optimal controller (15) for the nominal system (10). However, its solution is difficult to obtain directly, so fuzzy logic systems and an adaptive actor-critic scheme are utilized to find an estimated solution.
+
+Fuzzy logic systems are employed to reconstruct the value function ${L}^{ * }\left( \xi \right)$
+
+$$
+{L}^{ * }\left( \xi \right) = {\omega }^{\mathrm{T}}\varphi \left( \xi \right) + \varepsilon \left( \xi \right) \tag{16}
+$$
+
+where $\omega$ is the ideal membership-degree vector of the fuzzy logic system, $\varphi \left( \xi \right)$ is the fuzzy basis function vector, and $\varepsilon \left( \xi \right)$ is the unknown fuzzy approximation error.
+
+Considering (15) and (16) yields the optimal controller described by fuzzy logic systems as
+
+$$
+{u}^{ * }\left( \xi \right) = - \frac{1}{2}{R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) \left\lbrack {{\nabla }^{\mathrm{T}}\varphi \left( \xi \right) \omega + \nabla \varepsilon \left( \xi \right) }\right\rbrack \tag{17}
+$$
+
+To facilitate the analysis, define the positive semi-definite matrix
+
+$$
+A\left( \xi \right) = \nabla \varphi \left( \xi \right) G\left( \xi \right) {R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) {\nabla }^{\mathrm{T}}\varphi \left( \xi \right) \tag{18}
+$$
+
+Combining (16), (17) and (18), the HJB equation reconstructed by the fuzzy logic systems becomes
+
+$$
+H\left( {\xi ,{u}^{ * }\left( \xi \right) ,{L}^{ * }\left( \xi \right) }\right) = Q\left( \xi \right) + {\omega }^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{4}{\omega }^{\mathrm{T}}A\left( \xi \right) \omega + {\varepsilon }_{HJB} = 0 \tag{19}
+$$
+
+and the residual error ${\varepsilon }_{HJB}$ is expressed as
+
+$$
+\begin{aligned}
+{\varepsilon }_{HJB} = {} & {\nabla }^{\mathrm{T}}\varepsilon \left( \xi \right) \left( {F\left( \xi \right) + G\left( \xi \right) {u}^{ * }\left( \xi \right) }\right) + \frac{1}{4}{\nabla }^{\mathrm{T}}\varepsilon \left( \xi \right) G\left( \xi \right) {R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) \nabla \varepsilon \left( \xi \right) \\
+ & + \frac{1}{2}{\nabla }^{\mathrm{T}}\varepsilon \left( \xi \right) G\left( \xi \right) {R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) {\nabla }^{\mathrm{T}}\varphi \left( \xi \right) \omega
+\end{aligned} \tag{20}
+$$
+
+The estimates of the value function ${L}^{ * }\left( \xi \right)$ and the control policy ${u}^{ * }\left( \xi \right)$ are constructed by the critic and actor fuzzy systems, respectively:
+
+$$
+{\widehat{L}}^{ * }\left( \xi \right) = {\widehat{\omega }}_{c}^{\mathrm{T}}\varphi \left( \xi \right) \tag{21}
+$$
+
+$$
+{\widehat{u}}^{ * }\left( \xi \right) = - \frac{1}{2}{R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) {\nabla }^{\mathrm{T}}\varphi \left( \xi \right) {\widehat{\omega }}_{a} \tag{22}
+$$
+
+where ${\widehat{\omega }}_{a}$ and ${\widehat{\omega }}_{c}$ are the membership degrees estimated by the actor and the critic, respectively.
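+
+At evaluation time, (21) and (22) reduce to a few matrix products. The sketch below is a shape-level illustration, not the authors' code; all dimensions and matrices in the usage example are hypothetical, except $R = 0.067$ , which is the control weight used later in the simulation section.
+
+```python
+import numpy as np
+
+def critic_value(omega_c, phi):
+    """Critic estimate L_hat(xi) = omega_c^T phi(xi), eq. (21)."""
+    return omega_c @ phi
+
+def actor_control(omega_a, grad_phi, G, R_inv):
+    """Actor estimate u_hat(xi) = -(1/2) R^{-1} G^T grad_phi^T omega_a, eq. (22).
+
+    grad_phi : (L, 2n) Jacobian of the fuzzy basis vector phi(xi)
+    G        : (2n, m) input matrix of the augmented system
+    R_inv    : (m, m) inverse of the control weight R
+    """
+    return -0.5 * R_inv @ G.T @ grad_phi.T @ omega_a
+
+# Hypothetical dimensions: L = 4 rules, 2n = 4 augmented states, m = 1 input
+omega_c = np.ones(4)
+omega_a = np.ones(4)
+phi = np.full(4, 0.25)
+grad_phi = np.eye(4)                                # toy Jacobian, L = 2n
+G = np.vstack([np.ones((2, 1)), np.zeros((2, 1))])  # g stacked over 0_{n x m}
+R_inv = np.array([[1.0 / 0.067]])                   # R = 0.067 (Section VI)
+value = critic_value(omega_c, phi)
+u_hat = actor_control(omega_a, grad_phi, G, R_inv)
+```
+
+Note that the zero block in $G$ reflects the augmented structure of (6): the control acts only on the error half of $\xi$ .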
+
+Using (21) and (22), one can derive the following estimated Hamiltonian
+
+$$
+\widehat{H}\left( {\xi ,{\widehat{u}}^{ * }\left( \xi \right) ,{\widehat{L}}^{ * }\left( \xi \right) }\right) = Q\left( \xi \right) + \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a} + {\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a} \tag{23}
+$$
+
+To obtain the membership-degree update laws of the fuzzy logic systems, define the objective function ${E}_{c} = \frac{1}{2}{e}_{c}^{\mathrm{T}}{e}_{c}$ , where ${e}_{c} = \widehat{H}\left( {\xi ,{\widehat{u}}^{ * }\left( \xi \right) ,{\widehat{L}}^{ * }\left( \xi \right) }\right) - H\left( {\xi ,{u}^{ * }\left( \xi \right) ,{L}^{ * }\left( \xi \right) }\right)$ is the Bellman error. To overcome the difficulty of searching for the controller and adaptive laws, the following assumption is made, under which an additional stabilizing term can be constructed to improve the learning process.
+
+Assumption 1 ([5]): Let ${L}_{s}\left( \xi \right)$ be a continuously differentiable Lyapunov function candidate satisfying
+
+$$
+{\dot{L}}_{s}\left( \xi \right) = {\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \left( {F\left( \xi \right) + G\left( \xi \right) {u}^{ * }\left( \xi \right) }\right) < 0 \tag{24}
+$$
+
+Then there exists a positive definite matrix $\mathfrak{K} \in {\mathbb{R}}^{{2n} \times {2n}}$ such that
+
+$$
+{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \left( {F\left( \xi \right) + G\left( \xi \right) {u}^{ * }\left( \xi \right) }\right) = - {\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \mathfrak{K}\nabla {L}_{s}\left( \xi \right) \leq - {\lambda }_{\min }\left( \mathfrak{K}\right) {\begin{Vmatrix}\nabla {L}_{s}\left( \xi \right) \end{Vmatrix}}^{2} \tag{25}
+$$
+
+Based on gradient descent applied to the two Hamiltonians $H\left( {\xi ,{u}^{ * }\left( \xi \right) ,{L}^{ * }\left( \xi \right) }\right)$ and $\widehat{H}\left( {\xi ,{\widehat{u}}^{ * }\left( \xi \right) ,{\widehat{L}}^{ * }\left( \xi \right) }\right)$ , the membership-degree update laws of the fuzzy logic systems are designed as
+
+$$
+\begin{aligned}
+{\dot{\widehat{\omega }}}_{a} = {} & - {\alpha }_{a}\left( {\frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{a} - \frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{c}}\right) \left( {Q\left( \xi \right) + \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right. \\
+ & \left. {+{\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \\
+ & + \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right)
+\end{aligned} \tag{26}
+$$
+
+$$
+\begin{aligned}
+{\dot{\widehat{\omega }}}_{c} = {} & - {\alpha }_{c}\left( {\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \left( {Q\left( \xi \right) + \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right. \\
+ & \left. {+{\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \\
+ & + \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right)
+\end{aligned} \tag{27}
+$$
+
+where ${\alpha }_{a}$ and ${\alpha }_{c}$ are the learning rates of the actor and critic systems, respectively, and ${\alpha }_{s}$ is the adjustable parameter of the additional stabilizing term.
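+
+To make the structure of the update laws (26) and (27) concrete, the sketch below performs one explicit-Euler step of both laws. The learning rates default to the values used later in the simulation section; the array shapes, the time step, and the scalar `extra` stand-in for the stabilizing ${L}_{s}$ term are illustrative assumptions.
+
+```python
+import numpy as np
+
+def update_weights(w_a, w_c, phi_grad_F, A, Q,
+                   alpha_a=0.001, alpha_c=1.0, extra=0.0, dt=0.01):
+    """One explicit-Euler step of the update laws (26) and (27).
+
+    w_a, w_c   : actor and critic membership-degree estimates
+    phi_grad_F : vector grad_phi(xi) @ F(xi)
+    A          : matrix A(xi) from eq. (18)
+    Q          : scalar Q(xi)
+    extra      : stand-in for the term (1/2) alpha_s grad_phi G R^-1 G^T grad_Ls
+    """
+    # Common second factor of (26) and (27): the estimated Hamiltonian (23)
+    e_c = (Q + 0.25 * w_a @ A @ w_a + w_c @ phi_grad_F
+           - 0.5 * w_c @ A @ w_a)
+    dw_a = -alpha_a * (0.5 * A @ w_a - 0.5 * A @ w_c) * e_c + extra
+    dw_c = -alpha_c * (phi_grad_F - 0.5 * A @ w_a) * e_c + extra
+    return w_a + dt * dw_a, w_c + dt * dw_c
+```
+
+Both laws share the same Hamiltonian residual factor; they differ only in the gradient direction multiplying it, which is what the first parenthesis of each law encodes.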
+
+§ IV. EVENT-TRIGGERED CONTROL IMPLEMENTATION
+
+The event triggering mechanism is defined as
+
+$$
+{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) = {u}^{ * }\left( {\xi \left( {t}_{d}\right) }\right) ,\forall t \in \left\lbrack {{t}_{d},{t}_{d + 1}}\right) \tag{28}
+$$
+
+$$
+{t}_{d + 1} = \inf \left\{ {t \in \mathbb{R} : \left| {\Gamma \left( t\right) }\right| \geq \Delta \left| {{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) }\right| + M}\right\} ,\;{t}_{1} = 0 \tag{29}
+$$
+
+where $\Gamma \left( t\right) = {u}^{ * }\left( {\xi \left( t\right) }\right) - {u}_{e}^{ * }\left( {\xi \left( t\right) }\right)$ is the event-triggered error, ${t}_{d}$ with $d \in {\mathbb{Z}}^{ + }$ denotes the controller update instants, and $0 < \Delta < 1$ and $M > 0$ are design parameters.
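+
+The triggering rule (29) amounts to a simple threshold test, sketched below for a scalar control; the scalar setting is an assumption, while the defaults $\Delta = {0.39}$ and $M = {0.001}$ are the values used later in the simulation section.
+
+```python
+def should_trigger(u_star, u_held, Delta=0.39, M=0.001):
+    """Trigger test of eq. (29): fire when |Gamma(t)| >= Delta*|u_e| + M.
+
+    u_star : current optimal control u*(xi(t))
+    u_held : control held since the last trigger, u_e*(xi(t))
+    """
+    gamma = abs(u_star - u_held)  # event-triggered error Gamma(t)
+    return gamma >= Delta * abs(u_held) + M
+```
+
+Between triggers the plant keeps using `u_held`; whenever the test fires, a fresh control value is transmitted and becomes the new held value.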
+
+Between triggering instants, the control policy is held at ${u}^{ * }\left( {\xi \left( {t}_{d}\right) }\right)$ ; when an event is triggered, the policy is updated and denoted by ${u}_{e}^{ * }\left( {\xi \left( {t}_{d + 1}\right) }\right)$ . Introducing two continuous time-varying parameters ${\rho }_{1}\left( t\right)$ and ${\rho }_{2}\left( t\right)$ with $\left| {{\rho }_{1}\left( t\right) }\right| \leq 1$ and $\left| {{\rho }_{2}\left( t\right) }\right| \leq 1$ , one has ${u}^{ * }\left( {\xi \left( t\right) }\right) = \left( {1 + {\rho }_{1}\left( t\right) \Delta }\right) {u}_{e}^{ * }\left( {\xi \left( t\right) }\right) + {\rho }_{2}\left( t\right) M$ . The event-triggered controller can then be rewritten as
+
+$$
+{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) = \frac{{u}^{ * }\left( {\xi \left( t\right) }\right) - {\rho }_{2}\left( t\right) M}{1 + {\rho }_{1}\left( t\right) \Delta } \tag{30}
+$$
+
+Combining (17) and (30) yields
+
+$$
+{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) = - \frac{1}{2\rho }{R}^{-1}\left\lbrack {{G}^{\mathrm{T}}\left( {\xi \left( t\right) }\right) {\nabla }^{\mathrm{T}}\varphi \left( {\xi \left( t\right) }\right) \omega + {\varepsilon }_{e}\left( {\xi \left( t\right) }\right) }\right\rbrack \tag{31}
+$$
+
+where $\rho = 1 + {\rho }_{1}\left( t\right) \Delta ,{\varepsilon }_{e}\left( {\xi \left( t\right) }\right) = \nabla \varepsilon \left( {\xi \left( t\right) }\right) + 2{\rho }_{2}\left( t\right) {RM}$ .
+
+Similarly, based on the actor fuzzy logic system, the estimated event-triggered controller is obtained as
+
+$$
+{\widehat{u}}_{e}^{ * }\left( {\xi \left( t\right) }\right) = - \frac{1}{2\rho }{R}^{-1}{G}^{\mathrm{T}}\left( {\xi \left( t\right) }\right) {\nabla }^{\mathrm{T}}\varphi \left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a} \tag{32}
+$$
+
+Considering the HJB equation (14), the value function (21) and the event-triggered controller (32), one obtains the following Hamiltonian
+
+$$
+\begin{aligned}
+{\widehat{H}}_{e}\left( {\xi \left( t\right) ,{\widehat{u}}_{e}^{ * }\left( {\xi \left( t\right) }\right) ,{\widehat{L}}^{ * }\left( {\xi \left( t\right) }\right) }\right) = {} & Q\left( {\xi \left( t\right) }\right) + \frac{1}{4{\rho }^{2}}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a} \\
+ & + {\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( {\xi \left( t\right) }\right) F\left( {\xi \left( t\right) }\right) - \frac{1}{2\rho }{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}
+\end{aligned} \tag{33}
+$$
+
+Subsequently, the membership-degree update laws under the event-triggered mechanism are constructed as
+
+$$
+\begin{aligned}
+{\dot{\widehat{\omega }}}_{ae} = {} & - {\alpha }_{a}\left( {\frac{1}{2{\rho }^{2}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a} - \frac{1}{2\rho }A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{c}}\right) \left( {Q\left( {\xi \left( t\right) }\right) + \frac{1}{4{\rho }^{2}}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}}\right. \\
+ & \left. {+{\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( {\xi \left( t\right) }\right) F\left( {\xi \left( t\right) }\right) - \frac{1}{2\rho }{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}}\right) \\
+ & + \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( {\xi \left( t\right) }\right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( {\xi \left( t\right) }\right)
+\end{aligned} \tag{34}
+$$
+
+$$
+\begin{aligned}
+{\dot{\widehat{\omega }}}_{ce} = {} & - {\alpha }_{c}\left( {\nabla \varphi \left( {\xi \left( t\right) }\right) F\left( {\xi \left( t\right) }\right) - \frac{1}{2\rho }A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}}\right) \left( {Q\left( {\xi \left( t\right) }\right) + \frac{1}{4{\rho }^{2}}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}}\right. \\
+ & \left. {+{\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( {\xi \left( t\right) }\right) F\left( {\xi \left( t\right) }\right) - \frac{1}{2\rho }{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}}\right) \\
+ & + \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( {\xi \left( t\right) }\right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( {\xi \left( t\right) }\right)
+\end{aligned} \tag{35}
+$$
+
+Theorem 1: Consider the dynamic system (1) with the optimal feedback controller (22), the event-triggered controller (32), and the membership-degree update laws (26), (27), (34) and (35). Then, by Lyapunov theory, all signals in the closed-loop system are uniformly ultimately bounded (UUB).
+
+To investigate the stability of the error dynamics and the closed-loop states, the following assumption is made.
+
+Assumption 2: On a compact set $\Omega$ , $G\left( \xi \right) ,\nabla \varphi \left( \xi \right) ,\nabla \varepsilon \left( \xi \right)$ , ${\xi }^{ * }$ and ${\varepsilon }_{HJB}$ are bounded: $\parallel G\left( \xi \right) \parallel \leq {\lambda }_{g},\parallel \nabla \varphi \left( \xi \right) \parallel \leq {\lambda }_{\varphi }$ , $\parallel \nabla \varepsilon \left( \xi \right) \parallel \leq {\lambda }_{\varepsilon },\begin{Vmatrix}{\xi }^{ * }\end{Vmatrix} \leq {\lambda }_{\xi }$ and $\begin{Vmatrix}{\varepsilon }_{HJB}\end{Vmatrix} \leq {\lambda }_{HJB}$ , where ${\lambda }_{g}$ , ${\lambda }_{\varphi },{\lambda }_{\varepsilon },{\lambda }_{\xi }$ and ${\lambda }_{HJB}$ are positive constants.
+
+§ V. STABILITY ANALYSIS
+
+In this section, Lyapunov theory is employed to prove Theorem 1.
+
+Case 1: Events are not triggered. Consider the feedback controller (22) and the associated membership-degree update laws (26) and (27). The HJB equation (19) can be rearranged as
+
+$$
+Q\left( \xi \right) = - {\omega }^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) + \frac{1}{4}{\omega }^{\mathrm{T}}A\left( \xi \right) \omega - {\varepsilon }_{HJB} \tag{36}
+$$
+
+Define the estimation errors ${\widetilde{\omega }}_{a} = \omega - {\widehat{\omega }}_{a}$ and ${\widetilde{\omega }}_{c} = \omega - {\widehat{\omega }}_{c}$ , so that ${\dot{\widetilde{\omega }}}_{a} = - {\dot{\widehat{\omega }}}_{a}$ and ${\dot{\widetilde{\omega }}}_{c} = - {\dot{\widehat{\omega }}}_{c}$ . Then, from the update laws (26) and (27), one has
+
+$$
+\begin{aligned}
+{\dot{\widetilde{\omega }}}_{a} = {} & - {\alpha }_{a}\left( {-\frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{a} + \frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{c}}\right) \left( {Q\left( \xi \right) + \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right. \\
+ & \left. {+{\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) - \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right)
+\end{aligned} \tag{37}
+$$
+
+$$
+\begin{aligned}
+{\dot{\widetilde{\omega }}}_{c} = {} & - {\alpha }_{c}\left( {-\nabla \varphi \left( \xi \right) F\left( \xi \right) + \frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \left( {Q\left( \xi \right) + \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right. \\
+ & \left. {+{\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) - \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right)
+\end{aligned} \tag{38}
+$$
+
+Then the following Lyapunov function can be chosen as
+
+$$
+S\left( t\right) = \frac{1}{2{\alpha }_{a}}{\widetilde{\omega }}_{a}^{\mathrm{T}}{\widetilde{\omega }}_{a} + \frac{1}{2{\alpha }_{c}}{\widetilde{\omega }}_{c}^{\mathrm{T}}{\widetilde{\omega }}_{c} + \frac{{\alpha }_{s}}{{\alpha }_{a}}{L}_{s}\left( \xi \right) + \frac{{\alpha }_{s}}{{\alpha }_{c}}{L}_{s}\left( \xi \right) \tag{39}
+$$
+
+its derivative is
+
+$$
+\begin{aligned}
+\dot{S}\left( t\right) = {} & \frac{1}{{\alpha }_{a}}{\widetilde{\omega }}_{a}^{\mathrm{T}}{\dot{\widetilde{\omega }}}_{a} + \frac{1}{{\alpha }_{c}}{\widetilde{\omega }}_{c}^{\mathrm{T}}{\dot{\widetilde{\omega }}}_{c} + \frac{{\alpha }_{s}}{{\alpha }_{a}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi } + \frac{{\alpha }_{s}}{{\alpha }_{c}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi } \\
+ = {} & \left( {{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{4}{\omega }^{\mathrm{T}}A\left( \xi \right) \omega - \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a} + {\varepsilon }_{HJB} + \frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \\
+ & \times \left( {-{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) + \frac{1}{2}{\widetilde{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{c} + \frac{1}{2}{\widetilde{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a} - \frac{1}{2}{\widetilde{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \\
+ & - \frac{{\alpha }_{s}}{2{\alpha }_{a}}{\widetilde{\omega }}_{a}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right) - \frac{{\alpha }_{s}}{2{\alpha }_{c}}{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right) \\
+ & + \frac{{\alpha }_{s}}{{\alpha }_{a}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi } + \frac{{\alpha }_{s}}{{\alpha }_{c}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi }
+\end{aligned} \tag{40}
+$$
+
+Substituting (22) into (10), and comparing with the optimal dynamics ${\dot{\xi }}^{ * } = F\left( \xi \right) + G\left( \xi \right) {u}^{ * }\left( \xi \right)$ under the optimal controller ${u}^{ * }\left( \xi \right)$ , one obtains
+
+$$
+\nabla \varphi \left( \xi \right) F\left( \xi \right) = \nabla \varphi \left( \xi \right) \dot{\xi } + \frac{1}{2}\nabla \varphi \left( \xi \right) G\left( \xi \right) {R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) {\nabla }^{\mathrm{T}}\varphi \left( \xi \right) {\widehat{\omega }}_{a} \tag{41}
+$$
+
+$$
+\dot{\xi } = {\dot{\xi }}^{ * } + \frac{1}{2}G{R}^{-1}{G}^{\mathrm{T}}\left( {{\nabla }^{\mathrm{T}}\varphi \left( \xi \right) {\widetilde{\omega }}_{a} + \nabla \varepsilon \left( \xi \right) }\right) \tag{42}
+$$
+
+Considering the above formulations, one can further derive that
+
+$$
+\begin{aligned}
+\dot{S}\left( t\right) = {} & \left( {{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) {\dot{\xi }}^{ * } + \frac{1}{2}{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla \varepsilon \left( \xi \right) + \frac{1}{2}{\widetilde{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widetilde{\omega }}_{a}}\right. \\
+ & \left. {-\frac{1}{2}{\widetilde{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) \omega + \frac{1}{4}{\widetilde{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widetilde{\omega }}_{a} + {\varepsilon }_{HJB}}\right) \\
+ & \times \left( {-{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) {\dot{\xi }}^{ * } - \frac{1}{2}{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla \varepsilon \left( \xi \right) - {\widetilde{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widetilde{\omega }}_{a} - \frac{1}{2}{\widetilde{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widetilde{\omega }}_{a}}\right) \\
+ & - \frac{{\alpha }_{s}}{2{\alpha }_{a}}{\widetilde{\omega }}_{a}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right) - \frac{{\alpha }_{s}}{2{\alpha }_{c}}{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right) \\
+ & + \frac{{\alpha }_{s}}{{\alpha }_{a}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi } + \frac{{\alpha }_{s}}{{\alpha }_{c}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi }
+\end{aligned} \tag{43}
+$$
+
+Next, expanding equation (43) and applying Assumption 2 yields
+
+$$
+\begin{aligned}
+\dot{S}\left( t\right) \leq {} & - {\lambda }_{1}{\begin{Vmatrix}{\widetilde{\omega }}_{a}\end{Vmatrix}}^{4} - {\lambda }_{2}{\begin{Vmatrix}{\widetilde{\omega }}_{c}\end{Vmatrix}}^{2} + {\lambda }_{3} + \frac{{\alpha }_{s}}{2{\alpha }_{a}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla \varepsilon \left( \xi \right) \\
+ & + \frac{{\alpha }_{s}}{{\alpha }_{a}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \left( {F\left( \xi \right) + G{u}^{ * }\left( \xi \right) }\right) + \frac{{\alpha }_{s}}{2{\alpha }_{c}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla \varepsilon \left( \xi \right) \\
+ & + \frac{{\alpha }_{s}}{{\alpha }_{c}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \left( {F\left( \xi \right) + G{u}^{ * }\left( \xi \right) }\right)
+\end{aligned} \tag{44}
+$$
+
+where ${\lambda }_{1},{\lambda }_{2}$ and ${\lambda }_{3}$ are positive constants.
+
+Considering Assumption 1 and equation (44), one can further derive that
+
+$$
+\dot{S}\left( t\right) \leq - {\lambda }_{1}{\begin{Vmatrix}{\widetilde{\omega }}_{a}\end{Vmatrix}}^{4} - {\lambda }_{2}{\begin{Vmatrix}{\widetilde{\omega }}_{c}\end{Vmatrix}}^{2} + {\lambda }_{\partial } - {\lambda }_{\min }\left( \mathfrak{K}\right) {\alpha }_{s}\left( {\frac{1}{{\alpha }_{a}} + \frac{1}{{\alpha }_{c}}}\right) {\left( \begin{Vmatrix}\nabla {L}_{s}\left( \xi \right) \end{Vmatrix} - \frac{{\lambda }_{g}^{2}{\lambda }_{\varepsilon }^{2}{\begin{Vmatrix}{R}^{-1}\end{Vmatrix}}^{2}}{4{\lambda }_{\min }\left( \mathfrak{K}\right) }\right) }^{2} \tag{45}
+$$
+
+where ${\lambda }_{\partial } = {\lambda }_{3} + \frac{{\lambda }_{g}{}^{4}{\lambda }_{\varepsilon }{}^{4}{\left( \begin{Vmatrix}{R}^{-1}\end{Vmatrix}\right) }^{4}}{{16}{\lambda }_{\min }\left( \mathfrak{K}\right) }$ .
+
+As a result, if $\begin{Vmatrix}{\widetilde{\omega }}_{a}\end{Vmatrix} \geq \sqrt[4]{\frac{{\lambda }_{\partial }}{{\lambda }_{1}}}$ or $\begin{Vmatrix}{\widetilde{\omega }}_{c}\end{Vmatrix} \geq \sqrt{\frac{{\lambda }_{\partial }}{{\lambda }_{2}}}$ or $\begin{Vmatrix}{\nabla {L}_{s}\left( \xi \right) }\end{Vmatrix} \geq \sqrt{\frac{{\lambda }_{\partial }}{{\lambda }_{\min }\left( \mathfrak{K}\right) {\alpha }_{s}\left( {\frac{1}{{\alpha }_{a}} + \frac{1}{{\alpha }_{c}}}\right) }} + \frac{{\lambda }_{g}^{2}{\lambda }_{\varepsilon }^{2}{\begin{Vmatrix}{R}^{-1}\end{Vmatrix}}^{2}}{4{\lambda }_{\min }\left( \mathfrak{K}\right) }$ holds, then $\dot{S}\left( t\right) \leq 0$ is satisfied. Consequently, all signals are UUB.
+
+Case 2: Events are triggered. Consider the event-triggered controller (32) and the membership-degree update laws (34) and (35).
+
+Choosing the following Lyapunov function
+
+$$
+{S}_{e}\left( t\right) = \frac{1}{2{\alpha }_{a}}{\widetilde{\omega }}_{ae}^{\mathrm{T}}{\widetilde{\omega }}_{ae} + \frac{1}{2{\alpha }_{c}}{\widetilde{\omega }}_{ce}^{\mathrm{T}}{\widetilde{\omega }}_{ce} + \frac{{\alpha }_{s}}{{\alpha }_{a}}{L}_{s}\left( \xi \right) + \frac{{\alpha }_{s}}{{\alpha }_{c}}{L}_{s}\left( \xi \right) \tag{46}
+$$
+
+By the same argument as in Case 1, all signals can be shown to be UUB.
+
+Motivated by [14], the derivative of the event-triggered error can be written as
+
+$$
+\frac{d}{dt}\left| {\Gamma \left( t\right) }\right| = \frac{d}{dt}{\left( \Gamma \left( t\right) \times \Gamma \left( t\right) \right) }^{\frac{1}{2}} = \operatorname{sgn}\left( {\Gamma \left( t\right) }\right) \dot{\Gamma }\left( t\right) \leq \left| {{\dot{u}}^{ * }\left( {\xi \left( t\right) }\right) }\right| \tag{47}
+$$
+
+Because all signals are UUB, there exists a positive constant $\kappa$ such that
+
+$$
+\left| {{\dot{u}}^{ * }\left( {\xi \left( t\right) }\right) }\right| \leq \kappa \tag{48}
+$$
+
+According to the event-triggered mechanism (28) and (29), one can derive that $\Gamma \left( {t}_{d}\right) = 0$ and $\mathop{\lim }\limits_{{t \rightarrow {t}_{d + 1}^{ - }}}\left| {\Gamma \left( t\right) }\right| = \Delta \left| {{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) }\right| + M$ . Combining (47) and (48) and performing some mathematical operations, the minimal inter-execution time ${t}^{ * } = {t}_{d + 1} - {t}_{d}$ satisfies ${t}^{ * } \geq \frac{\Delta \left| {{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) }\right| + M}{\kappa },\forall t \in \left\lbrack {{t}_{d},{t}_{d + 1}}\right)$ . Consequently, Zeno behavior is excluded.
+
+§ VI. SIMULATION
+
+In this section, the training ship YUKUN of Dalian Maritime University is utilized to verify the validity and flexibility of the proposed event-triggered optimal control strategy. The parameters of YUKUN are as follows: the length between perpendiculars is ${105}\mathrm{\;m}$ , the beam is ${18}\mathrm{\;m}$ , the rudder area is ${11.46}\;{\mathrm{m}}^{2}$ , the loaded speed is ${16.7}\;\mathrm{kn}$ , the full amidships draft is ${5.2}\mathrm{\;m}$ , the fully loaded displacement is ${5735.5}\;{\mathrm{m}}^{3}$ , and the block coefficient is 0.5595. The maritime environment is set as: wind direction ${\psi }_{\text{wind}} = {30}^{ \circ }$ , wind scale $\mathcal{S} = 6$ , current direction ${\psi }_{\text{current}} = {30}^{ \circ }$ , current velocity ${v}_{\text{current}} = 5\;\mathrm{kn}$ .
+
+Accordingly, the following continuous-time ship dynamic system is considered
+
+$$
+\left\{ \begin{array}{l} {\dot{x}}_{1} = {x}_{2} \\ {\dot{x}}_{2} = - \frac{1}{T}\left( {{\alpha }_{s}{x}_{2} + {\beta }_{s}{x}_{2}{}^{3}}\right) + \frac{K}{T}\left( {u + {\delta }_{w}}\right) \\ y = {x}_{1} \end{array}\right. \tag{49}
+$$
+
+where ${x}_{1},{x}_{2} \in \mathbb{R}$ are the state variables and $u \in \mathbb{R}$ is the control input; the reference signal is ${x}_{1d} = \sin \left( {{\pi t}/{25}}\right)$ ; the rudder gain is $K = {0.314}$ and the time constant is $T = {62.387}$ ; the designed parameters are ${\alpha }_{s} = {100}$ and ${\beta }_{s} = {50}$ . The design parameters are chosen as ${\alpha }_{a} = {0.001},{\alpha }_{c} = 1,{\alpha }_{s} = {100000},R = {0.067},\Delta = {0.39},M = {0.001}$ . The initial state is set to ${x}_{0} = {\left\lbrack -{0.3},{2.1},{0.1},{0.03}\right\rbrack }^{\mathrm{T}}$ , and the initial membership degrees are set to ${\omega }_{a0} = {\left\lbrack -{3.4}, - 4, - {3.5}, - {1.8}, - 2,0, - {1.4}, - {0.8}, - {1.8}, - 2\right\rbrack }^{\mathrm{T}}$ and ${\omega }_{c0} = {\left\lbrack 1,{1.3},{1.5},{1.3},0,0,{1.5},3,{3.3},3\right\rbrack }^{\mathrm{T}}$ .
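+
+A minimal open-loop sketch of the course dynamics (49) under explicit Euler integration, using the model parameters listed above; the step size, the zero disturbance, and the zero placeholder input are assumptions (the paper drives the model with the event-triggered controller (32) instead).
+
+```python
+# Model parameters of eq. (49) from this section
+K, T = 0.314, 62.387            # rudder gain and time constant
+alpha_s, beta_s = 100.0, 50.0   # designed nonlinearity parameters
+
+def ship_step(x1, x2, u, delta_w=0.0, dt=0.01):
+    """One explicit-Euler step of the course dynamics (49)."""
+    dx1 = x2
+    dx2 = -(alpha_s * x2 + beta_s * x2 ** 3) / T + (K / T) * (u + delta_w)
+    return x1 + dt * dx1, x2 + dt * dx2
+
+# Open-loop sanity run from the paper's initial course and turn rate
+x1, x2 = -0.3, 2.1
+for k in range(1000):           # 10 s of simulated time
+    u = 0.0                     # placeholder input; the paper applies controller (32)
+    x1, x2 = ship_step(x1, x2, u)
+```
+
+With the control held at zero, the damping terms drive the turn rate ${x}_{2}$ toward zero; the closed-loop tracking behavior reported below requires the learned controller.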
+
+Simulation results are illustrated in Figs. 1-4. The tracking trajectory and error are shown in Fig. 1: under the designed event-triggered adaptive optimal controller, the ship course rapidly tracks the reference course within 10 seconds, and the tracking error converges to a bounded compact set around zero. Fig. 2 compares the general control input with the event-triggered control input and shows that, under the same conditions, the event-triggered controller outperforms the common controller: its numerical values are smaller, which verifies the effectiveness of the event-triggered mechanism in reducing mechanical wear and saving energy. Fig. 3 depicts the corresponding triggering instants, highlighting the cost-saving advantage of the event-triggered controller. Finally, Fig. 4 shows the convergence of the value function and policy function membership degrees, demonstrating that the membership-degree signals rapidly converge to a bounded range.
+
+
+Fig. 1. Trajectories of the course tracking error, actual course and reference course.
+
+
+Fig. 2. Trajectories of control input and event-triggered control input.
+
+§ VII. CONCLUSION
+
+In this article, an event-triggered optimal tracking control scheme has been proposed for uncertain nonlinear systems based on RL. An improved ADP technique combining the actor-critic algorithm with fuzzy logic systems has been employed to solve the HJB equation of the nominal system. To reduce actuator mechanical wear and save energy, an event-triggered mechanism has been adopted to update the controller. All signals are shown to be UUB via a Lyapunov analysis, and simulations verify the feasibility of the proposed scheme. In the future, we will study the tracking control problem based on deep reinforcement learning; multi-agent systems are also an interesting direction.
+
+
+Fig. 3. Inter-event times of ${u}_{e}$ .
+
+
+Fig. 4. Convergence situations of policy function degree of memberships ${\widehat{\omega }}_{a}$ and value function degree of memberships ${\widehat{\omega }}_{c}$ .
+
+§ ACKNOWLEDGMENT
+
+This work was supported in part by the Central Guidance on Local Science and Technology Development Fund of Liaoning Province (Grant No. 2023JH6/100100055); in part by the National Natural Science Foundation of China (Grant Nos. 52271360); in part by the Dalian Outstanding Young Scientific and Technological Talents Project (Grant No. 2023RY031); in part by the Basic Scientific Research Project of Liaoning Education Department (Grant No. JYTMS20230164); and in part by the Fundamental Research Funds for the Central Universities (Grant No. 3132024125).
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/98Wp0EAx6P/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/98Wp0EAx6P/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..e9b7b53d895aef939816983f85f706ec753aed84
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/98Wp0EAx6P/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,393 @@
+# Simulation Research on Time-Optimal Path Planning of UAV Utilizing the Flightmare Platform
+
+${1}^{\text{st }}$ Yuling Xin
+
+School of Automation Engineering
+
+University of Electronic Science
+
+and Technology of China
+
+Chengdu, China
+
+xinyuling01@163.com
+
+${2}^{\text{nd }}$ Xin Lu
+
+Yangtze Delta Region Institute (Huzhou)
+
+University of Electronic Science
+
+and Technology of China
+
+Huzhou, China
+
+luxin_uestc@163.com
+
+${3}^{\text{rd }}$ Fusheng ${\mathrm{{Li}}}^{ * }$
+
+School of Automation Engineering
+
+University of Electronic Science
+
+and Technology of China
+
+Chengdu, China
+
+lifusheng@uestc.edu.cn
+
+Abstract-This paper presents a study on time-optimal path planning and control for Unmanned Aerial Vehicles (UAVs) using fourth-order minimum snap trajectory generation and Nonlinear Model Predictive Control (NMPC) on the Flightmare simulation platform. Targeting the demands of fast flight in complex environments, a fourth-order polynomial trajectory planner is designed to minimize flight time while adhering to dynamical constraints. Integration with an NMPC and a PID controller enables precise tracking and dynamic adjustment of planned trajectories. Experimental results demonstrate that this method generates efficient and smooth flight trajectories, significantly reducing flight time while ensuring UAV stability and safety.
+
+Index Terms-Flightmare Platform, Fourth-Order Minimum Snap Trajectory Generation, High-Fidelity Simulation, UAV, NMPC
+
+## I. INTRODUCTION
+
+As Unmanned Aerial Vehicle (UAV) technology continues to evolve at a rapid pace, its applications have broadened significantly across diverse fields. UAVs, also known as drones, have become indispensable tools for tasks requiring high-speed, agile, and autonomous responses [1]. These include but are not limited to package delivery, search-and-rescue operations, aerial photography, environmental monitoring, and even military applications [2]. Within these applications, the ability to plan time-optimal flight paths that align seamlessly with UAV dynamics is paramount for improving overall performance and safety.
+
+Time-optimal path planning for UAVs is a complex problem that involves optimizing flight trajectories to minimize the total flight time while adhering to various constraints such as dynamical limitations, obstacle avoidance, and energy efficiency [3]. This optimization process not only ensures faster completion of missions but also enhances the stability and safety of the UAVs during operation.
+
+Traditional approaches to path planning for UAVs focus on generating collision-free paths but often fail to account for the intricate dynamics of the aircraft, leading to suboptimal flight performance [4]. To overcome this limitation, recent research has explored the integration of advanced trajectory planning and control techniques [9].
+
+The fourth-order minimum snap trajectory generation method optimizes the snap term (fourth derivative of the position) of the trajectory [15]. This approach ensures that the generated trajectories are both smooth and aggressive, which is crucial for achieving high-speed flight in complex environments. The integration of an NMPC and a PID controller further enhances the system's capabilities by dynamically adjusting control inputs based on real-time state feedback. This allows for precise tracking of the planned trajectory and resilience against uncertainties during flight.
+
+
+
+Fig. 1. Experimental results on the Flightmare simulation platform.
+
+The proposed framework is evaluated using the Flightmare simulation platform, a high-fidelity drone simulation based on the Unity engine. This platform offers precise physics modeling and flexible interfaces for algorithm development, making it an ideal testbed for validating the effectiveness of the proposed method. The experimental results demonstrate that the integration of fourth-order minimum snap trajectory generation with NMPC generates efficient and smooth flight trajectories, significantly reducing flight time while ensuring UAV stability and safety. The Flightmare experimental results are shown in Figure 1.
+
+## II. Problem Formulation
+
+## A. Agile High-speed Flight
+
+High-speed Unmanned Aerial Vehicles (UAVs) operating in complex environments face numerous challenges in trajectory generation and control. These challenges stem from the intricate dynamics of quadrotors, the stringent requirements on agility, and the need to adapt quickly to unexpected obstacles and environmental changes [1].
+
+In terms of trajectory generation, high-speed flight demands trajectories that are not only collision-free but also highly dynamic and aggressive to minimize flight time. Traditional trajectory planning methods, such as spline interpolation or simple waypoint navigation, often fail to generate trajectories that fully exploit the capabilities of the UAV, particularly at high speeds [4]. Minimizing flight time while adhering to strict dynamical constraints and avoiding obstacles is an NP-hard optimization problem that requires sophisticated algorithms to solve efficiently.
+
+Control of high-speed UAVs further complicates the problem due to the inherent nonlinearities and uncertainties in the system dynamics. Real-time adjustments are crucial to handle external disturbances, actuator saturation, and sensor noise. Moreover, the fast-changing environment necessitates a control scheme that can rapidly replan and adjust the trajectory on the fly to ensure safety and mission success.
+
+In summary, agile high-speed UAVs require:
+
+1) Trajectory generation algorithms that can produce smooth yet aggressive trajectories to minimize flight time under strict dynamical and environmental constraints.
+
+2) A robust control framework that can dynamically adjust control inputs based on real-time feedback to handle uncertainties and disturbances, ensuring precise tracking of the planned trajectory.
+
+## B. Optimal Problem
+
+Traditionally, optimal control problems in the context of UAVs aim to minimize a cost function subject to a set of constraints on the system dynamics and inputs. This formulation allows balancing multiple objectives, such as minimizing flight time, energy consumption, or control effort, while ensuring that the UAV operates within its physical and operational limits.
+
+Mathematically, an optimal control problem can be formulated as follows:
+
+$$
+\mathop{\min }\limits_{\mathbf{u}}\;{\int }_{{t}_{0}}^{{t}_{f}}{\mathcal{L}}_{a}\left( {\mathbf{x},\mathbf{u}}\right) {dt} \tag{1}
+$$
+
+$$
+\text{subject to}\;\mathbf{r}\left( {\mathbf{x},\mathbf{u},\mathbf{z}}\right) = 0
+$$
+
+$$
+\mathbf{h}\left( {\mathbf{x},\mathbf{u},\mathbf{z}}\right) \leq 0
+$$
+
+## III. DRONE MODELING
+
+## A. Nomenclature
+
+In this work, we establish a comprehensive mathematical framework for the quadrotor system. We define a world frame $W$ with an orthonormal basis $\left\{ {{x}_{W},{y}_{W},{z}_{W}}\right\}$ to represent the global environment. Additionally, a body frame $B$ with an orthonormal basis $\left\{ {{x}_{B},{y}_{B},{z}_{B}}\right\}$ is introduced to describe the robot's orientation and position. The body frame is attached to the quadrotor, with its origin aligned with the center of mass as illustrated in Fig. 2.
+
+Throughout the document, vectors are denoted in boldface, with a prefix indicating the frame of reference and a suffix specifying the vector's origin and terminus. For example, ${\mathbf{p}}_{WB}$ represents the position of the body frame $B$ relative to the world frame $W$ , expressed in world-frame coordinates.
+
+To represent the orientation of rigid bodies, including the robot, we employ quaternions. The time derivative of a quaternion ${\mathbf{q}}_{WB} = \left( {{q}_{w},{q}_{x},{q}_{y},{q}_{z}}\right)$ is governed by the skew-symmetric matrix $\Lambda \left( \omega \right)$ , where ${\mathbf{\omega }}_{B} = {\left( {\omega }_{x},{\omega }_{y},{\omega }_{z}\right) }^{T}$ represents the angular velocity.
+
+
+
+Fig. 2. Schematic diagrams of the quadrotor model being considered, along with the coordinate systems utilized.
+
+## B. Quadrotor Dynamics
+
+The drone is modeled as a rigid body with six degrees of freedom (DoF). The state vector $\mathbf{x} \in {\mathbb{R}}^{13}$ describing the evolution of the drone's configuration over time is given by:
+
+$$
+\mathbf{x} = \left\lbrack \begin{matrix} {\mathbf{p}}_{WB} \\ {\mathbf{v}}_{WB} \\ {\mathbf{q}}_{WB} \\ {\mathbf{\omega }}_{B} \end{matrix}\right\rbrack \text{ and }\mathbf{u} = \left\lbrack \begin{matrix} T \\ \mathbf{\tau } \end{matrix}\right\rbrack \tag{2}
+$$
+
+where ${\mathbf{p}}_{WB} \in {\mathbb{R}}^{3}$ is the position of the drone's center of mass in the world frame $W$ ; ${\mathbf{v}}_{WB} \in {\mathbb{R}}^{3}$ is the linear velocity of the drone in the world frame; ${\mathbf{q}}_{WB} \in {\mathbb{S}}^{3}$ is the unit quaternion representing the rotation from the body frame $B$ to the world frame $W$ ; and ${\mathbf{\omega }}_{B} \in {\mathbb{R}}^{3}$ is the angular velocity of the drone in the body frame. $T$ is the total thrust produced by the drone's rotors, and $\mathbf{\tau }$ is the total torque acting on the drone.
+
+$$
+\mathbf{J} = \left\lbrack \begin{matrix} {J}_{x} & 0 & 0 \\ 0 & {J}_{y} & 0 \\ 0 & 0 & {J}_{z} \end{matrix}\right\rbrack \tag{3}
+$$
+
+where ${J}_{x},{J}_{y}$ , and ${J}_{z}$ are the moments of inertia of the drone about its principal axes.
+
+$$
+T = \mathop{\sum }\limits_{{i = 1}}^{4}{f}_{i} \tag{4}
+$$
+
+where ${f}_{i}$ is the thrust produced by the i-th rotor.
+
+The time derivative of the state vector $\dot{\mathbf{x}}$ is governed by the following equations:
+
+$$
+\dot{\mathbf{x}} = f\left( {\mathbf{x},\mathbf{u}}\right) = \left\lbrack \begin{matrix} {\mathbf{v}}_{WB} \\ \frac{1}{m}\left( {m{\mathbf{g}}_{W} + {\mathbf{q}}_{WB} \odot {\mathbf{T}}_{B}}\right) \\ \frac{1}{2}\mathbf{\Lambda }\left( {\mathbf{\Omega }}_{B}\right) \cdot {\mathbf{q}}_{WB} \\ {\mathbf{J}}^{-1}\left( {\mathbf{\tau } - {\mathbf{\omega }}_{B} \times J{\mathbf{\omega }}_{B}}\right) \end{matrix}\right\rbrack \tag{5}
+$$
+
+where $\odot$ denotes the rotation of the body-frame thrust vector by the quaternion, ${\mathbf{T}}_{B}$ and $\mathbf{\tau }$ are the total force and torque acting on the drone, respectively, $m$ is the mass of the drone, $\mathbf{J} \in {\mathbb{R}}^{3 \times 3}$ is the inertia matrix, and ${\mathbf{g}}_{W} = {\left\lbrack 0,0, - {9.81}\right\rbrack }^{T}\mathrm{\;m}/{\mathrm{s}}^{2}$ is the gravitational acceleration in the world frame.
+
+Here $\mathbf{\Lambda }\left( \omega \right)$ is the skew-symmetric matrix of the angular velocity, given by:
+
+$$
+\mathbf{\Lambda }\left( \omega \right) = \left\lbrack \begin{matrix} 0 & - {\omega }_{x} & - {\omega }_{y} & - {\omega }_{z} \\ {\omega }_{x} & 0 & {\omega }_{z} & - {\omega }_{y} \\ {\omega }_{y} & - {\omega }_{z} & 0 & {\omega }_{x} \\ {\omega }_{z} & {\omega }_{y} & - {\omega }_{x} & 0 \end{matrix}\right\rbrack \tag{6}
+$$
+
+The torque $\tau$ and total thrust $T$ are related to the individual i-th rotor thrust ${f}_{i}$ as:
+
+$$
+{\mathbf{T}}_{B} = \left\lbrack \begin{array}{l} 0 \\ 0 \\ T \end{array}\right\rbrack \text{and}\tau = \left\lbrack \begin{matrix} \frac{l}{\sqrt{2}}\left( {{f}_{1} - {f}_{2} - {f}_{3} + {f}_{4}}\right) \\ \frac{l}{\sqrt{2}}\left( {-{f}_{1} - {f}_{2} + {f}_{3} + {f}_{4}}\right) \\ {c}_{\tau }\left( {{f}_{1} - {f}_{2} + {f}_{3} - {f}_{4}}\right) \end{matrix}\right\rbrack \tag{7}
+$$
+
+## IV. Path Generation
+
+In this section, we discuss the methods used for generating time-optimal paths for autonomous drone racing. Specifically, we focus on polynomial trajectory planning, particularly the use of fourth-order polynomials to minimize the snap of the trajectory, as this objective leads to aggressive and smooth trajectories suitable for drone racing.
+
+## A. Polynomial Trajectory Planning
+
+Polynomial trajectory planning leverages the differential flatness property of quadrotors to simplify full-state trajectory planning to a problem of planning only a few flat outputs (typically position and yaw) [14]. By representing the trajectory as a polynomial, we can efficiently compute the control inputs that achieve the desired trajectory [15].
+
+1) Minimizing Snap: To generate aggressive and smooth trajectories, the objective is to minimize the snap (fourth-order derivative of position) of the trajectory [15] [16]. The snap $s\left( t\right)$ of a polynomial trajectory $p\left( t\right) = {a}_{0} + {a}_{1}t + {a}_{2}{t}^{2} + {a}_{3}{t}^{3} + {a}_{4}{t}^{4}$ can be written as:
+
+$$
+s\left( t\right) = {p}^{\left( 4\right) }\left( t\right) = {24}{a}_{4} \tag{8}
+$$
+
+where ${p}^{\left( 4\right) }\left( t\right)$ denotes the fourth-order derivative of $p\left( t\right)$ with respect to time $t$ ; note that the snap of a quartic polynomial is constant.
+
+The optimization problem can then be formulated as finding the polynomial coefficients ${a}_{0},{a}_{1},{a}_{2},{a}_{3},{a}_{4}$ that minimize the integral of the square of the snap over the trajectory duration $T$ :
+
+$$
+\mathop{\min }\limits_{{{a}_{0},{a}_{1},{a}_{2},{a}_{3},{a}_{4}}}{\int }_{0}^{T}s{\left( t\right) }^{2}{dt} = {\int }_{0}^{T}{\left( {24}{a}_{4}\right) }^{2}{dt} \tag{9}
+$$
+
+However, in practice, we often minimize the maximum snap or add additional constraints and costs related to trajectory duration, smoothness, and feasibility. The full optimization problem includes constraints on the initial and final states of the drone (position, velocity, acceleration, and jerk) as well as any intermediate waypoints or obstacle avoidance constraints.
+
+2) Time Allocation: Finding the optimal time allocation along the trajectory (i.e., determining how fast the drone should travel through each segment) is crucial for achieving minimum lap times. This is typically done by optimizing the polynomial coefficients jointly with the trajectory duration $T$ :
+
+$$
+\mathop{\min }\limits_{{{a}_{0},{a}_{1},{a}_{2},{a}_{3},{a}_{4}, T}}\left( {{\int }_{0}^{T}s{\left( t\right) }^{2}{dt} + \lambda \cdot T}\right) \tag{10}
+$$
+
+where $\lambda$ is a weight factor balancing the snap minimization and the total trajectory time.
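The trade-off in Eq. (10) can be illustrated with a coarse sweep over candidate durations. The closed-form cost below follows from the rest-to-rest quartic (where $a_4 = -3d/T^4$ for displacement $d$, so the snap integral is $(72d)^2/T^7$); this is an illustrative simplification, not the paper's joint optimization over coefficients and time:

```python
import numpy as np

def time_allocation(d, lam, T_grid):
    """Pick the duration T minimizing Eq. (10) over a grid of candidates.

    For the rest-to-rest quartic with displacement d, a4 = -3 d / T^4,
    so the snap integral is (24 a4)^2 T = (72 d)^2 / T^7; lam trades
    this smoothness cost against the total time T."""
    J = (72.0 * d) ** 2 / T_grid ** 7 + lam * T_grid
    return T_grid[np.argmin(J)]

# Longer segments receive more time; a larger lam favors shorter durations.
T_grid = np.linspace(0.5, 10.0, 2000)
T_short = time_allocation(1.0, 1.0, T_grid)
T_long = time_allocation(2.0, 1.0, T_grid)
```

Setting the derivative of the cost to zero gives the continuous optimum $T^* = \left(7(72d)^2/\lambda\right)^{1/8}$, which the grid search recovers to within the grid spacing.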
+
+## B. Implementation
+
+Implementing a fourth-order polynomial trajectory planner involves solving the optimization problem described above. This can be done using numerical optimization techniques such as quadratic programming or nonlinear optimization solvers. The resulting trajectory is then used as a reference for the low-level controller to track.
+
+In this paper, we adopt the polynomial trajectory planning approach to generate optimal paths. This method generates time-optimal trajectories by minimizing the snap of the trajectory.
+
+In summary, polynomial trajectory planning with a focus on minimizing the snap of the trajectory is a powerful method for generating time-optimal and feasible paths for autonomous drone racing. This approach leverages the differential flatness property of quadrotors and enables the use of efficient optimization techniques to find optimal trajectories in real time.
+
+## V. Model Predictive Control
+
+Model Predictive Control (MPC) is a powerful technique for controlling complex systems with dynamical constraints [17]. For agile quadrotor flight, Nonlinear Model Predictive Control (NMPC) is particularly suited due to its ability to handle nonlinear dynamics and constraints effectively [9]. In this section, we detail the formulation and implementation of NMPC for quadrotor control.
+
+## A. NMPC Formulation
+
+The NMPC generates control inputs by solving a finite-time optimal control problem (OCP) over a receding horizon. The objective is to minimize the tracking error between the predicted states and reference states, while adhering to the system dynamics and constraints [5]. The optimization problem can be formulated as follows:
+
+$$
+\begin{aligned} \mathop{\min }\limits_{\mathbf{u}}\;{\mathcal{L}}_{a} = \; & {\overline{\mathbf{x}}}_{N}^{T}{Q}_{N}{\overline{\mathbf{x}}}_{N} + \mathop{\sum }\limits_{{i = 1}}^{{N - 1}}\left( {{\overline{\mathbf{x}}}_{i}^{T}{Q}_{i}{\overline{\mathbf{x}}}_{i} + {\overline{\mathbf{u}}}_{i}^{T}{R}_{i}{\overline{\mathbf{u}}}_{i}}\right) \\ \text{s.t.}\;\; & {\mathbf{x}}_{0} = {\mathbf{x}}_{\text{init }}, \\ & {\mathbf{x}}_{k + 1} = f\left( {{\mathbf{x}}_{k},{\mathbf{u}}_{k}}\right) , \\ & {\mathbf{x}}_{k} \in \left\lbrack {{\mathbf{x}}_{\min },{\mathbf{x}}_{\max }}\right\rbrack , \\ & {\mathbf{u}}_{k} \in \left\lbrack {{\mathbf{u}}_{\min },{\mathbf{u}}_{\max }}\right\rbrack \end{aligned} \tag{11}
+$$
+
+where ${\overline{\mathbf{x}}}_{N}^{T}{Q}_{N}{\overline{\mathbf{x}}}_{N}$ is the terminal cost, ${\overline{\mathbf{x}}}_{i}^{T}{Q}_{i}{\overline{\mathbf{x}}}_{i}$ and ${\overline{\mathbf{u}}}_{i}^{T}{R}_{i}{\overline{\mathbf{u}}}_{i}$ are the stage costs, $f\left( {{\mathbf{x}}_{k},{\mathbf{u}}_{k}}\right)$ represents the discrete-time quadrotor dynamics, and ${Q}_{i},{R}_{i}$ , and ${Q}_{N}$ are positive definite weight matrices. The constraints ensure that the control inputs and angular velocities remain within specified bounds. The deviations are defined as $\overline{\mathbf{x}} = \mathbf{x} - {\mathbf{x}}_{\text{ref }}$ and $\overline{\mathbf{u}} = \mathbf{u} - {\mathbf{u}}_{\text{ref }}$ .
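The objective of Eq. (11) can be evaluated for candidate trajectories as in the plain NumPy sketch below. This shows only the cost; in practice the full constrained problem is handed to a solver (the paper uses the ACADO Toolkit), and the function name and array layout are our choices:

```python
import numpy as np

def nmpc_cost(X, U, Xref, Uref, Q, R, QN):
    """Quadratic NMPC tracking cost of Eq. (11).

    X: (N+1, nx) predicted states, U: (N, nu) inputs; the cost is quadratic
    in the deviations x_bar = x - x_ref and u_bar = u - u_ref."""
    N = len(U)
    cost = 0.0
    for i in range(1, N):                     # stage costs, i = 1 .. N-1
        xb = X[i] - Xref[i]
        ub = U[i] - Uref[i]
        cost += xb @ Q @ xb + ub @ R @ ub
    xbN = X[N] - Xref[N]                      # terminal cost
    return cost + xbN @ QN @ xbN
```

At each control step the solver minimizes this cost over the horizon, applies only the first input, and re-solves at the next step (the receding-horizon principle).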
+
+## B. Discretization of Dynamics
+
+The continuous-time quadrotor dynamics need to be discretized for use in the NMPC framework. This can be achieved using numerical integration schemes such as Euler integration or Runge-Kutta methods. In our implementation, we use multiple shooting as the transcription method and Runge-Kutta integration [18] to discretize the dynamics.
+
+$$
+{x}_{k + 1} = {f}_{\mathrm{{RK}}4}\left( {{x}_{k},{u}_{k},{\Delta t}}\right) \tag{12}
+$$
+
+where ${f}_{\mathrm{{RK}}4}$ is the Runge-Kutta 4th order integration function and ${\Delta t}$ is the discretization time step.
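A generic classical RK4 step, as used in Eq. (12). This is only a sketch (the actual discretization is handled by the ACADO toolchain), with the input held constant over the step, as is standard in zero-order-hold transcription:

```python
import numpy as np

def rk4_step(f, x, u, dt):
    """One classical 4th-order Runge-Kutta step for x_dot = f(x, u)."""
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

On the scalar test system $\dot{x} = -x$ a single step with $\Delta t = 0.1$ matches $e^{-0.1}$ to about $10^{-7}$, consistent with the method's $O(\Delta t^5)$ local error.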
+
+## C. Constraint Handling
+
+Efficient constraint handling within the optimization framework is crucial for real-time performance. The NMPC formulation includes constraints on the angular velocities ${\mathbf{\Omega }}_{\mathrm{B}}$ , thrust $T$ , velocities ${\mathbf{v}}_{WB}$ , and control inputs $\mathbf{u}$ , ensuring that the control actions remain within the physical limits of the quadrotor.
+
+
+
+Fig. 3. Block diagram of the Nonlinear Model Predictive Controller with PID inner loop controller.
+
+## D. Optimization Solver
+
+The resulting nonlinear optimization problem is solved using a suitable solver, such as Sequential Quadratic Programming (SQP). In our implementation, we utilize the ACADO Toolkit [6] with qpOASES [7] as the underlying quadratic program solver.
+
+## E. Integration with PID Controller
+
+While NMPC provides a powerful framework for trajectory optimization and control, a PID controller complements it for enhanced stability and responsiveness. The PID controller regulates the low-level system dynamics, such as the quadrotor's attitude, while the NMPC controller focuses on high-level trajectory tracking. The integration of the two controllers is illustrated in Figure 3, where the NMPC controller generates the desired setpoints for the PID controller based on the time-optimal trajectory. The controller gains and parameters for the NMPC and PID controllers are summarized in Table I.
+
+By integrating the PID and NMPC controllers, we can achieve a robust and responsive control system that can dynamically adjust to changes in the environment and mission requirements.
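A minimal sketch of the inner-loop PID law with the gains of Table I. Whether the gains are applied per axis is our assumption; a real attitude loop would also wrap angle errors and saturate the output:

```python
class PID:
    """Textbook PID with backward-difference derivative; gains follow
    Table I (Kp=50, Ki=1, Kd=0.01, dt=50 ms)."""

    def __init__(self, kp=50.0, ki=1.0, kd=0.01, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt                  # accumulated error
        deriv = (err - self.prev_err) / self.dt         # error rate
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

Here the NMPC supplies `setpoint` at each control step, and the PID output is the low-level actuator command.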
+
+TABLE I
+
+CONTROLLER GAINS AND PARAMETERS COMPARISON
+
+| NMPC Parameter | Value | PID Parameter | Value |
+| --- | --- | --- | --- |
+| $Q$ | diag(200, 200, 500) | ${K}_{p}$ | 50 |
+| $R$ | diag(10, 50) | ${K}_{i}$ | 1 |
+| ${dt}$ | 50 ms | ${K}_{d}$ | 0.01 |
+| $N$ | 20 | | |
+
+## VI. FLIGHTMARE
+
+In this section, we introduce the Flightmare [8] simulation platform and discuss its advantages for validating the proposed time-optimal path planning and control framework. Flightmare is a high-fidelity quadrotor simulator designed for research and development, offering a range of features that make it an ideal testbed for evaluating UAV algorithms. We highlight the platform's unique capabilities and discuss the experimental setup used to validate the proposed method.
+
+## A. Comparison of Quadrotor Simulators
+
+In contrast to Hector [10], FlightGoggles [11], and AirSim [12] from Table II, Flightmare offers a unique combination of features that make it well-suited for UAV research. Flightmare's rendering engine is based on Unity, providing a flexible and high-speed rendering environment that can be tailored to the user's needs. Its physics simulation engine is highly configurable, supporting a range of dynamics from simple models to realistic quadrotor behavior. Flightmare is the only simulator among those compared that provides point cloud extraction and an RL API, making it particularly suited for tasks requiring 3D environmental information and reinforcement-learning-based control policies. Additionally, Flightmare can simulate multiple vehicles concurrently, facilitating research on multi-drone applications. For these reasons, Flightmare is chosen as the simulation platform for validating the proposed method.
+
+TABLE II
+
+A Comparison of Flightmare to Other Open-Source Quadrotor Simulators
+
+| Simulator | Rendering | Dynamics | Sensor Suite | Point Cloud | RL API | Vehicles |
+| --- | --- | --- | --- | --- | --- | --- |
+| Hector [10] | OpenGL | Gazebo-based | IMU, RGB | ✘ | ✘ | Single |
+| FlightGoggles [11] | Unity | Flexible | IMU, RGB | ✘ | ✘ | Single |
+| AirSim [12] | Unreal Engine | PhysX | IMU, RGB, Depth, Seg | ✘ | ✘ | Multiple |
+| Flightmare [8] | Unity | Flexible | IMU, RGB, Depth, Seg | ✓ | ✓ | Multiple |
+
+## B. Advantages of the Flightmare Platform
+
+1) Decoupled Rendering and Physics Engine: One of the key strengths of Flightmare lies in its decoupled architecture, where the Unity-based rendering engine [19] is separated from the physics simulation engine. This design enables Flightmare to achieve remarkable performance: rendering speeds of up to 230 Hz and physics simulation frequencies of up to 200,000 Hz on a standard laptop [8]. The separation also allows users to flexibly adjust the balance between visual fidelity and simulation speed, tailored to specific research needs.
+
+2) Flexible Sensor Suite: Flightmare comes equipped with a rich and configurable sensor suite, including IMU, RGB cameras with ground-truth depth and semantic segmentation, range finders, and collision detection capabilities. This enables researchers to simulate a wide range of sensing modalities, critical for developing and testing perception-driven algorithms. Furthermore, Flightmare provides APIs to extract the full 3D point cloud of the simulated environment, facilitating path planning and obstacle avoidance tasks.
+
+3) Scalability and Parallel Simulation: The platform's flexibility extends to supporting large-scale simulations, enabling the parallel simulation of hundreds of quadrotors. This feature is invaluable for reinforcement learning applications, where data efficiency is crucial. By simulating multiple agents in parallel, Flightmare allows for rapid data collection, significantly accelerating the training process for control policies.
+
+4) Open-Source and Modular Design: Flightmare's open-source nature and modular design encourage collaboration and extendibility. The platform provides a clear and well-documented API, facilitating integration with existing research tools and libraries. The modular structure also makes it easy to swap out components, such as the physics engine or rendering backend, based on the specific research requirements. In this work, we use the RotorS [13] as the underlying quadrotor dynamics model in Flightmare, demonstrating the platform's flexibility and modularity.
+
+
+
+Fig. 4. Block diagram of the integration of control algorithms with Flightmare.
+
+## VII. EXPERIMENTS
+
+In this section, we present the experimental setup and results of the proposed time-optimal path planning and control framework for autonomous drone racing. The integration of polynomial trajectory planning and NMPC is validated in a simulated environment using the Flightmare platform. The results demonstrate the effectiveness of the proposed method in generating efficient and smooth flight trajectories, enabling UAVs to navigate precisely and stably along planned paths.
+
+## A. Experimental Setup
+
+To evaluate the proposed time-optimal path planning and control framework on the Flightmare simulation platform, we first design the control flow shown in Fig. 4. Flightmare decouples the rendering and physics engines; the interface between the rendering engine and the quadrotor dynamics is implemented using the high-performance asynchronous messaging library ZeroMQ [20].
+
+The quadrotor configurations used in the simulation are shown in Table III.
+
+## B. Trajectory Tracking Performance on a Given Path
+
+To evaluate the trajectory tracking performance of the proposed framework, we first consider a simple scenario where the drone is required to track a given path. The path is defined as a spiral ascent trajectory given by:
+
+$$
+\mathbf{p}\left( t\right) = \left\lbrack \begin{matrix} r\left( t\right) \cos \left( {\omega t}\right) \\ r\left( t\right) \sin \left( {\omega t}\right) \\ {v}_{z}t \end{matrix}\right\rbrack \tag{13}
+$$
+
+where $r\left( t\right) = {r}_{0} + {v}_{r}t$ is the radius of the spiral, $\omega$ is the angular velocity, and ${v}_{z}$ is the vertical velocity. The drone is required to track this path while climbing at a constant vertical velocity.
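Sampling the spiral-ascent reference of Eq. (13) is straightforward; the parameter values below are illustrative placeholders, not those used in the experiments:

```python
import numpy as np

def spiral_reference(t, r0=1.0, v_r=0.1, omega=1.0, v_z=0.5):
    """Sample the spiral-ascent reference of Eq. (13) at times t.

    r(t) = r0 + v_r * t grows linearly while the drone climbs at v_z."""
    r = r0 + v_r * t
    return np.stack([r * np.cos(omega * t),
                     r * np.sin(omega * t),
                     v_z * t], axis=-1)

# A 20 s reference sampled at 20 Hz, fed to the tracker as waypoints.
t = np.linspace(0.0, 20.0, 400)
path = spiral_reference(t)   # shape (400, 3)
```

The resulting waypoint array serves as the reference trajectory for the NMPC tracking experiment.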
+
+TABLE III
+
+QUADROTOR CONFIGURATIONS
+
+| Parameter(s) | Value(s) |
+| --- | --- |
+| $m$ [kg] | 0.6 |
+| $l$ [m] | 0.125 |
+| ${J}_{x}$ [kg·m²] | 2.1e-3 |
+| ${J}_{y}$ [kg·m²] | 2.3e-3 |
+| ${J}_{z}$ [kg·m²] | 4.0e-3 |
+| $\left( {{T}_{\min },{T}_{\max }}\right)$ [N] | (0, 8.5) |
+| ${c}_{\tau }$ [N·m/(rad/s)²] | 2.1e-6 |
+| ${c}_{T}$ [N/(rad/s)²] | 1.2e-6 |
+
+The trajectory tracking performance of the proposed NMPC controller is shown in Fig. 5. In the figure, the pink dashed line represents the desired path, while the orange line represents the actual trajectory of the drone. The drone successfully tracks the spiral ascent trajectory, demonstrating the effectiveness of the proposed framework in generating smooth and accurate flight trajectories.
+
+The error between the desired path and the actual trajectory is shown in Fig. 6. The error remains within an acceptable range, indicating that the drone is able to track the desired path accurately.
+
+
+
+Fig. 5. Drone tracking the trajectory of a given spiral ascent path. The pink dashed line represents the desired path, while the orange line represents the actual trajectory of the drone.
+
+## C. Time-Optimal Path Planning for NMPC Controller
+
+In this experiment, the drone has to navigate through four gates in a time-optimal manner. The gates are placed at $\left( {-{10},0,2}\right) ,\left( {0,{10},4}\right) ,\left( {{10},0,2}\right)$ , and $\left( {0, - {10},2}\right)$ , respectively.
+
+
+
+Fig. 6. Error between the desired path and the actual trajectory of the drone. The top, middle, and bottom plots represent the error in the $x, y$ , and $z$ directions, respectively.
+
+The time-optimal path planning results are shown in Fig. 7 and Fig. 8. In these figures, the orange dashed line represents the time-optimal path generated by the polynomial trajectory planner of Section IV, and the pink line represents the actual trajectory of the drone under the NMPC controller. The drone successfully navigates through the four gates in a time-optimal manner, demonstrating the effectiveness of the proposed framework in generating aggressive and smooth flight trajectories.
+
+
+
+Fig. 7. Time-optimal path generation and NMPC tracking of the drone through four gates. The orange dashed line represents the time-optimal path, the pink line represents the actual tracking trajectory, and the four squares represent the positions of the gates.
+
+The tracking performance of the drone along the $x, y$ , and $z$ axes is shown in Fig. 9, indicating that the drone tracks the time-optimal path accurately in all three axes.
+
+## VIII. CONCLUSION
+
+This paper presents a comprehensive framework for time-optimal path generation and control of Unmanned Aerial Vehicles (UAVs) using fourth-order minimum snap trajectory generation and Nonlinear Model Predictive Control (NMPC). The framework is designed to address the challenges of agile high-speed flight in autonomous drone racing, aiming to minimize flight time while adhering to strict dynamical constraints.
+
+
+
+Fig. 8. Top view of the time-optimal path generation and NMPC tracking of the drone through four gates.
+
+
+
+Fig. 9. Tracking performance of the drone through four gates in the $x, y, z$ axis. The top, middle, and bottom plots represent the tracking performance in the $x, y, z$ axis, respectively. The horizontal error indicates the control delay.
+
+The proposed method uses fourth-order polynomial trajectory generation to produce smooth yet aggressive trajectories. By minimizing the snap term (the fourth derivative of position), the generated trajectories are optimized for high-speed performance while remaining feasible and safe. The integration of the NMPC controller further enhances the system's capabilities by dynamically adjusting control inputs based on real-time state feedback, enabling precise trajectory tracking and resilience against uncertainties during flight.
+
+The effectiveness of the proposed framework is evaluated using the Flightmare simulation platform, a high-fidelity drone simulator based on the Unity engine. The experimental results demonstrate that the integration of fourth-order minimum snap trajectory generation with NMPC generates efficient and smooth flight trajectories, significantly reducing flight time while ensuring UAV stability and safety. This approach is well-suited for autonomous UAV operations in complex environments, such as drone racing and aerial photography.
+
+Future work could further optimize the trajectory planning and control algorithms, explore adaptive control strategies, and investigate their application in real-world UAV platforms.
+
+## REFERENCES
+
+[1] Hanover D, Loquercio A, Bauersfeld L, Romero A, Penicka R, Song Y, et al. Autonomous Drone Racing: A Survey. IEEE Trans Robot. 2024;40:3044-67.
+
+[2] Loquercio A, Kaufmann E, Ranftl R, Müller M, Koltun V, Scaramuzza D. Learning high-speed flight in the wild. Sci Robot. 2021 Oct 13;6(59).
+
+[3] Romero A, Sun S, Foehn P, Scaramuzza D. Model Predictive Contouring Control for Time-Optimal Quadrotor Flight. IEEE Trans Robot. 2022 Dec;38(6):3340-56.
+
+[4] Foehn P, Romero A, Scaramuzza D. Time-optimal planning for quadrotor waypoint flight. Sci Robot. 2021 Jul 21;6(56).
+
+[5] Falanga D, Foehn P, Lu P, Scaramuzza D. PAMPC: Perception-Aware Model Predictive Control for Quadrotors. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE; 2018.
+
+[6] Houska B, Ferreau HJ, Diehl M. ACADO toolkit-An open-source framework for automatic control and dynamic optimization. Optim Control Appl Meth. 2010 May 25;32(3):298-312.
+
+[7] Ferreau HJ, Kirches C, Potschka A, Bock HG, Diehl M. qpOASES: a parametric active-set algorithm for quadratic programming. Math Prog Comp. 2014 Apr 30;6(4):327-63.
+
+[8] Song Y, Naji S, Kaufmann E, Loquercio A, Scaramuzza D. Flightmare: A Flexible Quadrotor Simulator. In: Conference on Robot Learning. 2020.
+
+[9] Sun S, Romero A, Foehn P, Kaufmann E, Scaramuzza D. A Comparative Study of Nonlinear MPC and Differential-Flatness-Based Control for Quadrotor Agile Flight. IEEE Trans Robot. 2022;1-17.
+
+[10] Kohlbrecher S, Meyer J, Graber T, Petersen K, Klingauf U, von Stryk O. Hector Open Source Modules for Autonomous Mapping and Navigation with Rescue Robots. In: RoboCup 2013: Robot World Cup XVII. Berlin, Heidelberg: Springer Berlin Heidelberg; 2014. p. 624-31.
+
+[11] Guerra W, Tal E, Murali V, Ryou G, Karaman S. FlightGoggles: Photorealistic Sensor Simulation for Perception-driven Robotics using Photogrammetry and Virtual Reality. In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE; 2019.
+
+[12] Shah S, Dey D, Lovett C, Kapoor A. AirSim: High-Fidelity Visual and Physical Simulation for Autonomous Vehicles. In: Field and Service Robotics. Cham: Springer International Publishing; 2017. p. 621-35.
+
+[13] Furrer F, Burri M, Achtelik M, Siegwart R. RotorS-A Modular Gazebo MAV Simulator Framework. In: Studies in Computational Intelligence. Cham: Springer International Publishing; 2016. p. 595-625.
+
+[14] Faessler M, Franchi A, Scaramuzza D. Differential Flatness of Quadrotor Dynamics Subject to Rotor Drag for Accurate Tracking of High-Speed Trajectories. IEEE Robot Autom Lett. 2018 Apr;3(2):620-6.
+
+[15] Mellinger D, Kumar V. Minimum Snap Trajectory Generation and Control for Quadrotors. In: 2011 IEEE International Conference on Robotics and Automation. IEEE; 2011.
+
+[16] Mellinger D, Michael N, Kumar V. Trajectory generation and control for precise aggressive maneuvers with quadrotors. Int J Rob Res. 2012 Jan 25;31(5):664-74.
+
+[17] Nguyen H, Kamel M, Alexis K, Siegwart R. Model Predictive Control for Micro Aerial Vehicles: A Survey. In: 2021 European Control Conference (ECC). IEEE; 2021.
+
+[18] Houska B, Ferreau HJ, Diehl M. An auto-generated real-time iteration algorithm for nonlinear MPC in the microsecond range. Automatica (Oxf). 2011 Oct;47(10):2279-85.
+
+[19] Unity3D Game Engine. https://unity3d.com/, 2019 [Online; accessed 28-February-2019].
+
+[20] ZeroMQ: High-performance brokerless messaging. https://zeromq.org [Online].
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/98Wp0EAx6P/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/98Wp0EAx6P/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..98a780272a726a1f8f9fa1429b3a70e5b08a6195
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/98Wp0EAx6P/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,414 @@
+§ SIMULATION RESEARCH ON TIME-OPTIMAL PATH PLANNING OF UAV UTILIZING THE FLIGHTMARE PLATFORM
+
+$1^{\text{st}}$ Yuling Xin
+
+School of Automation Engineering
+
+University of Electronic Science
+
+and Technology of China
+
+Chengdu, China
+
+xinyuling01@163.com
+
+$2^{\text{nd}}$ Xin Lu
+
+Yangtze Delta Region Institute (Huzhou)
+
+University of Electronic Science
+
+and Technology of China
+
+Huzhou, China
+
+luxin_uestc@163.com
+
+$3^{\text{rd}}$ Fusheng Li$^{*}$
+
+School of Automation Engineering
+
+University of Electronic Science
+
+and Technology of China
+
+Chengdu, China
+
+lifusheng@uestc.edu.cn
+
+Abstract-This paper presents a study on time-optimal path planning and control for Unmanned Aerial Vehicles (UAVs) using fourth-order minimum snap trajectory generation and Nonlinear Model Predictive Control (NMPC) on the Flightmare simulation platform. Targeting the demands of fast flight in complex environments, a fourth-order polynomial trajectory planner is designed to minimize flight time while adhering to dynamical constraints. Integration with an NMPC and a PID controller enables precise tracking and dynamic adjustment of planned trajectories. Experimental results demonstrate that this method generates efficient and smooth flight trajectories, significantly reducing flight time while ensuring UAV stability and safety.
+
+Index Terms-Flightmare Platform, Fourth-Order Minimum Snap Trajectory Generation, High-Fidelity Simulation, UAV, NMPC
+
+§ I. INTRODUCTION
+
+As Unmanned Aerial Vehicle (UAV) technology continues to evolve at a rapid pace, its applications have broadened significantly across diverse fields. UAVs, also known as drones, have become indispensable tools for tasks requiring high-speed, agile, and autonomous responses [1]. These include but are not limited to package delivery, search-and-rescue operations, aerial photography, environmental monitoring, and even military applications [2]. Within these applications, the ability to plan time-optimal flight paths that align seamlessly with UAV dynamics is paramount for improving overall performance and safety.
+
+Time-optimal path planning for UAVs is a complex problem that involves optimizing flight trajectories to minimize the total flight time while adhering to various constraints such as dynamical limitations, obstacle avoidance, and energy efficiency [3]. This optimization process not only ensures faster completion of missions but also enhances the stability and safety of the UAVs during operation.
+
+Traditional approaches to path planning for UAVs focus on generating collision-free paths but often fail to account for the intricate dynamics of the aircraft, leading to suboptimal flight performance [4]. To overcome this limitation, recent research has explored the integration of advanced trajectory planning and control techniques [9].
+
+The fourth-order minimum snap trajectory generation method optimizes the snap term (fourth derivative of the position) of the trajectory [15]. This approach ensures that the generated trajectories are both smooth and aggressive, which is crucial for achieving high-speed flight in complex environments. The integration of an NMPC and a PID controller further enhances the system's capabilities by dynamically adjusting control inputs based on real-time state feedback. This allows for precise tracking of the planned trajectory and resilience against uncertainties during flight.
+
+
+Fig. 1. Experimental results on the Flightmare simulation platform.
+
+The proposed framework is evaluated using the Flightmare simulation platform, a high-fidelity drone simulator based on the Unity engine. This platform offers precise physics modeling and flexible interfaces for algorithm development, making it an ideal testbed for validating the effectiveness of the proposed method. The experimental results demonstrate that the integration of fourth-order minimum snap trajectory generation with NMPC generates efficient and smooth flight trajectories, significantly reducing flight time while ensuring UAV stability and safety. The Flightmare experimental results are shown in Fig. 1.
+
+§ II. PROBLEM FORMULATION
+
+§ A. AGILE HIGH-SPEED FLIGHT
+
+High-speed Unmanned Aerial Vehicles (UAVs) operating in complex environments face numerous challenges in trajectory generation and control. These challenges stem from the intricate dynamics of quadrotors, the stringent requirements on agility, and the need to adapt quickly to unexpected obstacles and environmental changes [1].
+
+In terms of trajectory generation, high-speed flight demands trajectories that are not only collision-free but also highly dynamic and aggressive to minimize flight time. Traditional methods of trajectory planning, such as spline interpolation or simple waypoint navigation, often fail to generate trajectories that fully exploit the capabilities of the UAV, particularly at high speeds [4]. Minimizing the flight time while adhering to strict dynamical constraints and avoiding obstacles becomes an NP-hard optimization problem that requires sophisticated algorithms to solve efficiently.
+
+Control of high-speed UAVs further complicates the problem due to the inherent nonlinearities and uncertainties in the system dynamics. Real-time adjustments are crucial to handle external disturbances, actuator saturation, and sensor noise. Moreover, the fast-changing environment necessitates a control scheme that can rapidly replan and adjust the trajectory on the fly to ensure safety and mission success.
+
+In summary, agile high-speed UAVs require:
+
+1) Trajectory generation algorithms that can produce smooth yet aggressive trajectories to minimize flight time under strict dynamical and environmental constraints.
+
+2) A robust control framework that can dynamically adjust control inputs based on real-time feedback to handle uncertainties and disturbances, ensuring precise tracking of the planned trajectory.
+
+§ B. OPTIMAL PROBLEM
+
+Traditionally, optimal control problems in the context of UAVs aim to minimize a cost function subject to a set of constraints on the system dynamics and inputs. This formulation allows balancing multiple objectives, such as minimizing flight time, energy consumption, or control effort, while ensuring that the UAV operates within its physical and operational limits.
+
+Mathematically, an optimal control problem can be formulated as follows:
+
+$$
+\mathop{\min }\limits_{\mathbf{u}}\;{\int }_{{t}_{0}}^{{t}_{f}}{\mathcal{L}}_{a}\left( {\mathbf{x},\mathbf{u}}\right) {dt} \tag{1}
+$$
+
+$$
+\text{ subject to }\;\mathbf{r}\left( {\mathbf{x},\mathbf{u},\mathbf{z}}\right) = 0
+$$
+
+$$
+\mathbf{h}\left( {\mathbf{x},\mathbf{u},\mathbf{z}}\right) \leq 0
+$$
+
+§ III. DRONE MODELING
+
+§ A. NOMENCLATURE
+
+In this work, we establish a comprehensive mathematical framework for the quadrotor system. We define a world frame $W$ with an orthonormal basis $\left\{ {{x}_{W},{y}_{W},{z}_{W}}\right\}$ to represent the global environment. Additionally, a body frame $B$ with an orthonormal basis $\left\{ {{x}_{B},{y}_{B},{z}_{B}}\right\}$ is introduced to describe the robot's orientation and position. The body frame is attached to the quadrotor, with its origin aligned with the center of mass as illustrated in Fig. 2.
+
+Throughout the document, vectors are denoted in boldface with a prefix indicating the frame of reference and a suffix specifying the vector's origin and terminus. For example, ${\mathbf{p}}_{WB}$ represents the position vector of the body frame $B$ relative to the world frame $W$, expressed in the coordinates of the world frame.
+
+To represent the orientation of rigid bodies, including the robot, we employ quaternions. The time derivative of a quaternion ${\mathbf{q}}_{WB} = \left( {{q}_{w},{q}_{x},{q}_{y},{q}_{z}}\right)$ is governed by the skew-symmetric matrix $\Lambda \left( \omega \right)$ , where ${\mathbf{\omega }}_{B} = {\left( {\omega }_{x},{\omega }_{y},{\omega }_{z}\right) }^{T}$ represents the angular velocity.
+
+
+Fig. 2. Schematic diagrams of the quadrotor model being considered, along with the coordinate systems utilized.
+
+§ B. QUADROTOR DYNAMICS
+
+The drone is modeled as a rigid body with six degrees of freedom (DoF). The state vector $\mathbf{x} \in {\mathbb{R}}^{13}$ describing the evolution of the drone's configuration over time is given by:
+
+$$
+\mathbf{x} = \left\lbrack \begin{matrix} {\mathbf{p}}_{WB} \\ {\mathbf{v}}_{WB} \\ {\mathbf{q}}_{WB} \\ {\mathbf{\omega }}_{B} \end{matrix}\right\rbrack \text{ and }\mathbf{u} = \left\lbrack \begin{matrix} T \\ \mathbf{\tau } \end{matrix}\right\rbrack \tag{2}
+$$
+
+where ${\mathbf{p}}_{WB} \in {\mathbb{R}}^{3}$ is the position of the drone's center of mass in the world frame $W$, ${\mathbf{v}}_{WB} \in {\mathbb{R}}^{3}$ is the linear velocity of the drone in the world frame, ${\mathbf{q}}_{WB}$ is the unit quaternion representing the rotation from the body frame $B$ to the world frame $W$, and ${\mathbf{\omega }}_{B} \in {\mathbb{R}}^{3}$ is the angular velocity of the drone in the body frame. $T$ is the total thrust produced by the drone's rotors, and $\mathbf{\tau}$ is the total torque acting on the drone. The inertia matrix $\mathbf{J}$ is assumed diagonal:
+
+$$
+\mathbf{J} = \left\lbrack \begin{matrix} {J}_{x} & 0 & 0 \\ 0 & {J}_{y} & 0 \\ 0 & 0 & {J}_{z} \end{matrix}\right\rbrack \tag{3}
+$$
+
+where ${J}_{x},{J}_{y}$, and ${J}_{z}$ are the moments of inertia of the drone about its principal axes. The total thrust $T$ is the sum of the individual rotor thrusts:
+
+$$
+T = \mathop{\sum }\limits_{{i = 1}}^{4}{f}_{i} \tag{4}
+$$
+
+where ${f}_{i}$ is the thrust produced by the i-th rotor.
+
+The time derivative of the state vector $\dot{\mathbf{x}}$ is governed by the following equations:
+
+$$
+\dot{\mathbf{x}} = f\left( {\mathbf{x},\mathbf{u}}\right) = \left\lbrack \begin{matrix} {\mathbf{v}}_{WB} \\ \frac{1}{m}\left( {m{\mathbf{g}}_{W} + {\mathbf{q}}_{WB} \odot {\mathbf{T}}_{B}}\right) \\ \frac{1}{2}\mathbf{\Lambda }\left( {\mathbf{\omega }}_{B}\right) \cdot {\mathbf{q}}_{WB} \\ {\mathbf{J}}^{-1}\left( {\mathbf{\tau } - {\mathbf{\omega }}_{B} \times \mathbf{J}{\mathbf{\omega }}_{B}}\right) \end{matrix}\right\rbrack \tag{5}
+$$
+
+where $\odot$ denotes rotation of a vector by a quaternion, ${\mathbf{T}}_{B}$ and $\tau$ are the total force and torque acting on the drone, respectively, $m$ is the mass of the drone, $\mathbf{J} \in {\mathbb{R}}^{3 \times 3}$ is the inertia matrix, and ${\mathbf{g}}_{W} = {\left\lbrack 0,0, - {9.81}\right\rbrack }^{T}\mathrm{\;m}/{\mathrm{s}}^{2}$ is the gravitational acceleration in the world frame.
+
+Here $\mathbf{\Lambda }\left( \omega \right)$ denotes the skew-symmetric matrix of the angular velocity, which is given by:
+
+$$
+\mathbf{\Lambda }\left( \omega \right) = \left\lbrack \begin{matrix} 0 & - {\omega }_{x} & - {\omega }_{y} & - {\omega }_{z} \\ {\omega }_{x} & 0 & {\omega }_{z} & - {\omega }_{y} \\ {\omega }_{y} & - {\omega }_{z} & 0 & {\omega }_{x} \\ {\omega }_{z} & {\omega }_{y} & - {\omega }_{x} & 0 \end{matrix}\right\rbrack \tag{6}
+$$
+
+The torque $\tau$ and total thrust $T$ are related to the individual rotor thrusts ${f}_{i}$ as:
+
+$$
+{\mathbf{T}}_{B} = \left\lbrack \begin{array}{l} 0 \\ 0 \\ T \end{array}\right\rbrack \text{ and }\tau = \left\lbrack \begin{matrix} \frac{l}{\sqrt{2}}\left( {{f}_{1} - {f}_{2} - {f}_{3} + {f}_{4}}\right) \\ \frac{l}{\sqrt{2}}\left( {-{f}_{1} - {f}_{2} + {f}_{3} + {f}_{4}}\right) \\ {c}_{\tau }\left( {{f}_{1} - {f}_{2} + {f}_{3} - {f}_{4}}\right) \end{matrix}\right\rbrack \tag{7}
+$$
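The rigid-body model of Eqs. (2)–(7) can be sketched numerically. The following Python fragment is an illustrative implementation, not the simulator's code; the mass and inertia values are taken from Table III, and the quaternion convention $(q_w, q_x, q_y, q_z)$ follows Section III-A.

```python
import numpy as np

m = 0.6                                  # mass [kg] (Table III)
J = np.diag([2.1e-3, 2.3e-3, 4.0e-3])    # inertia [kg*m^2] (Table III)
g_W = np.array([0.0, 0.0, -9.81])        # gravity in the world frame

def Lambda(w):
    """Quaternion kinematic matrix of Eq. (6) for angular velocity w."""
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [wx, 0.0, wz, -wy],
                     [wy, -wz, 0.0, wx],
                     [wz, wy, -wx, 0.0]])

def rotate(q, v):
    """Rotate vector v from body to world frame by unit quaternion q = (qw, qx, qy, qz)."""
    qw, qv = q[0], q[1:]
    return v + 2.0 * np.cross(qv, np.cross(qv, v) + qw * v)

def f(x, u):
    """Continuous-time dynamics x_dot = f(x, u) of Eq. (5).
    State x = [p(3), v(3), q(4), w(3)]; input u = [T, tau(3)]."""
    v, q, w = x[3:6], x[6:10], x[10:13]
    T, tau = u[0], u[1:4]
    T_B = np.array([0.0, 0.0, T])        # body-frame thrust, Eq. (7)
    p_dot = v
    v_dot = g_W + rotate(q, T_B) / m
    q_dot = 0.5 * Lambda(w) @ q
    w_dot = np.linalg.solve(J, tau - np.cross(w, J @ w))
    return np.concatenate([p_dot, v_dot, q_dot, w_dot])

# Hover check: thrust balancing gravity gives a (numerically) zero state derivative.
x_hover = np.concatenate([np.zeros(3), np.zeros(3), [1.0, 0.0, 0.0, 0.0], np.zeros(3)])
u_hover = np.array([m * 9.81, 0.0, 0.0, 0.0])
print(f(x_hover, u_hover))
```

At hover the required thrust $m\,g \approx 5.9$ N is comfortably below the $T_{\max} = 8.5$ N limit of Table III, so the operating point is feasible.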
+
+§ IV. PATH GENERATION
+
+In this section, we discuss the methods used for generating time-optimal paths for autonomous drone racing. Specifically, we focus on polynomial trajectory planning, particularly the use of fourth-order polynomials to minimize the snap of the trajectory, as this objective leads to aggressive and smooth trajectories suitable for drone racing.
+
+§ A. POLYNOMIAL TRAJECTORY PLANNING
+
+Polynomial trajectory planning leverages the differential flatness property of quadrotors to simplify full-state trajectory planning to a problem of planning only a few flat outputs (typically position and yaw) [14]. By representing the trajectory as a polynomial, we can efficiently compute the control inputs that achieve the desired trajectory [15].
+
+1) Minimizing Snap: To generate aggressive and smooth trajectories, the objective is to minimize the snap (fourth-order derivative of position) of the trajectory [15] [16]. The snap $s\left( t\right)$ of a polynomial trajectory $p\left( t\right) = {a}_{0} + {a}_{1}t + {a}_{2}{t}^{2} + {a}_{3}{t}^{3} + {a}_{4}{t}^{4}$ can be written as:
+
+$$
+s\left( t\right) = {p}^{\left( 4\right) }\left( t\right) = {24}{a}_{4} \tag{8}
+$$
+
+where ${p}^{\left( 4\right) }\left( t\right)$ denotes the fourth-order derivative of $p\left( t\right)$ with respect to time $t$ .
+
+The optimization problem can then be formulated as finding the polynomial coefficients ${a}_{0},{a}_{1},{a}_{2},{a}_{3},{a}_{4}$ that minimize the integral of the square of the snap over the trajectory duration $T$ :
+
+$$
+\mathop{\min }\limits_{{{a}_{0},{a}_{1},{a}_{2},{a}_{3},{a}_{4}}}{\int }_{0}^{T}s{\left( t\right) }^{2}{dt} = {\int }_{0}^{T}{\left( {24}{a}_{4}\right) }^{2}{dt} \tag{9}
+$$
+
+However, in practice, we often minimize the maximum snap or add additional constraints and costs related to trajectory duration, smoothness, and feasibility. The full optimization problem includes constraints on the initial and final states of the drone (position, velocity, acceleration, and jerk) as well as any intermediate waypoints or obstacle avoidance constraints.
+
+2) Time Allocation: Finding the optimal time allocation along the trajectory (i.e., determining how fast the drone should travel through each segment) is crucial for achieving minimum lap times. This is typically done by optimizing the polynomial coefficients jointly with the trajectory duration $T$ :
+
+$$
+\mathop{\min }\limits_{{{a}_{0},{a}_{1},{a}_{2},{a}_{3},{a}_{4},T}}\left( {{\int }_{0}^{T}s{\left( t\right) }^{2}{dt} + \lambda \cdot T}\right) \tag{10}
+$$
+
+where $\lambda$ is a weight factor balancing the snap minimization and the total trajectory time.
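To make the trade-off in Eq. (10) concrete, the sketch below fits, for each candidate duration $T$, the unique quartic matching position, velocity, and acceleration at $t = 0$ and position and velocity at $t = T$, and then grid-searches $T$ against the snap-plus-time cost. The boundary values and $\lambda$ are illustrative, not taken from the paper. Since the snap of a quartic is the constant $24 a_4$, the integral cost reduces to $576\,a_4^2\,T$.

```python
import numpy as np

def quartic_coeffs(p0, v0, acc0, pT, vT, T):
    """Unique quartic matching pos/vel/acc at t=0 and pos/vel at t=T."""
    a0, a1, a2 = p0, v0, acc0 / 2.0
    A = np.array([[T**3, T**4],
                  [3.0 * T**2, 4.0 * T**3]])
    b = np.array([pT - (a0 + a1 * T + a2 * T**2),
                  vT - (a1 + 2.0 * a2 * T)])
    a3, a4 = np.linalg.solve(A, b)
    return np.array([a0, a1, a2, a3, a4])

def cost(T, lam, p0=0.0, v0=0.0, acc0=0.0, pT=10.0, vT=0.0):
    """Eq. (10): integral of squared snap (576*a4^2*T for a quartic) plus lam*T."""
    a = quartic_coeffs(p0, v0, acc0, pT, vT, T)
    return 576.0 * a[4]**2 * T + lam * T

# 1-D surrogate of the joint problem: grid search over the segment duration T.
lam = 100.0
Ts = np.linspace(0.5, 10.0, 500)
T_star = Ts[np.argmin([cost(T, lam) for T in Ts])]
print(f"optimal duration T* ~ {T_star:.2f} s")   # near 3.7 s for these values
```

A small $\lambda$ tolerates long, gentle segments; a large $\lambda$ buys shorter flight time at the price of higher snap, which mirrors the aggressiveness knob described above.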
+
+§ B. IMPLEMENTATION
+
+Implementing a fourth-order polynomial trajectory planner involves solving the optimization problem described above. This can be done using numerical optimization techniques such as quadratic programming or nonlinear optimization solvers. The resulting trajectory is then used as a reference for the low-level controller to track.
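Under the quadratic snap objective, the boundary-constrained problem is an equality-constrained QP solvable in closed form via its KKT system. The sketch below uses illustrative boundary values, not the paper's. It also exposes the degenerate case: with only position and velocity constrained at both ends, the optimum is the cubic Hermite interpolant ($a_4 = 0$, zero snap), which is why richer boundary conditions or higher-order polynomials are used in practice.

```python
import numpy as np

T = 2.0                                  # segment duration [s] (illustrative)
p0, v0, pT, vT = 0.0, 0.0, 10.0, 0.0     # boundary conditions (illustrative)

# Objective (1/2) a^T H a with integral of squared snap = 576*T*a4^2.
H = np.zeros((5, 5))
H[4, 4] = 1152.0 * T

# Equality constraints A a = b: position and velocity at t = 0 and t = T.
A = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0],           # p(0)
    [0.0, 1.0, 0.0, 0.0, 0.0],           # p'(0)
    [1.0, T, T**2, T**3, T**4],          # p(T)
    [0.0, 1.0, 2.0 * T, 3.0 * T**2, 4.0 * T**3],  # p'(T)
])
b = np.array([p0, v0, pT, vT])

# Solve the KKT system of the equality-constrained QP.
K = np.block([[H, A.T], [A, np.zeros((4, 4))]])
sol = np.linalg.solve(K, np.concatenate([np.zeros(5), b]))
a = sol[:5]
print(a)   # a4 is (numerically) zero: the cubic Hermite interpolant is optimal
```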
+
+In this paper, we adopt the polynomial trajectory planning approach to generate optimal paths. This method generates time-optimal trajectories by minimizing the snap of the trajectory.
+
+In summary, polynomial trajectory planning with a focus on minimizing the snap of the trajectory is a powerful method for generating time-optimal and feasible paths for autonomous drone racing. This approach leverages the differential flatness property of quadrotors and enables the use of efficient optimization techniques to find optimal trajectories in real time.
+
+§ V. MODEL PREDICTIVE CONTROL
+
+Model Predictive Control (MPC) is a powerful technique for controlling complex systems with dynamical constraints [17]. For agile quadrotor flight, Nonlinear Model Predictive Control (NMPC) is particularly suited due to its ability to handle nonlinear dynamics and constraints effectively [9]. In this section, we detail the formulation and implementation of NMPC for quadrotor control.
+
+§ A. NMPC FORMULATION
+
+The NMPC generates control inputs by solving a finite-time optimal control problem (OCP) over a receding horizon. The objective is to minimize the tracking error between the predicted states and reference states, while adhering to the system dynamics and constraints [5]. The optimization problem can be formulated as follows:
+
+$$
+{\mathcal{L}}_{a} = {\overline{\mathbf{x}}}_{N}^{T}{Q}_{N}\overline{{\mathbf{x}}_{N}} + \mathop{\sum }\limits_{{i = 1}}^{{N - 1}}\left( {{\overline{\mathbf{x}}}_{i}^{T}{Q}_{i}\overline{{\mathbf{x}}_{i}} + {\overline{\mathbf{u}}}_{i}^{T}{R}_{i}{\overline{\mathbf{u}}}_{i}}\right)
+$$
+
+$$
+\text{ s.t. }
+$$
+
+$$
+{\mathbf{x}}_{0} = {\mathbf{x}}_{\text{ init }} \tag{11}
+$$
+
+$$
+{\mathbf{x}}_{k + 1} = f\left( {{\mathbf{x}}_{k},{\mathbf{u}}_{k}}\right) ,
+$$
+
+$$
+{\mathbf{x}}_{k} \in \left\lbrack {{\mathbf{x}}_{\min },{\mathbf{x}}_{\max }}\right\rbrack
+$$
+
+$$
+{\mathbf{u}}_{k} \in \left\lbrack {{\mathbf{u}}_{\min },{\mathbf{u}}_{\max }}\right\rbrack
+$$
+
+where ${\overline{\mathbf{x}}}_{N}^{T}{Q}_{N}{\overline{\mathbf{x}}}_{N}$ is the terminal cost, ${\overline{\mathbf{x}}}_{i}^{T}{Q}_{i}{\overline{\mathbf{x}}}_{i}$ and ${\overline{\mathbf{u}}}_{i}^{T}{R}_{i}{\overline{\mathbf{u}}}_{i}$ are the stage costs, $f\left( {{\mathbf{x}}_{k},{\mathbf{u}}_{k}}\right)$ represents the discrete-time quadrotor dynamics, and ${Q}_{i}$, ${R}_{i}$, and ${Q}_{N}$ are positive definite weight matrices. The constraints ensure that the control inputs and states remain within specified bounds. The tracking errors are defined as $\overline{\mathbf{x}} = \mathbf{x} - {\mathbf{x}}_{\text{ref}}$ and $\overline{\mathbf{u}} = \mathbf{u} - {\mathbf{u}}_{\text{ref}}$, respectively.
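The receding-horizon structure of Eq. (11) can be illustrated on a toy problem. The sketch below is not the paper's ACADO-based NMPC; it is an unconstrained linear-quadratic stand-in on a 1-D double integrator that keeps the terminal-plus-stage cost and the re-solve-every-step loop, with weights loosely echoing Table I.

```python
import numpy as np

dt, N = 0.05, 20                          # step and horizon length (Table I)
A = np.array([[1.0, dt], [0.0, 1.0]])     # 1-D position/velocity dynamics
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([200.0, 10.0])                # stage weights (loosely per Table I)
QN = np.diag([500.0, 10.0])               # terminal weight
r = 10.0                                  # input weight

def mpc_step(x0, x_ref):
    """Solve the finite-horizon LQ problem by batch least squares and
    return only the first input (receding horizon)."""
    # Stack the horizon: X = Phi @ x0 + Gamma @ U
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    Gamma = np.zeros((2 * N, N))
    for k in range(N):
        for j in range(k + 1):
            Gamma[2 * k:2 * k + 2, j:j + 1] = np.linalg.matrix_power(A, k - j) @ B
    Qbar = np.kron(np.eye(N), Q)
    Qbar[-2:, -2:] = QN                   # terminal cost on the last block
    H = Gamma.T @ Qbar @ Gamma + r * np.eye(N)
    g = Gamma.T @ Qbar @ (Phi @ x0 - np.tile(x_ref, N))
    U = np.linalg.solve(H, -g)
    return U[0]

x = np.array([1.0, 0.0])                  # start 1 m from the reference
for _ in range(100):                      # closed loop: re-solve every step
    u = mpc_step(x, np.zeros(2))
    x = A @ x + (B * u).ravel()
print(x)                                  # state driven near the origin
```

The real controller replaces the linear stacked model with the nonlinear dynamics of Eq. (5) and adds the box constraints, which is what requires an SQP solver instead of one linear solve.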
+
+§ B. DISCRETIZATION OF DYNAMICS
+
+The continuous-time quadrotor dynamics need to be discretized for use in the NMPC framework. This can be achieved using numerical integration schemes such as Euler integration or Runge-Kutta methods. In our implementation, we use multiple shooting as the transcription method and Runge-Kutta integration [18] to discretize the dynamics.
+
+$$
+{x}_{k + 1} = {f}_{\mathrm{{RK}}4}\left( {{x}_{k},{u}_{k},{\Delta t}}\right) \tag{12}
+$$
+
+where ${f}_{\mathrm{{RK}}4}$ is the Runge-Kutta 4th order integration function and ${\Delta t}$ is the discretization time step.
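A minimal implementation of the RK4 step of Eq. (12), verified on a toy system with a known closed-form solution; the quadrotor dynamics of Eq. (5) could be substituted for `f`.

```python
import numpy as np

def f_rk4(f, x, u, dt):
    """One classical fourth-order Runge-Kutta step, Eq. (12)."""
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Sanity check on a system with a known solution: the harmonic oscillator
# x'' = -x, whose state (x, x') returns to its initial value after one period.
osc = lambda x, u: np.array([x[1], -x[0]])
x, dt, steps = np.array([1.0, 0.0]), 2.0 * np.pi / 400.0, 400
for _ in range(steps):
    x = f_rk4(osc, x, None, dt)
print(x)   # back to approximately [1, 0] after one full period
```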
+
+§ C. CONSTRAINT HANDLING
+
+Efficient constraint handling within the optimization framework is crucial for real-time performance. The NMPC formulation includes constraints on the angular velocities ${\mathbf{\Omega }}_{\mathrm{B}}$ , thrust $T$ , velocities ${\mathbf{v}}_{WB}$ , and control inputs $\mathbf{u}$ , ensuring that the control actions remain within the physical limits of the quadrotor.
+
+
+Fig. 3. Block diagram of the Nonlinear Model Predictive Controller with PID inner loop controller.
+
+§ D. OPTIMIZATION SOLVER
+
+The resulting nonlinear optimization problem is solved using a suitable solver, such as Sequential Quadratic Programming (SQP). In our implementation, we utilize the ACADO Toolkit [6] with qpOASES [7] as the underlying quadratic program solver.
+
+§ E. INTEGRATION WITH PID CONTROLLER
+
+While NMPC provides a powerful framework for trajectory optimization and control, a PID controller can be used to complement the NMPC controller for enhanced stability and responsiveness. The PID controller can be used to regulate low-level system dynamics, such as the quadrotor's attitude, while the NMPC controller focuses on the high-level trajectory tracking. The integration of the two controllers is illustrated in Figure 3, where the NMPC controller generates the desired setpoints for the PID controller based on the time-optimal trajectory. The controller gains and parameters for the NMPC and PID controllers are summarized in Table I.
+
+By integrating the PID and NMPC controllers, we can achieve a robust and responsive control system that can dynamically adjust to changes in the environment and mission requirements.
+
+TABLE I
+
+CONTROLLER GAINS AND PARAMETERS COMPARISON
+
+| NMPC parameter | Value | PID parameter | Value |
+| --- | --- | --- | --- |
+| $Q$ | diag(200, 200, 500) | ${K}_{p}$ | 50 |
+| $R$ | diag(10, 50) | ${K}_{i}$ | 1 |
+| ${dt}$ | 50 ms | ${K}_{d}$ | 0.01 |
+| $N$ | 20 | n/a | n/a |
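A minimal discrete PID of the kind used in the inner loop of Fig. 3 might look as follows. The gains follow Table I, but the loop rate and the first-order plant with its time constant are stand-ins, not the quadrotor attitude dynamics.

```python
class PID:
    """Minimal discrete PID controller (gains per Table I; dt is illustrative)."""
    def __init__(self, kp=50.0, ki=1.0, kd=0.01, dt=0.002):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Closed loop on a toy first-order plant y' = (u - y) / tau (a stand-in,
# not the quadrotor): track a unit step for 4 s of simulated time.
pid, y, tau, dt = PID(), 0.0, 0.5, 0.002
for _ in range(2000):
    u = pid.update(1.0, y)
    y += dt * (u - y) / tau
print(y)   # settles close to the 1.0 setpoint; the small residual decays via the integral term
```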
+
+§ VI. FLIGHTMARE
+
+In this section, we introduce the Flightmare [8] simulation platform and discuss its advantages for validating the proposed time-optimal path planning and control framework. Flightmare is a high-fidelity quadrotor simulator designed for research and development, offering a range of features that make it an ideal testbed for evaluating UAV algorithms. We highlight the platform's unique capabilities and discuss the experimental setup used to validate the proposed method.
+
+§ A. COMPARISON OF QUADROTOR SIMULATORS
+
+As shown in Table II, in contrast to Hector [10], FlightGoggles [11], and AirSim [12], Flightmare offers a unique combination of features that makes it well-suited for UAV research. Flightmare's rendering engine is based on Unity, providing a flexible and high-speed rendering environment that can be tailored to the user's needs. The platform's physics simulation engine is highly configurable, supporting a range of dynamics from simple to real-world quadrotor behaviors. Flightmare is the only simulator among those compared that provides a point-cloud extraction feature and an RL API, making it particularly suited for tasks requiring 3D environmental information and reinforcement learning-based control policies. Additionally, Flightmare can simulate multiple vehicles concurrently, facilitating research on multi-drone applications. Overall, Flightmare is chosen as the simulation platform for validating the proposed method due to these unique features and capabilities.
+
+TABLE II
+
+A Comparison of Flightmare to Other Open-Source Quadrotor Simulators
+
+| Simulator | Rendering | Dynamics | Sensor Suite | Point Cloud | RL API | Vehicles |
+| --- | --- | --- | --- | --- | --- | --- |
+| Hector [10] | OpenGL | Gazebo-based | IMU, RGB | ✘ | ✘ | Single |
+| FlightGoggles [11] | Unity | Flexible | IMU, RGB | ✘ | ✘ | Single |
+| AirSim [12] | Unreal Engine | PhysX | IMU, RGB, Depth, Seg | ✘ | ✘ | Multiple |
+| Flightmare [8] | Unity | Flexible | IMU, RGB, Depth, Seg | ✓ | ✓ | Multiple |
+
+§ B. ADVANTAGES OF THE FLIGHTMARE PLATFORM
+
+1) Decoupled Rendering and Physics Engine: One of the key strengths of Flightmare lies in its decoupled architecture, where the Unity-based rendering engine [19] is separated from the physics simulation engine. This design choice enables Flightmare to achieve remarkable performance: rendering speeds of up to 230 Hz and physics simulation frequencies of up to 200,000 Hz on a standard laptop [8]. This separation also allows users to flexibly adjust the balance between visual fidelity and simulation speed, tailored to specific research needs.
+
+2) Flexible Sensor Suite: Flightmare comes equipped with a rich and configurable sensor suite, including IMU, RGB cameras with ground-truth depth and semantic segmentation, range finders, and collision detection capabilities. This enables researchers to simulate a wide range of sensing modalities, critical for developing and testing perception-driven algorithms. Furthermore, Flightmare provides APIs to extract the full 3D point cloud of the simulated environment, facilitating path planning and obstacle avoidance tasks.
+
+3) Scalability and Parallel Simulation: The platform's flexibility extends to supporting large-scale simulations, enabling the parallel simulation of hundreds of quadrotors. This feature is invaluable for reinforcement learning applications, where data efficiency is crucial. By simulating multiple agents in parallel, Flightmare allows for rapid data collection, significantly accelerating the training process for control policies.
+
+4) Open-Source and Modular Design: Flightmare's open-source nature and modular design encourage collaboration and extensibility. The platform provides a clear and well-documented API, facilitating integration with existing research tools and libraries. The modular structure also makes it easy to swap out components, such as the physics engine or rendering backend, based on specific research requirements. In this work, we use RotorS [13] as the underlying quadrotor dynamics model in Flightmare, demonstrating the platform's flexibility and modularity.
+
+
+Fig. 4. Block diagram of the integration of control algorithms with Flightmare.
+
+§ VII. EXPERIMENTS
+
+In this section, we present the experimental setup and results of the proposed time-optimal path planning and control framework for autonomous drone racing. The integration of polynomial trajectory planning and NMPC is validated in a simulated environment using the Flightmare platform. The results demonstrate the effectiveness of the proposed method in generating efficient and smooth flight trajectories, enabling UAVs to navigate precisely and stably along planned paths.
+
+§ A. EXPERIMENTAL SETUP
+
+To evaluate the proposed time-optimal path planning and control framework on the Flightmare simulation platform, we first design the control flow shown in Fig. 4. Flightmare decouples the rendering and physics engines, and the interface between the rendering engine and the quadrotor dynamics is implemented using the high-performance asynchronous messaging library ZeroMQ [20].
+
+The quadrotor configurations used in the simulation are shown in Table III.
+
+§ B. TRAJECTORY TRACKING PERFORMANCE ON A GIVEN PATH
+
+To evaluate the trajectory tracking performance of the proposed framework, we first consider a simple scenario where the drone is required to track a given path. The path is defined as a spiral ascent trajectory given by:
+
+$$
+\mathbf{p}\left( t\right) = \left\lbrack \begin{matrix} r\left( t\right) \cos \left( {\omega t}\right) \\ r\left( t\right) \sin \left( {\omega t}\right) \\ {v}_{z}t \end{matrix}\right\rbrack \tag{13}
+$$
+
+where $r\left( t\right) = {r}_{0} + {v}_{r}t$ is the radius of the spiral, $\omega$ is the angular velocity, and ${v}_{z}$ is the vertical velocity. The drone is required to track this path while ascending at a constant vertical velocity.
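Sampling the reference of Eq. (13) is straightforward; the parameter values below for $r_0$, $v_r$, $\omega$, and $v_z$ are illustrative, not taken from the experiment.

```python
import numpy as np

# Illustrative spiral-ascent parameters (not the paper's values).
r0, vr, omega, vz = 1.0, 0.1, 1.0, 0.5

def p_ref(t):
    """Reference position of Eq. (13) at time t."""
    r = r0 + vr * t
    return np.array([r * np.cos(omega * t), r * np.sin(omega * t), vz * t])

ts = np.linspace(0.0, 20.0, 401)
path = np.array([p_ref(t) for t in ts])
radii = np.linalg.norm(path[:, :2], axis=1)
print(radii[0], radii[-1], path[-1, 2])   # radius grows 1.0 -> 3.0, final height 10.0
```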
+
+TABLE III
+
+QUADROTOR CONFIGURATIONS
+
+| Parameter(s) | Value(s) |
+| --- | --- |
+| $m$ [kg] | 0.6 |
+| $l$ [m] | 0.125 |
+| ${J}_{x}$ [kg·m$^2$] | 2.1e-3 |
+| ${J}_{y}$ [kg·m$^2$] | 2.3e-3 |
+| ${J}_{z}$ [kg·m$^2$] | 4.0e-3 |
+| $({T}_{\min}, {T}_{\max})$ [N] | (0, 8.5) |
+| ${c}_{\tau}$ [N·m/(rad/s)$^2$] | 2.1e-6 |
+| ${c}_{T}$ [N/(rad/s)$^2$] | 1.2e-6 |
+
+The trajectory tracking performance of the proposed NMPC controller is shown in Fig. 5. In the figure, the pink dashed line represents the desired path, while the orange line represents the actual trajectory of the drone. The drone successfully tracks the spiral ascent trajectory, demonstrating the effectiveness of the proposed framework in generating smooth and accurate flight trajectories.
+
+The error between the desired path and the actual trajectory is shown in Fig. 6. The error remains within an acceptable range, indicating that the drone is able to track the desired path accurately.
+
+Fig. 5. Drone tracking the trajectory of a given spiral ascent path. The pink dashed line represents the desired path, while the orange line represents the actual trajectory of the drone.
+
+§ C. TIME-OPTIMAL PATH PLANNING FOR NMPC CONTROLLER
+
+In this experiment, the drone has to navigate through four gates in a time-optimal manner; the gates are placed at $\left( {-{10},0,2}\right) ,\left( {0,{10},4}\right) ,\left( {{10},0,2}\right)$ , and $\left( {0, - {10},2}\right)$ , respectively.
+
+Fig. 6. Error between the desired path and the actual trajectory of the drone. The top, middle, and bottom plots represent the error in the $x,y$ , and $z$ directions, respectively.
+
+The time-optimal path planning results are shown in Fig. 7 and Fig. 8. In these figures, the orange dashed line represents the time-optimal path generated by the polynomial trajectory planner described in Section IV, while the pink line represents the actual trajectory of the drone under the NMPC controller. The drone successfully navigates through the four gates in a time-optimal manner, demonstrating the effectiveness of the proposed framework in generating aggressive yet smooth flight trajectories.
+
+Fig. 7. Time-optimal path generation and NMPC tracking of the drone through four gates. The orange dashed line represents the time-optimal path, the pink line represents the actual tracking trajectory, and the four squares represent the positions of the gates.
+
+The per-axis tracking performance of the drone is shown in Fig. 9, which indicates that the drone can track the time-optimal path accurately along the $x, y$ , and $z$ axes.
+
+§ VIII. CONCLUSION
+
+This paper presents a comprehensive framework for time-optimal path generation and control of Unmanned Aerial Vehicles (UAVs) using fourth-order minimum snap trajectory generation and Nonlinear Model Predictive Control (NMPC). The framework is designed to address the challenges of agile high-speed flight in autonomous drone racing, aiming to minimize flight time while adhering to strict dynamical constraints.
+
+Fig. 8. Top view of the time-optimal path generation and NMPC tracking of the drone through four gates.
+
+Fig. 9. Tracking performance of the drone through four gates along the $x, y$ , and $z$ axes. The top, middle, and bottom plots show the $x, y$ , and $z$ components, respectively. The horizontal error indicates the control delay.
+
+The proposed method utilizes the fourth-order polynomial trajectory generation approach to generate smooth yet aggressive trajectories. By minimizing the snap term (the fourth derivative of position), the generated trajectories are optimized for high-speed performance while ensuring their feasibility and safety. The NMPC controller further enhances the system's capabilities by dynamically adjusting control inputs based on real-time state feedback, enabling precise trajectory tracking and resilience against uncertainties during flight.
+
+The effectiveness of the proposed framework is evaluated using the Flightmare simulation platform, a high-fidelity drone simulator based on the Unity engine. The experimental results demonstrate that the integration of fourth-order minimum snap trajectory generation with NMPC generates efficient and smooth flight trajectories, significantly reducing flight time while ensuring UAV stability and safety. This approach is well-suited for autonomous UAV operations in complex environments, such as drone racing and aerial photography.
+
+Future work could further optimize the trajectory planning and control algorithms, explore adaptive control strategies, and investigate their application in real-world UAV platforms.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/AQH0VuK6rp/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/AQH0VuK6rp/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..ff5db4b9dc1673922ad54fb8d59a7e087895f482
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/AQH0VuK6rp/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,417 @@
+# Synchronization of Coupled Delayed Discontinuous Systems via Event-Triggered Intermittent Control
+
+${1}^{\text{st }}$ Rongqiang Tang
+
+College of Electronics and Information Engineering
+
+Sichuan University
+
+Chengdu, Sichuan
+
+tangrongqiang@stu.scu.edu.cn
+
+${2}^{\text{nd }}$ Xinsong Yang*
+
+College of Electronics and Information Engineering
+
+Sichuan University
+
+Chengdu, Sichuan
+
+xinsongyang@scu.edu.cn
+
+Abstract-This paper focuses on the complete synchronization of coupled delayed discontinuous systems (DDSs). Without constraints on the derivatives of time delays, several new conditions are exploited to guarantee the global existence of Filippov solutions for DDSs. A nonsmooth intermittent control combined with an event-triggering strategy is then designed. The conspicuous feature of this control scheme is that the measurement error in the event-triggering mechanism is formulated in a linear form, which can reduce the computation burden compared to classical approaches. To address the challenges posed by Filippov solutions and intermittent control, novel analytical techniques, including an original lemma and a weighted-norm-based Lyapunov function, are developed so that sufficient synchronization conditions for DDSs are obtained. Finally, the effectiveness of the theoretical findings is confirmed by Hopfield neural networks.
+
+Index Terms-Discontinuous systems, event-triggered intermittent control, Filippov solution, synchronization, time delays.
+
+## I. INTRODUCTION
+
+Coupled discontinuous systems (DSs), modeled by interconnected differential equations with discontinuous right-hand sides, are a special type of complex network. Their applications span various areas of applied science and engineering, such as variable structure systems, neural networks [1], control synthesis [2], etc. Recently, there has been substantial attention on the dynamic behaviors of DSs with or without time delays, covering stability, stabilization, and synchronization [3]-[5].
+
+Considering the discontinuities of the states on the right-hand side of DSs, especially delayed DSs (DDSs), it is paramount to discuss the existence of Filippov solutions. Some limitations on time delays are necessary to ensure the existence of Filippov solutions for DDSs. For example, literature [1] considered DDSs with constant delays. Liu et al. [6] demanded that the state variables with time delays satisfy $\parallel z\left( {t - \sigma \left( t\right) }\right) \parallel \leq \parallel z\left( t\right) \parallel + \mathop{\max }\limits_{{1 \leq i \leq n}}\mathop{\max }\limits_{{-\sigma \leq s \leq 0}}\left\{ {{z}_{i}\left( s\right) }\right\}$ , where $z\left( t\right) \in {\mathbb{R}}^{n}$ is the state variable and $\sigma \left( t\right) \in \left\lbrack {0,\sigma }\right\rbrack$ is the time delay. Yang et al. [7], [8] provided sufficient criteria for the existence of global Filippov solutions for DDSs, based on the condition that the derivatives of time delays are less than 1. However, in reality, the derivatives of some time delays can exceed or equal 1, and the delays may even be non-differentiable in some cases. A fundamental question arises: What conditions guarantee the existence of Filippov solutions for DDSs when these constraints are removed?
+
+To study the synchronization of coupled DDSs (CDDSs), the basic idea is to transform CDDSs into uncertain systems using Filippov regularization and the measurable selection theorem, and then to address the corresponding issues for the uncertain systems [8]. Quasi-synchronization criteria for CDDSs have been obtained via smooth state feedback control [6], [9]. A nonsmooth control incorporating sign functions was proposed to achieve complete synchronization of CDDSs [7], where the sign function is used to mitigate the effects of uncertainties caused by Filippov solutions. Subsequent results on exponential, finite-time, and fixed-time synchronization of CDDSs have been published in [10]-[13]. However, little work has been done to achieve the complete synchronization of CDDSs via intermittent control. Actually, intermittent control offers better robustness and lower control cost than continuous control, as control signals can be artificially interrupted without affecting the final control purposes [14]-[18]. If intermittent control is adopted for complete synchronization of CDDSs, the main obstacle is that the uncertainties posed by Filippov solutions are difficult to cancel out during the interrupted intervals of the control signals. Developing new analytical methods to study the complete synchronization of CDDSs under intermittent control is therefore another motivation of this work.
+
+Event-triggered control has recently sparked increasing interest due to its ability to reduce computational overhead by updating the sampled signal based on a preset supervision mechanism [19]-[21]. To fully leverage the merits of the event-triggered strategy and intermittent control, this paper considers the complete synchronization of general CDDSs via a novel event-triggered intermittent control. The primary contributions of this work are:
+
+1) The existence of Filippov solutions of DDSs is discussed. Different from existing papers [1], [6]-[8], several harsh constraints on delays are removed.
+
+2) A novel lemma is developed to address the difficulties induced by intermittent control. Then, complete synchronization criteria for CDDSs with intermittent control are obtained for the first time.
+
+---
+
+This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant Nos. 62373262 and 62303336, in part by the Central guiding local science and technology development special project of Sichuan, in part by the Fundamental Research Funds for Central Universities under Grant No. 2022SCU12009, and in part by the Sichuan Province Natural Science Foundation of China (NSFSC) under Grant Nos. 2022NSFSC0541, 2022NSFSC0875, and 2023NSFSC1433. (Corresponding Author: Xinsong Yang)
+
+---
+
+3) A simple robust intermittent control scheme is designed by combining an event-triggered strategy with nonsmooth control. Unlike many event-triggered nonsmooth controls [12], [17], the measurement error (ME) in a linear form for the event-triggering mechanism (ETM) is considered, which facilitates easy computation (see Table I).
+
+Notation: Let ${\mathcal{D}}^{ + }\left\lbrack \cdot \right\rbrack$ denote the upper right Dini derivative operator and ${\mathbb{N}}_{k}^{j} \triangleq \{ k, k + 1,\ldots , j\}$ with $k < j \in \mathbb{N}$ . For $a \in {\mathbb{R}}^{n}$ , let $\operatorname{cl}{\left( {a}_{i}\right) }_{n} = {\left( {a}_{1},{a}_{2},\ldots ,{a}_{n}\right) }^{\top }$ and $\operatorname{dg}{\left( {a}_{i}\right) }_{n} = \operatorname{diag}\left( {{a}_{1},{a}_{2},\ldots ,{a}_{n}}\right)$ denote the column vector and the (block-)diagonal matrix built from the ${a}_{i}$ , and let $\operatorname{sg}\left( a\right) = \frac{a}{\parallel a\parallel }$ if $\parallel a\parallel \neq 0$ and $\operatorname{sg}\left( a\right) = 0$ otherwise. The other notations used in this paper are the same as those in [16].
+
+## II. Preliminaries
+
+In this paper, the problem of synchronization and control in an array of coupled DDSs is considered. Before starting the research works, several necessary preparations on the solution of DDSs and stability theorem are provided.
+
+## A. Filippov solution of DDSs
+
+Consider a DDS as follows:
+
+$$
+\dot{z}\left( t\right) = F\left( {z,{z}_{\sigma }}\right) ,\; z\left( \theta \right) = \varphi \left( \theta \right) ,\;\varphi \in \mathcal{C}\left( {\left\lbrack {-\sigma ,0}\right\rbrack ,{\mathbb{R}}^{n}}\right) . \tag{1}
+$$
+
+Here $F\left( {z,{z}_{\sigma }}\right) \triangleq {Cz}\left( t\right) + {Ah}\left( {z\left( t\right) }\right) + {Bg}\left( {z\left( {t - \sigma \left( t\right) }\right) }\right)$ , $z\left( t\right) \in {\mathbb{R}}^{n}$ denotes the state vector, $\sigma \left( t\right) \in \left\lbrack {0,\sigma }\right\rbrack$ is the bounded delay, $C, A = {\left( {a}_{ij}\right) }_{n \times n}$ , and $B = {\left( {b}_{ij}\right) }_{n \times n} \in {\mathbb{R}}^{n \times n}$ are known constant matrices, and the nonlinear functions $h\left( \cdot \right) , g\left( \cdot \right) : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{n}$ are continuous except on a set of smooth hypersurfaces [7]. Given a suitable initial function for system (1), its trajectory can establish the desired state, such as an equilibrium point, a chaotic orbit, or a nontrivial periodic orbit.
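+
+For intuition, a scalar instance of DDS (1) with sign-type (discontinuous) nonlinearities can be simulated with a delayed forward-Euler scheme; this is a minimal sketch, and the coefficients and constant initial history below are illustrative, not taken from the paper.
+
+```python
+import numpy as np
+
+# Forward-Euler simulation of a scalar instance of DDS (1):
+#   z'(t) = c z(t) + a h(z(t)) + b g(z(t - sigma)),  h = g = sign,
+# i.e. a discontinuous right-hand side. All parameters and the constant
+# initial history are illustrative placeholders.
+c, a, b = -1.0, 0.5, 0.3
+sigma, dt, T = 0.5, 1e-3, 5.0
+n_hist, n_steps = int(sigma / dt), int(T / dt)
+
+z = np.empty(n_hist + n_steps)
+z[:n_hist + 1] = 1.0                   # history z(s) = 1 on [-sigma, 0]
+for k in range(n_hist, n_hist + n_steps - 1):
+    z_delayed = z[k - n_hist]          # delayed state z(t - sigma)
+    z[k + 1] = z[k] + dt * (c * z[k] + a * np.sign(z[k]) + b * np.sign(z_delayed))
+# While z stays positive, z' = -z + 0.8, so the trajectory settles near 0.8.
+```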
+
+Due to the discontinuity of $\mathbf{a}\left( \cdot \right)$ with $\mathbf{a} = \{ h, g\}$ , classical solutions of DDS (1) do not exist. To further study the dynamical behaviors of DDS (1), this paper utilizes the framework of Filippov solutions, whose definition can be found in [6]-[8]. It follows that, for DDS (1), there exists a continuous function $z\left( t\right)$ on $\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack$ that is absolutely continuous on $\left\lbrack {0,\mathfrak{t}}\right\rbrack$ such that
+
+$$
+\dot{z}\left( t\right) = \mathbb{F}\left( {z,\gamma ,{\zeta }_{\sigma }}\right) ,\text{ a.a. }t \in \left\lbrack {0,\mathrm{t}}\right\rbrack , \tag{2}
+$$
+
+where $\mathbb{F}\left( {z,\gamma ,{\zeta }_{\sigma }}\right) = {Cz}\left( t\right) + {A\gamma }\left( t\right) + {B\zeta }\left( {t - \sigma \left( t\right) }\right) ,\gamma \left( t\right) \in$ $\mathrm{F}\{ h\left( {z\left( t\right) }\right) \}$ and $\zeta \left( {t - \sigma \left( t\right) }\right) \in \mathrm{F}\{ g\left( {z\left( {t - \sigma \left( t\right) }\right) }\right) \}$ are measurable functions, and $\mathrm{F}\{ \cdot \}$ is the Filippov set-valued map [22].
+
+For the Cauchy problem of DDS (1) in the sense of Filippov, this implies that there is a triple of functions $\left( {z\left( t\right) ,\gamma \left( t\right) ,\zeta \left( t\right) }\right)$ : $\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack \rightarrow {\mathbb{R}}^{n} \times {\mathbb{R}}^{n} \times {\mathbb{R}}^{n}$ such that $z\left( t\right)$ is a Filippov solution on $\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack$ with $\mathfrak{t} > 0$ and
+
+$$
+\left\{ \begin{array}{l} \dot{z}\left( t\right) = \mathbb{F}\left( {z,\gamma ,{\zeta }_{\sigma }}\right) ,\text{ a.a. }t \in \left\lbrack {0,\mathrm{t}}\right\rbrack , \\ \gamma \left( s\right) = \zeta \left( s\right) = \mathrm{F}\{ \phi \left( s\right) \} ,\text{ a.a. }s \in \left\lbrack {-\sigma ,0}\right\rbrack , \\ z\left( s\right) = \varphi \left( s\right) ,\forall s \in \left\lbrack {-\sigma ,0}\right\rbrack , \end{array}\right. \tag{3}
+$$
+
+where $\varphi \left( t\right)$ is a continuous function on $\left\lbrack {-\sigma ,0}\right\rbrack$ and $\phi \left( t\right)$ is a measurable selection function.
+
+The following lemma provides some mild conditions to ensure the existence of Filippov solutions for DDS (1).
+
+Lemma 1: Suppose that $\mathrm{a}\left( 0\right) = 0,\mathrm{a} = \{ h, g\}$ and there exist constants ${d}_{rj}^{\mathrm{a}} \geq 0$ and ${\widehat{d}}_{r}^{\mathrm{a}} \geq 0$ such that, for $\forall \mathbf{x} = \operatorname{cl}{\left( {x}_{i}\right) }_{n},\mathbf{y} = \operatorname{cl}{\left( {y}_{i}\right) }_{n} \in {\mathbb{R}}^{n}$ ,
+
+$\left( {\mathbf{A}}_{1}\right) : \left| {{\mathbf{a}}_{r}\left( \mathbf{x}\right) - {\mathbf{a}}_{r}\left( \mathbf{y}\right) }\right| \leq \mathop{\sum }\limits_{{j = 1}}^{n}{d}_{rj}^{\mathbf{a}}\left| {{x}_{j} - {y}_{j}}\right| + {\widehat{d}}_{r}^{\mathbf{a}}, r \in {\mathbb{N}}_{1}^{n}$ . Then, there is at least one Filippov solution $z\left( t\right)$ to DDS (1) on $\lbrack 0, + \infty )$ .
+
+Proof: The proof is similar to those in [7], [8] with slight changes; namely, the Cauchy problem in (3) is transformed into a fixed-point problem.
+
+Denote a map $\mathbb{G}\left( z\right) : \mathcal{C}\left( {\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack ,{\mathbb{R}}^{n}}\right) \rightarrow \mathcal{C}{\left( \left\lbrack -\sigma ,\mathfrak{t}\right\rbrack ,{\mathbb{R}}^{n}\right) }^{1}$ as:
+
+$$
+\mathbb{G}\left( z\right) = \left\{ \begin{array}{ll} {e}^{Ct}z\left( 0\right) + {\int }_{0}^{t}{e}^{C\left( {t - s}\right) }\left\lbrack {A\,\mathrm{F}\{ h\left( {z\left( s\right) }\right) \} + B\,\mathrm{F}\{ g\left( {z\left( {s - \sigma \left( s\right) }\right) }\right) \} }\right\rbrack \mathrm{d}s, & t \in \left\lbrack {0,\mathfrak{t}}\right\rbrack , \\ \varphi \left( t\right) , & t \in \left\lbrack {-\sigma ,0}\right\rbrack , \end{array}\right. \tag{4}
+$$
+
+It holds that $\mathbb{G}\left( z\right)$ is completely continuous and upper semicontinuous with convex closed values. Further, the solutions of the Cauchy problem (3) are exactly the fixed points of $\mathbb{G}\left( z\right)$ .
+
+By $\left( {\mathbf{A}}_{1}\right)$ , the set $\Omega = \left\{ {z \in \mathcal{C}\left( {\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack ,{\mathbb{R}}^{n}}\right) : {\lambda z} \in \mathbb{G}\left( z\right) ,\lambda > }\right.$ $1\}$ is non-empty. Next, let us prove that the set $\Omega$ is bounded.
+
+For $z \in \Omega$ , it holds that ${\lambda z} \in \mathbb{G}\left( z\right)$ for $\lambda > 1$ . So, there are $\gamma \left( t\right) \in \mathrm{F}\{ h\left( {z\left( t\right) }\right) \}$ and $\zeta \left( {t - \sigma \left( t\right) }\right) \in \mathrm{F}\{ g\left( {z\left( {t - \sigma \left( t\right) }\right) }\right) \}$ such that
+
+$$
+z\left( t\right) = \frac{1}{\lambda }\left\lbrack {z\left( 0\right) {e}^{Ct} + {\int }_{0}^{t}{e}^{C\left( {t - s}\right) }\mathbb{c}\left( s\right) \mathrm{d}s}\right\rbrack ,\text{ a.a. }t \in \left\lbrack {0,\mathrm{t}}\right\rbrack , \tag{5}
+$$
+
+where $\mathbb{c}\left( t\right) = {A\gamma }\left( t\right) + {B\zeta }\left( {t - \sigma \left( t\right) }\right)$ .
+
+In view of $\left( {\mathbf{A}}_{1}\right)$ , there are constants ${D}_{\mathbf{a}}$ and ${d}_{\mathbf{a}}$ such that
+
+$$
+\parallel \mathbb{c}\left( t\right) \parallel \leq {D}_{h}\parallel A\parallel \parallel z\left( t\right) \parallel + {D}_{g}\parallel B\parallel \parallel z\left( {t - \sigma \left( t\right) }\right) \parallel + \mathbb{d}, \tag{6}
+$$
+
+where $\mathbb{d} = \left( {{d}_{h}\parallel A\parallel + {d}_{g}\parallel B\parallel }\right)$ and $\mathbb{a} = \{ h, g\}$ . Considering inequalities (5) and (6), it follows that
+
+$$
+\parallel z\left( t\right) \parallel \leq {e}^{\parallel C\parallel t}\left\lbrack {\mathbb{y}\left( t\right) + {D}_{g}\parallel B\parallel {\int }_{0}^{t}{e}^{-\parallel C\parallel s}\parallel z\left( {s - \sigma \left( s\right) }\right) \parallel \mathrm{d}s}\right.
+$$
+
+$$
++ {D}_{h}\parallel A\parallel {\int }_{0}^{t}{e}^{-\parallel C\parallel s}\parallel z\left( s\right) \parallel \mathrm{d}s\rbrack ,\;\text{ a.a. } t \in \left\lbrack {0,\mathfrak{t}}\right\rbrack ,
+$$
+
+which implies that
+
+$$
+\mathbf{z}\left( t\right) \leq \mathbb{y}\left( t\right) + \mathcal{D}{\int }_{0}^{t}\mathbf{z}\left( s\right) \mathrm{d}s,\;\text{ a.a. }t \in \left\lbrack {0,\mathfrak{t}}\right\rbrack , \tag{7}
+$$
+
+where $\mathbf{z}\left( t\right) = {e}^{-\parallel C\parallel t}\mathop{\sup }\limits_{{\theta \in \left\lbrack {-\sigma , t}\right\rbrack }}\parallel z\left( \theta \right) \parallel ,\mathcal{D} = {D}_{h}\parallel A\parallel + {D}_{g}\parallel B\parallel$ , and $\mathbb{y}\left( t\right) = \parallel z\left( 0\right) \parallel + \frac{\mathbb{d}}{\parallel C\parallel }\left( {1 - {e}^{-\parallel C\parallel t}}\right)$ .
+
+Note that ${y}_{\max } = \parallel z\left( 0\right) \parallel + \frac{\mathbb{d}}{\parallel C\parallel }$ is an upper bound of $\mathbb{y}\left( t\right)$ on $\lbrack 0, + \infty )$ . Then, from inequality (7) and Gronwall's lemma, one has
+
+$$
+{e}^{-\parallel C\parallel t}\parallel z\left( t\right) \parallel \leq \mathbf{z}\left( t\right) \leq {y}_{\max }{e}^{\mathcal{D}t}\text{, a.a. }t \in \left\lbrack {0,\mathrm{t}}\right\rbrack , \tag{8}
+$$
+
+which further implies that $\Omega$ is bounded on $\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack$ .
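+
+The passage from (7) to (8) is an application of Gronwall's lemma: if a nonnegative function satisfies $u\left( t\right) \leq {y}_{\max } + \mathcal{D}{\int }_{0}^{t}u\left( s\right) \mathrm{d}s$ , then $u\left( t\right) \leq {y}_{\max }{e}^{\mathcal{D}t}$ . A minimal numerical sanity check of the lemma on a scalar example, with illustrative constants rather than the quantities of the proof:
+
+```python
+import numpy as np
+
+# Gronwall's lemma sanity check on the extremal case u' = D u, u(0) = y_max,
+# which attains the integral inequality with equality. Constants are
+# illustrative placeholders.
+D, y_max, T, dt = 0.8, 2.0, 5.0, 1e-4
+ts = np.arange(0.0, T, dt)
+
+u = np.empty_like(ts)
+u[0] = y_max
+for k in range(1, len(ts)):
+    u[k] = u[k - 1] * (1.0 + D * dt)   # forward-Euler step of u' = D u
+
+bound = y_max * np.exp(D * ts)         # the Gronwall bound y_max * exp(D t)
+```
+
+The discretized trajectory stays below the exponential bound and approaches it as the step size shrinks.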
+
+---
+
+${}^{1}\mathcal{C}\left( {\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack ,{\mathbb{R}}^{n}}\right)$ is the Banach space of the $n$ -dimensional vector-valued continuous functions defined on $\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack$ with norm defined by $\parallel x{\parallel }_{\infty } =$ $\sup \{ \parallel x\left( t\right) \parallel , t \in \left\lbrack {-\sigma ,\mathrm{t}}\right\rbrack \}$ .
+
+---
+
+From the discussions in [7], it is deduced that $\mathbb{G}\left( z\right)$ has a fixed point for $\forall \mathfrak{t} > 0$ , which implies that a Filippov solution to DDS (1) can be defined on $\lbrack 0, + \infty )$ .
+
+Remark 1: Delay $\sigma \left( t\right)$ in DDS (1) is merely required to be bounded, which is a milder condition than those in [1], [7], [8]. For instance, the existence of Filippov solutions for DDSs has been discussed in [1], [7], [8] under the condition that the delays are differentiable and their derivatives do not exceed 1. Moreover, the proof of Lemma 1 differs from that in [6]. The technique in [6] for handling the time delay involves the inequality $\parallel z\left( {t - \sigma \left( t\right) }\right) \parallel \leq \mathop{\max }\limits_{{1 \leq i \leq n}}\mathop{\max }\limits_{{-\sigma \leq s \leq 0}}\left\{ {{z}_{i}\left( s\right) }\right\} + \parallel z\left( t\right) \parallel$ , which is a difficult condition to verify.
+
+## B. Stability Theorem of DDSs
+
+Next, a lemma that can be used to realize synchronization of CDDSs with intermittent control is provided.
+
+Lemma 2: Given a time sequence ${\left\{ {t}_{\rho }\right\} }_{\rho = 0}^{\infty }$ with ${t}_{0} = 0$ , $\mathop{\lim }\limits_{{\rho \rightarrow + \infty }}{t}_{\rho } = + \infty$ , and $\mathop{\lim }\limits_{{\rho \rightarrow + \infty }}\sup \frac{{t}_{{2\rho } + 2} - {t}_{{2\rho } + 1}}{{t}_{{2\rho } + 2} - {t}_{2\rho }} = \phi \in$ (0,1), if there is a continuous and nonnegative function $w\left( t\right)$ with $t \in \lbrack - \sigma , + \infty )$ such that
+
+$$
+\left\{ \begin{array}{l} \dot{w}\left( t\right) \leq - {a}_{1}w\left( t\right) + b\bar{w}\left( t\right) - {c}_{1}, t \in {\mathfrak{c}}_{\rho } = \left\lbrack {{t}_{2\rho },{t}_{{2\rho } + 1}}\right) , \\ \dot{w}\left( t\right) \leq {a}_{2}w\left( t\right) + b\bar{w}\left( t\right) + {c}_{2}, t \in {\mathfrak{u}}_{\rho } = \left\lbrack {{t}_{{2\rho } + 1},{t}_{{2\rho } + 2}}\right) , \end{array}\right.
+$$
+
+(9)
+
+then $w\left( t\right) < M{e}^{-\widetilde{\lambda }t}$ holds for $t \geq 0$ , where $\widetilde{\lambda } = \lambda - \left( {{a}_{1} + {a}_{2}}\right) \phi > 0$ , $\rho \in \mathbb{N}, M > 0,\bar{w}\left( t\right) = w\left( {t - \sigma \left( t\right) }\right) ,\lambda > 0$ is the unique solution of the transcendental equation ${a}_{1} - \lambda - b{e}^{\lambda \sigma } = 0$ , and the other parameters satisfy ${a}_{1} > b \geq 0$ , ${c}_{1} = \left( {{a}_{1} - b}\right) d > 0$ , and ${c}_{2} = \left( {{a}_{2} + b}\right) d > 0$ .
+
+Proof: Let $h\left( t\right) = w\left( t\right) + d$ . Then $\bar{h}\left( t\right) = \bar{w}\left( t\right) + d$ and $h\left( s\right) = w\left( s\right) + d > 0$ for $s \in \left\lbrack {-\sigma ,0}\right\rbrack$ , and
+
+$$
+\left\{ \begin{array}{ll} \dot{h}\left( t\right) \leq - {a}_{1}h\left( t\right) + b\bar{h}\left( t\right) , & t \in {\mathfrak{c}}_{\rho }, \\ \dot{h}\left( t\right) \leq {a}_{2}h\left( t\right) + b\bar{h}\left( t\right) , & t \in {\mathfrak{u}}_{\rho }, \end{array}\right. \tag{10}
+$$
+
+Following the results of [14], the definition of $h\left( t\right)$ and (10) yield $w\left( t\right) < h\left( t\right) \leq \mathop{\sup }\limits_{{s \in \left\lbrack {-\sigma ,0}\right\rbrack }}\bar{h}\left( s\right) {e}^{-\widetilde{\lambda }t}$ . By defining $M = \mathop{\sup }\limits_{{s \in \left\lbrack {-\sigma ,0}\right\rbrack }}\bar{h}\left( s\right)$ , the proof is finished.
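+
+The decay rate $\lambda$ in Lemma 2 is defined implicitly by the transcendental equation ${a}_{1} - \lambda - b{e}^{\lambda \sigma } = 0$ . Since the left-hand side is strictly decreasing in $\lambda$ , positive at $\lambda = 0$ (as ${a}_{1} > b \geq 0$ ), and nonpositive at $\lambda = {a}_{1} - b$ , the root can be found by bisection; a minimal sketch with illustrative parameters:
+
+```python
+import math
+
+def decay_rate(a1, b, sigma, tol=1e-12):
+    """Bisection for the unique lam > 0 with a1 - lam - b*exp(lam*sigma) = 0.
+
+    f(lam) = a1 - lam - b*exp(lam*sigma) is strictly decreasing,
+    f(0) = a1 - b > 0, and f(a1 - b) = b*(1 - exp((a1-b)*sigma)) <= 0,
+    so the root lies in (0, a1 - b].
+    """
+    assert a1 > b >= 0
+    f = lambda lam: a1 - lam - b * math.exp(lam * sigma)
+    lo, hi = 0.0, a1 - b
+    while hi - lo > tol:
+        mid = 0.5 * (lo + hi)
+        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
+    return 0.5 * (lo + hi)
+
+lam = decay_rate(a1=2.0, b=0.5, sigma=1.0)   # illustrative parameters
+```
+
+The achievable rate $\widetilde{\lambda } = \lambda - \left( {{a}_{1} + {a}_{2}}\right) \phi$ then follows once the rest-interval ratio $\phi$ is fixed.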
+
+## C. Research Problem
+
+This paper discusses the complete synchronization of coupled networks consisting of $\ell$ DDSs (1) via an event-triggered intermittent controller. The coupled network is modeled as
+
+$$
+\left\{ \begin{array}{l} {\dot{x}}_{s}\left( t\right) = F\left( {{x}_{s},{x}_{s,\sigma }}\right) + \mathop{\sum }\limits_{{j = 1}}^{\ell }{u}_{sj}\Phi {x}_{j}\left( t\right) + {r}_{s}\left( t\right) , \\ {x}_{s}\left( \theta \right) = {\varphi }_{s}\left( \theta \right) ,\;{\varphi }_{s} \in \mathcal{C}\left( {\left\lbrack {-\sigma ,0}\right\rbrack ,{\mathbb{R}}^{n}}\right) , s \in {\mathbb{N}}_{1}^{\ell }, \end{array}\right. \tag{11}
+$$
+
+where ${x}_{s}\left( t\right) ,{r}_{s}\left( t\right) \in {\mathbb{R}}^{n}$ are the state variable and the control input, respectively, the outer-coupling matrix $U = {\left( {u}_{ij}\right) }_{\ell \times \ell }$ satisfies the diffusive condition, and $\Phi$ is the inner-coupling matrix. Similar to (2), the CDDSs (11) in the sense of Filippov solutions are
+
+$$
+{\dot{x}}_{s}\left( t\right) = \mathbb{F}\left( {{x}_{s},{\gamma }_{s},{\zeta }_{s,\sigma }}\right) + \mathop{\sum }\limits_{{j = 1}}^{\ell }{u}_{sj}\Phi {x}_{j}\left( t\right) + {r}_{s}\left( t\right) , \tag{12}
+$$
+
+where $\mathbb{F}\left( {{x}_{s},{\gamma }_{s},{\zeta }_{s,\sigma }}\right) = C{x}_{s}\left( t\right) + A{\gamma }_{s}\left( t\right) + B{\zeta }_{s}\left( {t - \sigma \left( t\right) }\right)$ , ${\gamma }_{s}\left( t\right) \in \mathrm{F}\left\{ {h\left( {{x}_{s}\left( t\right) }\right) }\right\}$ and ${\zeta }_{s}\left( {t - \sigma \left( t\right) }\right) \in \mathrm{F}\left\{ {g\left( {{x}_{s}\left( {t - \sigma \left( t\right) }\right) }\right) }\right\}$ .
+
+Definition 1: The CDDSs (11) are said to be globally exponentially synchronized with DDS (1) if, by designing suitable controllers ${r}_{s}\left( t\right) , s \in {\mathbb{N}}_{1}^{\ell }$ , there exist $M \geq 0$ and $\alpha > 0$ such that $\parallel e\left( t\right) \parallel \leq M{e}^{-{\alpha t}}$ for $t \geq 0$ , where $e\left( t\right) = \operatorname{cl}{\left( {e}_{s}\left( t\right) \right) }_{\ell }$ and ${e}_{s}\left( t\right) = {x}_{s}\left( t\right) - z\left( t\right)$ .
+
+## III. Synchronization of CDDSs
+
+## A. Control Design
+
+According to [8], the control goal presented in Definition 1 is equivalent to the same issue for the Filippov systems (2) and (12). Hence, the subsequent study directly addresses the synchronization of (2) and (12). In this paper, the new event-triggered intermittent control is designed as
+
+$$
+{r}_{s}\left( t\right) = \left\{ \begin{array}{l} - {K}_{s}{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) - {\xi }_{s}\operatorname{sg}\left( {{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\right) , \\ \;t \in {\mathfrak{c}}_{\rho } \cap \left\lbrack {{t}_{k}^{s,{2\rho }},{t}_{k + 1}^{s,{2\rho }}}\right) , \\ 0, t \in {\mathfrak{u}}_{\rho }, \end{array}\right. \tag{13}
+$$
+
+where ${\xi }_{s} > 0$ and ${K}_{s} \in {\mathbb{R}}^{n \times n}$ are the control gains, ${t}_{k}^{s,{2\rho }}$ is the ${k}^{th}$ control signal update instant of subsystem $s$ , which is determined by the following ETM
+
+$$
+{t}_{k + 1}^{s,{2\rho }} = \inf \left\{ {t > {t}_{k}^{s,{2\rho }} : \begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix} - {\kappa }_{s}\begin{Vmatrix}{{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\end{Vmatrix} > 0}\right\} , \tag{14}
+$$
+
+where ${t}_{0}^{s,{2\rho }} = {t}_{2\rho },{\theta }_{s}\left( t\right) = {e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) - {e}_{s}\left( t\right)$ is the ME and ${\kappa }_{s} \in \left( {0,1}\right)$ is the threshold value.
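+
+The triggering rule (14) amounts to monitoring the linear ME between events. A minimal simulation sketch, using an artificial exponentially decaying error signal and an illustrative threshold ${\kappa }_{s} = {0.3}$ (neither is taken from the paper's example):
+
+```python
+import numpy as np
+
+def triggered(e_now, e_held, kappa):
+    """Linear-ME trigger of Eq. (14) for one subsystem.
+
+    theta_s(t) = e_s(t_k) - e_s(t); an event fires (the control signal is
+    re-sampled) once ||theta_s(t)|| exceeds kappa * ||e_s(t_k)||.
+    """
+    theta = e_held - e_now
+    return np.linalg.norm(theta) > kappa * np.linalg.norm(e_held)
+
+# Toy run with an artificial exponentially decaying error signal.
+kappa = 0.3
+e_held = np.array([1.0, -1.0])
+trigger_times = []
+for t in np.arange(0.0, 2.0, 0.01):
+    e_now = np.exp(-2.0 * t) * np.array([1.0, -1.0])
+    if triggered(e_now, e_held, kappa):
+        e_held = e_now                 # event: update the held sample
+        trigger_times.append(t)
+# Here events are spaced roughly ln(1/(1 - kappa))/2 ≈ 0.18 s apart, so the
+# control signal is updated far less often than the 100 Hz monitoring rate.
+```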
+
+Remark 2: The ME ${\theta }_{s}\left( t\right)$ in (14) is linear and demands less computing power than the nonlinear MEs in [11], [12], [17], which will be further clarified in the numerical example. In addition, the MEs in [11], [12], [17] are piecewise continuous, which introduces additional challenges in proving the exclusion of Zeno behavior. These challenges do not arise with a linear ME. Hence, event-triggered nonsmooth control with a linear ME is more practical.
+
+Considering system (2) and CDDSs (12) with controller (13), the error system is obtained as
+
+$$
+{\dot{e}}_{s}\left( t\right) = {\mathrm{F}}_{s}\left( t\right) , t \in {\mathfrak{c}}_{\rho }, \tag{15a}
+$$
+
+$$
+{\dot{e}}_{s}\left( t\right) = {\widetilde{\mathrm{F}}}_{s}\left( t\right) , t \in {\mathfrak{u}}_{\rho },\rho \in \mathbb{N}, \tag{15b}
+$$
+
+and its compact Kronecker product form is
+
+$$
+\dot{\mathbf{e}}\left( t\right) = \mathrm{F}\left( {\mathbf{e},\theta ,\mathrm{r},{\mathbf{c}}_{\sigma }}\right) , t \in {\mathfrak{c}}_{\rho }, \tag{16a}
+$$
+
+$$
+\dot{\mathbf{e}}\left( t\right) = \widetilde{\mathrm{F}}\left( {\mathbf{e},\theta ,\mathbf{r},{\mathbf{c}}_{\sigma }}\right) , t \in {\mathfrak{u}}_{\rho },\rho \in \mathbb{N}, \tag{16b}
+$$
+
+where ${\mathrm{F}}_{s}\left( t\right) = {\widetilde{\mathrm{F}}}_{s}\left( t\right) - {\xi }_{s}\operatorname{sg}\left( {{e}_{s}\left( t\right) + {\theta }_{s}\left( t\right) }\right) - {K}_{s}\left( {{e}_{s}\left( t\right) + {\theta }_{s}\left( t\right) }\right)$ , ${\widetilde{\mathrm{F}}}_{s}\left( t\right) = C{e}_{s}\left( t\right) + A{\mathrm{r}}_{s}\left( t\right) + B{\mathrm{c}}_{s}\left( {t - \sigma \left( t\right) }\right) + \mathop{\sum }\limits_{{j = 1}}^{\ell }{u}_{sj}\Phi {e}_{j}\left( t\right)$ , $\mathrm{F}\left( {\mathbf{e},\theta ,\mathbf{r},{\mathbf{c}}_{\sigma }}\right) = \widetilde{\mathrm{F}}\left( {\mathbf{e},\theta ,\mathbf{r},{\mathbf{c}}_{\sigma }}\right) - \mathcal{K}\left( {\mathbf{e}\left( t\right) + \theta \left( t\right) }\right) - \xi \mathbf{{sg}}\left( {\mathbf{e}\left( t\right) + \theta \left( t\right) }\right)$ , $\widetilde{\mathrm{F}}\left( {\mathbf{e},\theta ,\mathbf{r},{\mathbf{c}}_{\sigma }}\right) = \left( {\mathcal{C} + \mathcal{U}}\right) \mathbf{e}\left( t\right) + \mathcal{A}\mathbf{r}\left( t\right) + \mathcal{B}\mathbf{c}\left( {t - \sigma \left( t\right) }\right)$ , $\theta \left( t\right) = \operatorname{cl}{\left( {\theta }_{s}\left( t\right) \right) }_{\ell }$ , $\mathbf{r}\left( t\right) = \operatorname{cl}{\left( {\mathrm{r}}_{s}\left( t\right) \right) }_{\ell }$ , ${\mathrm{r}}_{s}\left( t\right) = {\gamma }_{s}\left( t\right) - \gamma \left( t\right)$ , $\mathbf{{sg}}\left( {\mathbf{e}\left( t\right) + \theta \left( t\right) }\right) = \operatorname{cl}{\left( \operatorname{sg}\left( {e}_{s}\left( t\right) + {\theta }_{s}\left( t\right) \right) \right) }_{\ell }$ , $\mathbf{c}\left( {t - \sigma \left( t\right) }\right) = \operatorname{cl}{\left( {\mathrm{c}}_{s}\left( t - \sigma \left( t\right) \right) \right) }_{\ell }$ , ${\mathrm{c}}_{s}\left( {t - \sigma \left( t\right) }\right) = {\zeta }_{s}\left( {t - \sigma \left( t\right) }\right) - \zeta \left( {t - \sigma \left( t\right) }\right)$ , $\mathcal{X} = {I}_{\ell } \otimes X, X \in \{ A, B, C\}$ , $\mathcal{U} = U \otimes \Phi$ , $\mathcal{K} = \operatorname{dg}{\left( {K}_{s}\right) }_{\ell }$ , and $\xi = \operatorname{dg}{\left( {\xi }_{s}{I}_{n}\right) }_{\ell }$ .
+
+## B. Synchronization Analysis
+
+The synchronization criteria are given below.
+
+Theorem 1: Assume that $\left( {\mathbf{A}}_{1}\right)$ holds. For given $\phi ,{\kappa }_{s} \in \left( {0,1}\right)$ , ${a}_{1} > b = \begin{Vmatrix}{\mathcal{B}}_{D}^{g}\end{Vmatrix}$ , and ${a}_{1} + {a}_{2} > 0$ , if there exist matrices $\mathcal{K} = \operatorname{dg}{\left( {K}_{s}\right) }_{\ell } \in {\mathbb{R}}^{\ell n \times \ell n}$ and $\Psi = \operatorname{dg}{\left( {\Psi }_{s}\right) }_{\ell } \in {\mathbb{D}}_{ + }^{\ell n \times \ell n}$ such that $\eta = \frac{{a}_{1} - b}{{a}_{2} + b}v > 0,{\zeta }_{s} = \frac{1 + {\widetilde{\kappa }}_{s}}{1 - {\widetilde{\kappa }}_{s}}\eta ,{\xi }_{s} = \frac{1 + {\widetilde{\kappa }}_{s}}{1 - {\widetilde{\kappa }}_{s}}v + {\zeta }_{s}, s \in {\mathbb{N}}_{1}^{\ell }$ ,
+
+$$
+{\Omega }_{1} = \left( \begin{matrix} \operatorname{He}\left\lbrack {{\mathbb{A}}_{1} + {\mathcal{A}}_{D}^{h}}\right\rbrack + \widetilde{\Psi } & - \mathcal{K} \\ * & - \Psi \end{matrix}\right) < 0, \tag{17}
+$$
+
+$$
+{\Omega }_{2} = \operatorname{He}\left\lbrack {{\mathbb{A}}_{2} + {\mathcal{A}}_{D}^{h}}\right\rbrack < 0, \tag{18}
+$$
+
+then CDDS (11) with controller (13) is globally exponentially synchronized onto DDS (1), i.e., $\parallel e\left( t\right) \parallel \leq M{e}^{-\widetilde{c}t},\widetilde{c} = c -$ $\left( {{a}_{1} + {a}_{2}}\right) \phi > 0$ , where $c$ is the solution of ${a}_{1} - c - b{e}^{c\sigma } = 0,\phi$ is defined in Lemma 2, $M = \mathop{\sup }\limits_{{s \in \left\lbrack {-\sigma ,0}\right\rbrack }}\parallel \mathbf{e}\left( s\right) \parallel + \frac{v}{{a}_{2} + b},{\mathbb{A}}_{1} =$ $\mathcal{C} - \mathcal{K} + \mathcal{U} + {a}_{1}{I}_{\ell n},{\mathbb{A}}_{2} = \mathcal{C} + \mathcal{U} - {a}_{2}{I}_{\ell n},\widetilde{\Psi } = \operatorname{dg}{\left( {\widetilde{\kappa }}_{s}^{2}{\Psi }_{s}\right) }_{\ell },{\mathcal{A}}_{D}^{h} =$ ${I}_{\ell } \otimes {\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {a}_{ir}\right| {d}_{rj}^{h}\right) }_{n \times n},{\mathcal{B}}_{D}^{g} = {I}_{\ell } \otimes {\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {b}_{ir}\right| {d}_{rj}^{g}\right) }_{n \times n}$ , ${\mathbf{a}}_{h} = {\ell }^{\frac{1}{2}}\parallel \mathrm{{cl}}{\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {a}_{ir}\right| {\widehat{d}}_{r}^{h}\right) }_{n}\parallel ,{\mathbf{b}}_{g} = {\ell }^{\frac{1}{2}}\parallel \mathrm{{cl}}{\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {b}_{ir}\right| {\widehat{d}}_{r}^{g}\right) }_{n}\parallel ,$ ${\widetilde{\kappa }}_{s} = \frac{{\kappa }_{s}}{1 - {\kappa }_{s}}$ , and $v = {\mathbf{a}}_{h} + {\mathbf{b}}_{g}$ .
+
+Proof: Consider the Lyapunov function $V\left( t\right) = \parallel \mathbf{e}\left( t\right) \parallel$ .
+
+For $t \in {\mathfrak{c}}_{\rho },\rho \in \mathbb{N}$ , it follows from (16a) that
+
+$$
+{\mathcal{D}}^{ + }\left\lbrack {V\left( t\right) }\right\rbrack = \frac{2{\mathbf{e}}^{\mathrm{T}}\left( t\right) \mathbf{F}\left( {\mathbf{e},\theta ,\mathbf{r},{\mathbf{c}}_{\sigma }}\right) }{{2V}\left( t\right) }. \tag{19}
+$$
+
+It follows from $\left( {\mathbf{A}}_{1}\right)$ and the Cauchy-Schwarz inequality that
+
+$$
+{\mathbf{e}}^{\top }\left( t\right) \mathcal{A}\mathbf{r}\left( t\right) \leq {\mathbf{e}}^{\top }\left( t\right) {\mathcal{A}}_{D}^{h}\mathbf{e}\left( t\right) + {\mathbf{a}}_{h}\parallel \mathbf{e}\left( t\right) \parallel , \tag{20}
+$$
+
+$$
+{\mathbf{e}}^{\top }\left( t\right) \mathcal{B}\mathbf{c}\left( {t - \sigma \left( t\right) }\right) \leq \left( {b\parallel \mathbf{e}\left( {t - \sigma \left( t\right) }\right) \parallel + {\mathbf{b}}_{g}}\right) \parallel \mathbf{e}\left( t\right) \parallel . \tag{21}
+$$
+
+The ETM (14) implies $\begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix} \leq {\widetilde{\kappa }}_{s}\begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix}$ and
+
+$$
+{\theta }^{\top }\left( t\right) {\Psi \theta }\left( t\right) \leq {\mathbf{e}}^{\top }\left( t\right) \widetilde{\Psi }\mathbf{e}\left( t\right) . \tag{22}
+$$
+
+Moreover, one has from $\begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix} \leq {\widetilde{\kappa }}_{s}\begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix}$ that
+
+$$
+{\mathbf{e}}^{\top }\left( t\right) \xi \operatorname{sg}\left( {\mathbf{e}\left( t\right) + \theta \left( t\right) }\right) \geq \mathop{\sum }\limits_{{s = 1}}^{\ell }\frac{{\xi }_{s}\begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix}\left( {\begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix} - \begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix}}\right) }{\begin{Vmatrix}{e}_{s}\left( t\right) + {\theta }_{s}\left( t\right) \end{Vmatrix}}
+$$
+
+$$
+\geq \mathop{\sum }\limits_{{s = 1}}^{\ell }\frac{{\xi }_{s}\left( {1 - {\widetilde{\kappa }}_{s}}\right) {\begin{Vmatrix}{e}_{s}\left( t\right) \end{Vmatrix}}^{2}}{\left( {1 + {\widetilde{\kappa }}_{s}}\right) \begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix}}
+$$
+
+$$
+\geq \left( {v + \eta }\right) \parallel \mathbf{e}\left( t\right) \parallel \text{.} \tag{23}
+$$
+
+Substituting inequalities (20)-(23) into (19) yields
+
+$$
+{\mathcal{D}}^{ + }\left\lbrack {V\left( t\right) }\right\rbrack \leq \frac{{\varepsilon }^{\mathrm{T}}\left( t\right) {\Omega \varepsilon }\left( t\right) + {2bV}\left( t\right) V\left( {t - \sigma \left( t\right) }\right) }{{2V}\left( t\right) }
+$$
+
+$$
+- {a}_{1}V\left( t\right) - \eta \tag{24}
+$$
+
+where $\varepsilon \left( t\right) = {\left( {e}^{\top }\left( t\right) ,{\theta }^{\top }\left( t\right) \right) }^{\top }$ . Then, condition (17) and inequality (24) ensure that
+
+$$
+{\mathcal{D}}^{ + }\left\lbrack {V\left( t\right) }\right\rbrack \leq - {a}_{1}V\left( t\right) + {bV}\left( {t - \sigma \left( t\right) }\right) - \eta . \tag{25}
+$$
+
+Similarly, for $t \in {\mathfrak{u}}_{\rho },\rho \in \mathbb{N}$ , one obtains from (16b) and (18) that
+
+$$
+{\mathcal{D}}^{ + }\left\lbrack {V\left( t\right) }\right\rbrack \leq {a}_{2}V\left( t\right) + {bV}\left( {t - \sigma \left( t\right) }\right) + v. \tag{26}
+$$
+
+Then, from Lemma 2 and inequalities (25)-(26), the result of Theorem 1 can be obtained.
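
The decay rate $c$ in Theorem 1 is defined implicitly by the transcendental equation ${a}_{1} - c - b{e}^{c\sigma } = 0$ , whose left-hand side is strictly decreasing in $c$ and positive at $c = 0$ whenever ${a}_{1} > b$ , so the positive root is unique. As a minimal numerical sketch (not the authors' code), the root can be found by bisection; the parameter values are taken from the example in Section IV:

```python
import math

def decay_rate(a1, b, sigma, tol=1e-12):
    """Bisection for the unique positive root c of a1 - c - b*e^{c*sigma} = 0.

    The left-hand side is strictly decreasing in c and positive at c = 0
    whenever a1 > b, so the root is unique and lies in (0, a1).
    """
    f = lambda c: a1 - c - b * math.exp(c * sigma)
    lo, hi = 0.0, a1
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Parameter values from the example in Section IV (sigma(t) <= 1):
c = decay_rate(a1=4.6, b=1.603, sigma=1.0)
c_tilde = c - (4.6 + 3.88) * 0.1002   # c~ = c - (a1 + a2)*phi with phi = 0.1002
print(round(c, 4), c_tilde > 0)
```

For these values $c \approx {0.85}$ , so the resulting rate $\widetilde{c} = c - \left( {{a}_{1} + {a}_{2}}\right) \phi$ is only marginally positive, which illustrates how tight the rate condition of Theorem 1 can be in practice.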
+
+Remark 3: Based on the novel nonsmooth event-triggered intermittent control (13) and Lemma 2, Theorem 1 presents complete synchronization criteria for CDDS (11). The result is quite general, since Theorem 1 allows the derivative of $\sigma \left( t\right)$ to be less than, equal to, or greater than 1, and even allows $\sigma \left( t\right)$ to be nondifferentiable. In particular, when the derivative of the delay $\sigma \left( t\right)$ exceeds 1 or $\sigma \left( t\right)$ is nondifferentiable, the nonsmooth control (13) exposes the limitations of Lyapunov-Krasovskii functional methods in achieving complete synchronization. The main reason is that many delay-handling techniques in Lyapunov-Krasovskii functional methods rely on linear controls, which cannot achieve the complete synchronization of CDDS (11). Hence, a new framework for analyzing the complete synchronization of CDDSs under intermittent control is proposed.
+
+Next, let us discuss the Zeno behavior of ETM (14).
+
+Theorem 2: Under the assumption and conditions of Theorem 1, the triggering instants generated by ETM (14) exclude Zeno behavior.
+
+Proof: For any $s \in {\mathbb{N}}_{1}^{\ell }$ and $t \in {\mathfrak{c}}_{\rho } \cap \left\lbrack {{t}_{k}^{s,{2\rho }},{t}_{k + 1}^{s,{2\rho }}}\right)$ , one has that
+
+$$
+{\mathcal{D}}^{ + }\left\lbrack \begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix}\right\rbrack \leq \begin{Vmatrix}{{\mathcal{D}}^{ + }\left\lbrack {{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) - {e}_{s}\left( t\right) }\right\rbrack }\end{Vmatrix} = \begin{Vmatrix}{{\dot{e}}_{s}\left( t\right) }\end{Vmatrix}. \tag{27}
+$$
+
+In view of Theorem 1, one concludes that there is a ${\mathrm{u}}_{s} > 0$ such that $\begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix} \leq {\mathrm{u}}_{s}$ . Then, one can obtain from the error system (15a) and $\left( {\mathbf{A}}_{1}\right)$ that
+
+$$
+\begin{Vmatrix}{{\dot{e}}_{s}\left( t\right) }\end{Vmatrix} \leq {\vartheta }_{s} + \begin{Vmatrix}{K}_{s}\end{Vmatrix}\begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix}, \tag{28}
+$$
+
+where ${\vartheta }_{s} = \left( {\begin{Vmatrix}{C - {K}_{s}}\end{Vmatrix} + \begin{Vmatrix}{A}_{D}^{h}\end{Vmatrix} + \begin{Vmatrix}{B}_{D}^{g}\end{Vmatrix}}\right) {\mathrm{u}}_{s} + v + {\xi }_{s} +$ $2\left| {u}_{ss}\right| \parallel \Phi \parallel \mathop{\sum }\limits_{{j = 1}}^{\ell }{u}_{j},{A}_{D}^{h} = {\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {a}_{ir}\right| {d}_{rj}^{h}\right) }_{n \times n}$ , and ${B}_{D}^{g} =$ ${\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {b}_{ir}\right| {d}_{rj}^{g}\right) }_{n \times n}$ .
+
+One has from inequalities (27)-(28) and $\begin{Vmatrix}{{\theta }_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\end{Vmatrix} = 0$ that $\begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix} \leq \frac{{\vartheta }_{s}}{\begin{Vmatrix}{K}_{s}\end{Vmatrix}}\left( {{e}^{\begin{Vmatrix}{K}_{s}\end{Vmatrix}\left( {t - {t}_{k}^{s,{2\rho }}}\right) } - 1}\right)$ , that is, $\left( {t - {t}_{k}^{s,{2\rho }}}\right) \geq \frac{1}{\begin{Vmatrix}{K}_{s}\end{Vmatrix}}\ln \left( {\frac{\begin{Vmatrix}{K}_{s}\end{Vmatrix}}{{\vartheta }_{s}}\begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix} + 1}\right)$ . Note that the next event will not be triggered until $\begin{Vmatrix}{{\theta }_{s}\left( {t}_{k + 1}^{s,{2\rho } - }\right) }\end{Vmatrix} = {\kappa }_{s}\begin{Vmatrix}{{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\end{Vmatrix}$ . Hence, the inequality above implies that $\left( {{t}_{k + 1}^{s,{2\rho } - } - {t}_{k}^{s,{2\rho }}}\right) \geq \frac{1}{\begin{Vmatrix}{K}_{s}\end{Vmatrix}}\ln \left( {\frac{\begin{Vmatrix}{K}_{s}\end{Vmatrix}{\kappa }_{s}}{{\vartheta }_{s}}\begin{Vmatrix}{{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\end{Vmatrix} + 1}\right) > 0.$
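
The closing inequality gives an explicit, strictly positive lower bound on each inter-event interval, which is exactly what rules out Zeno behavior. A small sketch with illustrative (hypothetical) magnitudes for $\begin{Vmatrix}{K}_{s}\end{Vmatrix}$ , ${\kappa }_{s}$ , ${\vartheta }_{s}$ , and $\begin{Vmatrix}{{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\end{Vmatrix}$ :

```python
import math

def min_inter_event_time(K_norm, kappa, vartheta, e_norm):
    """Zeno-exclusion bound of Theorem 2:
    t_{k+1} - t_k >= ln(K_norm * kappa * e_norm / vartheta + 1) / K_norm."""
    return math.log(K_norm * kappa * e_norm / vartheta + 1.0) / K_norm

# Hypothetical magnitudes, loosely matching the gains of the example:
tau = min_inter_event_time(K_norm=15.0, kappa=0.12, vartheta=40.0, e_norm=2.0)
print(tau, tau > 0)   # a strictly positive dwell time => no Zeno behavior
```

The bound shrinks as ${\vartheta }_{s}$ grows or ${\kappa }_{s}$ shrinks, but stays positive for any finite parameters.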
+
+## IV. NUMERICAL EXAMPLE
+
+This section utilizes the Hopfield neural network (HNN) with discontinuous activation functions to verify the effectiveness of our results. The circuit diagram of the HNN is shown in Fig. 1(a) with detailed explanations provided in [23]. By applying Kirchhoff's laws, the HNN can be represented as a DDS (1). Next, the parameters of the HNN, in the form of those in DDS (1), are selected for numerical simulation.
+
+Consider an HNN in the form of DDS (1) with $z\left( t\right) = {\left( {z}_{1}\left( t\right) ,{z}_{2}\left( t\right) \right) }^{\top }$ , $g\left( z\right) = {\left( {g}_{1}\left( {z}_{1}\right) ,{g}_{2}\left( {z}_{2}\right) \right) }^{\top }, h\left( z\right) = {\left( {h}_{1}\left( {z}_{1}\right) ,{h}_{2}\left( {z}_{2}\right) \right) }^{\top },\sigma \left( t\right) =$ ${0.65} + {0.35}\left| {\sin \left( t\right) }\right| , C = \mathrm{{dg}}\left( {-{1.5}, - 1}\right) , i = 1,2$ ,
+
+$$
+A = \left( \begin{matrix} 2 & - {0.1} \\ - {4.9} & 3 \end{matrix}\right) ,{g}_{i}\left( {z}_{i}\right) = \left\{ \begin{array}{l} \frac{\left| {{z}_{i} + 1}\right| - \left| {{z}_{i} - 1}\right| }{2} + {0.04},{z}_{i} > 0, \\ \frac{\left| {{z}_{i} + 1}\right| - \left| {{z}_{i} - 1}\right| }{2} - {0.01},{z}_{i} < 0, \end{array}\right.
+$$
+
+$$
+B = \left( \begin{matrix} - {1.5} & {0.1} \\ - {0.5} & - {0.5} \end{matrix}\right) ,{h}_{i}\left( {z}_{i}\right) = \left\{ \begin{array}{l} \tanh \left( {z}_{i}\right) + {0.01},{z}_{i} > 0, \\ \tanh \left( {z}_{i}\right) - {0.02},{z}_{i} < 0. \end{array}\right.
+$$
+
+One can verify that $\mathbf{a}\left( \cdot \right) ,\mathbf{a} = \{ h, g\}$ satisfy $\left( {\mathbf{A}}_{1}\right)$ with ${d}_{11}^{\mathbf{a}} = {d}_{22}^{\mathbf{a}} = 1$ , ${d}_{12}^{\mathbf{a}} = {d}_{21}^{\mathbf{a}} = 0,{\widehat{d}}_{1}^{h} = {\widehat{d}}_{2}^{h} = {0.03}$ , and ${\widehat{d}}_{1}^{g} = {\widehat{d}}_{2}^{g} = {0.05}$ .
+
+
+
+Fig. 1: (a) Circuit diagram of the HNN and coupling topology; (b) Trajectories of DDS (1) and CDDS (11) without controller.
+
+Now, consider that the coupled system (11) is composed of three copies of DDS (1), where $\Phi = \operatorname{dg}\left( {2,1}\right)$ and $U = {\left( {u}_{ij}\right) }_{3 \times 3}$ is the Laplacian matrix of the digraph shown in Fig. 1(a). When the initial values of DDS (1) and CDDS (11) are randomly chosen on $\left\lbrack {-5,5}\right\rbrack ,\forall t \in \left\lbrack {-1,0}\right\rbrack$ , their trajectories are shown in Fig. 1(b), from which one can see that synchronization cannot be realized without control.
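
The open-loop behavior in Fig. 1(b) can be reproduced qualitatively with a short delay-differential simulation. The sketch below is not the authors' simulation code: it uses a plain forward-Euler scheme with a history buffer, approximates the Filippov solution by evaluating the single-valued activations away from the discontinuity, and integrates two copies of DDS (1) from different initial histories:

```python
import numpy as np

# HNN parameters from the example
C = np.diag([-1.5, -1.0])
A = np.array([[2.0, -0.1], [-4.9, 3.0]])
B = np.array([[-1.5, 0.1], [-0.5, -0.5]])

def h(z):
    """Discontinuous activation h_i (value at z_i = 0 taken as 0)."""
    return np.where(z > 0, np.tanh(z) + 0.01,
           np.where(z < 0, np.tanh(z) - 0.02, 0.0))

def g(z):
    """Discontinuous saturation-type activation g_i."""
    sat = (np.abs(z + 1.0) - np.abs(z - 1.0)) / 2.0
    return np.where(z > 0, sat + 0.04, np.where(z < 0, sat - 0.01, 0.0))

def simulate(z0, T=15.0, dt=1e-3):
    """Forward-Euler integration of DDS (1) with a history buffer for the
    time-varying delay sigma(t) = 0.65 + 0.35*|sin t| (so sigma <= 1)."""
    n_hist = int(1.0 / dt)                       # buffer covers sigma_max = 1
    steps = int(T / dt)
    traj = np.tile(np.asarray(z0, dtype=float), (n_hist + steps + 1, 1))
    for k in range(steps):
        i = n_hist + k
        sigma = 0.65 + 0.35 * abs(np.sin(k * dt))
        zd = traj[i - int(sigma / dt)]           # delayed state z(t - sigma(t))
        z = traj[i]
        traj[i + 1] = z + dt * (C @ z + A @ h(z) + B @ g(zd))
    return traj[n_hist:]

# Two copies of DDS (1) from different (constant) initial histories:
za = simulate([1.0, -2.0])
zb = simulate([-3.0, 4.0])
err = np.linalg.norm(za - zb, axis=1)
print(err[0], err[-1])   # without control the error typically stays away from zero
```

The time step and the constant initial histories are our own choices; they illustrate the claim of Fig. 1(b) rather than reproduce it exactly.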
+
+Taking ${a}_{1} = {4.6},{a}_{2} = {3.88},{\kappa }_{1} = {0.12},{\kappa }_{2} = {0.17}$ , and ${\kappa }_{3} = {0.15}$ , one obtains $b = {1.603},{\xi }_{1} = {1.197}$ , ${\xi }_{2} = {1.378},{\xi }_{3} = {1.299}$ , and $\phi = {0.1002}$ . Solving conditions (17) and (18) yields ${K}_{1} = \left( \begin{matrix} {11.480} & {3.759} \\ {3.759} & {13.908} \end{matrix}\right) ,{K}_{2} =$ $\left( \begin{matrix} {11.690} & {3.815} \\ {3.815} & {14.139} \end{matrix}\right) ,{K}_{3} = \left( \begin{matrix} {11.744} & {3.854} \\ {3.854} & {14.236} \end{matrix}\right)$ . Hence, the conditions of Theorem 1 hold, that is, CDDS (11) with controller (13) can be synchronized onto DDS (1). Fig. 2(a) shows the evolution of the error trajectories of (11) and (1) when the work intervals of controller (13) are $\lbrack 0,{0.5}) \cup \lbrack {0.5},{0.7}) \cup \lbrack {0.7},{1.6}) \cup \lbrack {1.6},{1.65}) \cup$ $\lbrack {1.65},{2.55}) \cup \lbrack {2.55},{2.68}) \cup \lbrack {2.68},{3.98}) \cup \lbrack {3.98},4)\cdots$ . In addition, the triggering instants and intervals of the three subsystems are displayed in Fig. 2(b). One finds from Fig. 1(b) and Fig. 2 that the designed event-triggered controller (13) is both effective and resource-efficient.
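
The constants reported above follow directly from the formulas in Theorem 1. The short check below recomputes $b = \begin{Vmatrix}{\mathcal{B}}_{D}^{g}\end{Vmatrix}$ and ${\xi }_{1},{\xi }_{2},{\xi }_{3}$ from the example data; this is an independent verification sketch, not the authors' code:

```python
import numpy as np

# Example data: system matrices and the (A1) constants d^h = 0.03, d^g = 0.05
A = np.array([[2.0, -0.1], [-4.9, 3.0]])
B = np.array([[-1.5, 0.1], [-0.5, -0.5]])
d_h, d_g = 0.03, 0.05
ell = 3                                   # number of coupled nodes
a1, a2 = 4.6, 3.88
kappas = [0.12, 0.17, 0.15]

b = np.linalg.norm(np.abs(B), 2)          # b = ||B_D^g|| (here d^g = I, so B_D^g = |B|)
a_h = np.sqrt(ell) * np.linalg.norm(np.abs(A).sum(axis=1) * d_h)
b_g = np.sqrt(ell) * np.linalg.norm(np.abs(B).sum(axis=1) * d_g)
v = a_h + b_g
eta = (a1 - b) / (a2 + b) * v
xis = []
for kappa in kappas:
    kt = kappa / (1.0 - kappa)            # \tilde kappa_s
    ratio = (1.0 + kt) / (1.0 - kt)
    zeta = ratio * eta                    # zeta_s
    xis.append(ratio * v + zeta)          # xi_s
print(round(b, 3), [round(x, 3) for x in xis])   # 1.603 [1.197, 1.378, 1.299]
```

Running it reproduces $b = {1.603}$ and ${\xi }_{1} = {1.197},{\xi }_{2} = {1.378},{\xi }_{3} = {1.299}$ up to rounding.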
+
+
+
+Fig. 2: (a) Error trajectories of DDS (1) and CDDS (11) with controller (13); (b) Triggering instants and intervals.
+
+Comparative Experiment: To substantiate novelty 3), a comparative experiment with the ETMs in [11], [12], [17] is conducted, where average running time (ART) and trigger rate (TR) are the performance measures. The results are listed in Table I. In the simulation, the time-step size is 0.001, and a total of 12420 control signals are generated on $\left\lbrack {0,{15}}\right\rbrack$ . The experiment code runs on a computer with Windows 10, Intel Core i5-10400, 2.9GHz, and 16GB RAM. One observes from Table I that ETM (14) not only saves ${52.78}\%$ of the running time but also reduces the trigger frequency.
+
+TABLE I: ${\mathbf{{TR}}}^{1}$ and ${\mathbf{{ART}}}^{2}$ of ETM (14) and [11],[12],[17].
+
+| Method | TR (%), node 1 | TR (%), node 2 | TR (%), node 3 | ART (sec) |
+| --- | --- | --- | --- | --- |
+| ETM (14) | 27.17 | 36.43 | 31.84 | 0.5214 |
+| ETMs of [11], [12], [17] | 39.51 | 38.93 | 38.38 | 0.7966 |
+
+${}^{1}$ TR $= \frac{\text{The number of trigger releases}}{\text{Total signals}}$ ; ${}^{2}$ ART is the average obtained from 10 runs of the code.
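
The computational advantage of the linear ME can be illustrated by emulating ETM (14) along a sampled error trajectory: the held sample ${e}_{s}\left( {t}_{k}\right)$ is only updated once the linear measurement error ${\theta }_{s}\left( t\right) = {e}_{s}\left( {t}_{k}\right) - {e}_{s}\left( t\right)$ reaches ${\kappa }_{s}\begin{Vmatrix}{{e}_{s}\left( {t}_{k}\right) }\end{Vmatrix}$ . The sketch below uses a synthetic decaying oscillation in place of the true synchronization error, so the resulting trigger rate is only indicative:

```python
import numpy as np

def trigger_rate(e, kappa):
    """Emulate ETM (14) along a sampled trajectory e (shape: T x n).

    The linear measurement error is theta(t) = e(t_k) - e(t); an event is
    released (and the held sample updated) once
    ||theta(t)|| >= kappa * ||e(t_k)||.
    """
    held = e[0]
    triggers = 0
    for et in e:
        if np.linalg.norm(held - et) >= kappa * np.linalg.norm(held):
            held = et
            triggers += 1
    return triggers / len(e)

# Synthetic decaying oscillation standing in for the synchronization error:
t = np.arange(0.0, 15.0, 1e-3)
e = (np.exp(-0.5 * t) * np.cos(3.0 * t)).reshape(-1, 1)
tr = trigger_rate(e, kappa=0.12)
print(tr)   # well below 1: only a small fraction of samples release an event
```

The per-sample check is a single vector subtraction and two norms, which is the easy-to-compute form that Table I contrasts against the ETMs of [11], [12], [17].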
+
+## V. CONCLUSION
+
+This talk has considered the complete synchronization of CDDSs under event-triggered intermittent control. By developing a new stability inequality and a weighted-norm-based Lyapunov function, sufficient synchronization conditions have been derived. Note that the results of this talk impose no restrictions on the derivatives of the delay. Moreover, the experiments have shown that the novel event-triggered control with a linear ME requires less computing power than existing schemes.
+
+## REFERENCES
+
+[1] M. Forti, P. Nistri, and D. Papini, "Global exponential stability and global convergence in finite time of delayed neural networks with infinite gain," IEEE Trans. Neural Netw., vol. 16, no. 6, pp. 1449-1463, 2005.
+
+[2] P. Wang, G. Wen, T. Huang, et al., "Consensus of Lur'e multi-agent systems with directed switching topology," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 69, no. 2, pp. 474-478, 2021.
+
+[3] W. Zhu, X. Yu, S. Li, et al., "Finite-time discontinuous control of nonholonomic chained-form systems," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 70, no. 6, pp. 2001-2005, 2023.
+
+[4] Z. Cai, L. Huang, and Z. Wang, "Fixed/preassigned-time stability of time-varying nonlinear system with discontinuity: application to Chua's circuit," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 69, no. 6, pp. 2987-2991, 2022.
+
+[5] Z. Zhang, H. Chen, and H. Zhu, "Generalized Halanay inequality and its application to delay differential inclusions," Automatica, vol. 166, p. 111704, 2024.
+
+[6] X. Liu, T. Chen, J. Cao, et al., "Dissipativity and quasi-synchronization for neural networks with discontinuous activations and parameter mismatches," Neural Netw., vol. 24, no. 10, pp. 1013-1021, 2011.
+
+[7] X. Yang, Z. Yang, and X. Nie, "Exponential synchronization of discontinuous chaotic systems via delayed impulsive control and its application to secure communication," Commun. Nonlinear Sci. Numer. Simul., vol. 19, no. 5, pp. 1529-1543, 2014.
+
+[8] X. Yang, Q. Song, J. Liang, et al., "Finite-time synchronization of coupled discontinuous neural networks with mixed delays and nonidentical perturbations," J. Franklin Inst., vol. 352, no. 10, pp. 4382-4406, 2015.
+
+[9] X. Zhang, P. Niu, X. Hu, et al., "Global quasi-synchronization and global anti-synchronization of delayed neural networks with discontinuous activations via non-fragile control strategy," Neurocomputing, vol. 361, pp. 1-9, 2019.
+
+[10] W. Zhang, X. Yang, C. Xu, et al., "Finite-time synchronization of discontinuous neural networks with delays and mismatched parameters," IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 8, pp. 3761-3771, 2017.
+
+[11] Y. Zhou and Z. Zeng, "Event-triggered finite-time stabilization of fuzzy neural networks with infinite time delays and discontinuous activations," IEEE Trans. Fuzzy Syst., vol. 32, no. 1, pp. 1-11, 2024.
+
+[12] N. Rong and Z. Wang, "Event-based fixed-time control for interconnected systems with discontinuous interactions," IEEE Trans. Syst. Man Cybern.: Syst., vol. 52, no. 8, pp. 4925-4936, 2021.
+
+[13] X. She, L. Wang, and Y. Zhang, "Finite-time stability of genetic regulatory networks with nondifferential delays," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 70, no. 6, pp. 2107-2111, 2023.
+
+[14] X. Liu and T. Chen, "Synchronization of linearly coupled networks with delays via aperiodically intermittent pinning control," IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 10, pp. 2396-2407, 2015.
+
+[15] N. Xavier and B. Bandyopadhyay, "Practical sliding mode using state depended intermittent control," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 68, no. 1, pp. 341-345, 2020.
+
+[16] R. Tang, X. Yang, P. Shi, et al., "Finite-time ${\mathcal{L}}_{2}$ stabilization of uncertain delayed T-S fuzzy systems via intermittent control," IEEE Trans. Fuzzy Syst., vol. 32, no. 1, pp. 116-125, 2024.
+
+[17] Y. Zou, E. Tian, and H. Chen, "Finite-time synchronization of neutral-type coupled systems via event-triggered control with controller failure," IEEE Trans. Control Network Syst., DOI: 10.1109/TCNS.2023.3336594, 2023.
+
+[18] X. Geng, J. Feng, N. Li, et al., "Finite-time stochastic synchronization of multiweighted directed complex networks via intermittent control," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 70, no. 8, pp. 2964-2968, 2023.
+
+[19] C.-H. Yan, B. Liu, P. Xiao, et al., "Stabilization of load frequency control system via event-triggered intermittent control," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 69, no. 12, pp. 4934-4938, 2022.
+
+[20] B. Liu, T. Liu, and P. Xiao, "Dynamic event-triggered intermittent control for stabilization of delayed dynamical systems," Automatica, vol. 149, p. 110847, 2023.
+
+[21] G. Yang, F. Hao, L. Zhang, et al., "Stabilization for positive linear systems: A novel event-triggered mechanism," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 71, no. 3, pp. 1231-1235, 2024.
+
+[22] A. F. Filippov, "Differential equations with discontinuous right-hand side," Matematicheskii sbornik, vol. 93, no. 1, pp. 99-128, 1960.
+
+[23] X. Yang, J. Cao, and J. Qiu, "Pth moment exponential stochastic synchronization of coupled memristor-based neural networks with mixed delays via delayed impulsive control," Neural Netw., vol. 65, pp. 80-91, 2015.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/AQH0VuK6rp/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/AQH0VuK6rp/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..790dc835bf8a6a4b4751b31c913473134312ba1d
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/AQH0VuK6rp/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,374 @@
+§ SYNCHRONIZATION OF COUPLED DELAYED DISCONTINUOUS SYSTEMS VIA EVENT-TRIGGERED INTERMITTENT CONTROL
+
+${1}^{\text{ st }}$ Rongqiang Tang
+
+College of Electronics and Information Engineering
+
+Sichuan University
+
+Chengdu, Sichuan
+
+tangrongqiang@stu.scu.edu.cn
+
+${2}^{\text{ nd }}$ Xinsong Yang*
+
+College of Electronics and Information Engineering
+
+Sichuan University
+
+Chengdu, Sichuan
+
+xinsongyang@scu.edu.cn
+
+Abstract-This talk focuses on the complete synchronization of coupled delayed discontinuous systems (DDSs). Without constraints on the derivatives of time delays, several new conditions are exploited to guarantee the global existence of Filippov solutions for DDSs. A nonsmooth intermittent control combined with an event-triggering strategy is then designed. The conspicuous feature of this control scheme is that the measurement error in the event-triggering mechanism is formulated as a linear form, which can reduce computation burden compared to classical approaches. To address the challenges posed by Filippov solutions and intermittent control, novel analytical techniques, including an original lemma and a weighted-norm-based Lyapunov function, are developed so that sufficient synchronization conditions for DDSs are obtained. Finally, the effectiveness of the theoretical findings is confirmed by Hopfield neural networks.
+
+Index Terms-Discontinuous systems, event-triggered intermittent control, Filippov solution, synchronization, time delays.
+
+§ I. INTRODUCTION
+
+Coupled discontinuous systems (DSs), modeled by some interconnected differential equations with discontinuous righthand sides, are a special type of complex network. Their applications span various areas of applied science and engineering, such as variable structure systems, neural networks [1], control synthesis [2], etc. Recently, there has been substantial attention on the dynamic behaviors of DSs with or without time delays, covering stability, stabilization, and synchronization [3]-[5].
+
+Considering the discontinuities of the states on the righthand side of DSs, especially delayed DSs (DDSs), it is paramount to discuss the existence of Filippov solutions. Some limitations on time delays are necessary to ensure the existence of Filippov solutions for DDSs. For example, literature [1] considered DDSs with constant delays. Liu et al. [6] demanded that the state variables with time delays satisfy $\parallel z\left( {t - \sigma \left( t\right) }\right) \parallel \leq \parallel z\left( t\right) \parallel + \mathop{\max }\limits_{{1 \leq i \leq n}}\mathop{\max }\limits_{{-\sigma \leq s \leq 0}}\left\{ {{z}_{i}\left( s\right) }\right\} ,$ where $z\left( t\right) \in {\mathbb{R}}^{n}$ is the state variable and $\sigma \left( t\right) \in \left\lbrack {0,\sigma }\right\rbrack$ is the time delay. Yang et al. [7], [8] provided sufficient criteria for the existence of global Filippov solutions for DDSs, based on the condition that the derivatives of time delays are less than 1. However, in reality, the derivatives of some time delays can exceed or equal 1, and even be non-differentiable in some cases. A fundamental question arises: What conditions guarantee the existence of Filippov solutions for DDSs when these constraints are removed?
+
+To study the synchronization of coupled DDSs (CDDSs), the basic idea is to transform CDDSs into uncertain systems using Filippov regularization and the measurable selection theorem, and then to address the corresponding issues for the uncertain systems [8]. Quasi-synchronization criteria for CDDSs have been obtained via smooth state feedback control [6], [9]. A nonsmooth control incorporating sign functions was proposed to achieve complete synchronization of CDDSs [7], where the sign function is used to mitigate the effects of uncertainties caused by Filippov solutions. Subsequent results on exponential, finite-time, and fixed-time synchronization of CDDSs have been published in [10]-[13]. However, little work has been done to achieve the complete synchronization of CDDSs via intermittent control. Actually, intermittent control offers better robustness and lower control cost than continuous control, as control signals can be artificially interrupted without affecting the final control purposes [14]-[18]. If intermittent control is adopted for complete synchronization of CDDSs, the main obstacle is that the uncertainties posed by Filippov solutions are difficult to cancel out during the interrupted intervals of control signals. So, how to develop new analytical methods to study the complete synchronization of CDDSs with intermittent control is another motivation.
+
+Event-triggered control has recently sparked increasing interest due to its ability to reduce computational overhead by updating the sampled signal based on a preset supervision mechanism [19]-[21]. To fully leverage the merits of the event-triggered strategy and intermittent control, this paper considers the complete synchronization of general CDDSs via a novel event-triggered intermittent control. The primary contributions of this work are:
+
+1) The existence of Filippov solutions of DDSs is discussed. Different from existing papers [1], [6]-[8], several harsh restrictions on delays are removed.
+
+2) A novel lemma is developed to address the difficulties induced by intermittent control. Then, complete synchronization criteria for CDDSs with intermittent control are obtained for the first time.
+
+This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant Nos. 62373262 and 62303336, and in part by the Central guiding local science and technology development special project of Sichuan, and in part by the Fundamental Research Funds for Central Universities under Grant No. 2022SCU12009, and in part by the Sichuan Province Natural Science Foundation of China (NSFSC) under Grant Nos. 2022NSFSC0541, 2022NSFSC0875, 2023NSFSC1433.(Corresponding Author: Xinsong Yang)
+
+3) A simple robust intermittent control scheme is designed by combining an event-triggered strategy with nonsmooth control. Unlike many event-triggered nonsmooth controls [12], [17], the measurement error (ME) in a linear form for the event-triggering mechanism (ETM) is considered, which facilitates easy computation (see Table I).
+
+Notation: Let ${\mathcal{D}}^{ + }\left\lbrack \cdot \right\rbrack$ be the upper right Dini derivative operator. ${\mathbb{N}}_{k}^{j} \triangleq \{ k,k + 1,\ldots ,j\}$ with $k < j \in \mathbb{N}$ , and $\operatorname{dg}\left( \cdot \right)$ denotes the block-diagonal matrix. For $a \in {\mathbb{R}}^{n}$ , let $\operatorname{cl}{\left( {a}_{i}\right) }_{n} = {\left( {a}_{1},{a}_{2},\ldots ,{a}_{n}\right) }^{\top }$ , $\operatorname{dg}{\left( {a}_{i}\right) }_{n} = \operatorname{diag}\left( {{a}_{1},{a}_{2},\ldots ,{a}_{n}}\right)$ , and $\operatorname{sg}\left( a\right) = \frac{a}{\parallel a\parallel }$ if $\parallel a\parallel \neq 0$ , otherwise $\operatorname{sg}\left( a\right) = 0$ . The other notations used in this paper are the same as those in [16].
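
The operators $\operatorname{sg}\left( \cdot \right)$ , $\operatorname{cl}{\left( \cdot \right) }_{n}$ , and $\operatorname{dg}{\left( \cdot \right) }_{n}$ translate directly into code; a minimal sketch (the helper names are ours, not standard):

```python
import numpy as np

def sg(a):
    """sg(a) = a/||a|| when ||a|| != 0, otherwise 0."""
    n = np.linalg.norm(a)
    return a / n if n > 0 else np.zeros_like(a)

def cl(blocks):
    """cl((a_i)_n): stack the given blocks into one column vector."""
    return np.concatenate([np.atleast_1d(np.asarray(b, dtype=float)) for b in blocks])

def dg(blocks):
    """dg((a_i)_n): block-diagonal matrix assembled from the given blocks."""
    blocks = [np.atleast_2d(np.asarray(b, dtype=float)) for b in blocks]
    rows = sum(b.shape[0] for b in blocks)
    cols = sum(b.shape[1] for b in blocks)
    out = np.zeros((rows, cols))
    r = c = 0
    for b in blocks:
        out[r:r + b.shape[0], c:c + b.shape[1]] = b
        r += b.shape[0]
        c += b.shape[1]
    return out

# sg gives the unit vector in the direction of a (and 0 at the origin):
print(sg(np.array([3.0, 4.0])))           # [0.6 0.8]
# dg builds stacked gain matrices of the form dg(K_s)_ell:
K = dg([np.eye(2), 2.0 * np.eye(2)])
print(K.shape)                            # (4, 4)
```

These helpers are what the later stacked quantities (e.g. $\mathcal{K} = \operatorname{dg}{\left( {K}_{s}\right) }_{\ell }$ ) reduce to numerically.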
+
+§ II. PRELIMINARIES
+
+In this paper, the problem of synchronization and control in an array of coupled DDSs is considered. Before starting the research works, several necessary preparations on the solution of DDSs and stability theorem are provided.
+
+§ A. FILIPPOV SOLUTION OF DDSS
+
+Consider a DDS as follows:
+
+$$
+\dot{z}\left( t\right) = F\left( {z,{z}_{\sigma }}\right) ,z\left( o\right) = \tau \left( o\right) \in \mathcal{C}\left( {\left\lbrack {-\sigma ,0}\right\rbrack ,{\mathbb{R}}^{n}}\right) . \tag{1}
+$$
+
+Here $F\left( {z,{z}_{\sigma }}\right) \triangleq {Cz}\left( t\right) + {Ah}\left( {z\left( t\right) }\right) + {Bg}\left( {z\left( {t - \sigma \left( t\right) }\right) }\right) ,z\left( t\right) \in$ ${\mathbb{R}}^{n}$ denotes the state vector, $\sigma \left( t\right) \in \left\lbrack {0,\sigma }\right\rbrack$ is the bounded delay, $C,A = {\left( {a}_{ij}\right) }_{n \times n}$ , and $B = {\left( {b}_{ij}\right) }_{n \times n} \in {\mathbb{R}}^{n \times n}$ are known constant matrices, and the nonlinear functions $h\left( \cdot \right) ,g\left( \cdot \right) : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{n}$ are continuous except on a set of smooth hypersurfaces [7]. Given an initial value $z\left( o\right)$ for system (1), its trajectory can settle onto a desired state, such as an equilibrium point, a chaotic orbit, or a nontrivial periodic orbit.
+
+Due to the discontinuity of $\mathbf{a}\left( \cdot \right)$ with $\mathbf{a} = \{ h,g\}$ , classical solutions of DDS (1) do not exist. To further study the dynamical behaviors of DDS (1), this paper utilizes the framework of the Filippov solution, whose definition can be found in [6]-[8]. It is concluded that, for DDS (1), there exists a continuous function $z\left( t\right)$ on $\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack$ that is absolutely continuous on $\left\lbrack {0,\mathfrak{t}}\right\rbrack$ and satisfies
+
+$$
+\dot{z}\left( t\right) = \mathbb{F}\left( {z,\gamma ,{\zeta }_{\sigma }}\right) ,\text{ a.a. }t \in \left\lbrack {0,\mathrm{t}}\right\rbrack , \tag{2}
+$$
+
+where $\mathbb{F}\left( {z,\gamma ,{\zeta }_{\sigma }}\right) = {Cz}\left( t\right) + {A\gamma }\left( t\right) + {B\zeta }\left( {t - \sigma \left( t\right) }\right) ,\gamma \left( t\right) \in$ $\mathrm{F}\{ h\left( {z\left( t\right) }\right) \}$ and $\zeta \left( {t - \sigma \left( t\right) }\right) \in \mathrm{F}\{ g\left( {z\left( {t - \sigma \left( t\right) }\right) }\right) \}$ are measurable functions, and $\mathrm{F}\{ \cdot \}$ is the Filippov set-valued map [22].
+
+For the Cauchy problem of DDS (1) in the sense of Filippov, this implies that there is a triple of functions $\left( {z\left( t\right) ,\gamma \left( t\right) ,\zeta \left( t\right) }\right)$ : $\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack \rightarrow {\mathbb{R}}^{n} \times {\mathbb{R}}^{n} \times {\mathbb{R}}^{n}$ such that $z\left( t\right)$ is a Filippov solution on $\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack$ with $\mathfrak{t} > 0$ and
+
+$$
+\left\{ \begin{array}{l} \dot{z}\left( t\right) = \mathbb{F}\left( {z,\gamma ,{\zeta }_{\sigma }}\right) ,\text{ a.a. }t \in \left\lbrack {0,\mathrm{t}}\right\rbrack , \\ \gamma \left( s\right) = \zeta \left( s\right) = \mathrm{F}\{ \phi \left( s\right) \} ,\text{ a.a. }s \in \left\lbrack {-\sigma ,0}\right\rbrack , \\ z\left( s\right) = \varphi \left( s\right) ,\forall s \in \left\lbrack {-\sigma ,0}\right\rbrack , \end{array}\right. \tag{3}
+$$
+
+where $\varphi \left( t\right)$ is a continuous function on $\left\lbrack {-\sigma ,0}\right\rbrack$ and $\phi \left( t\right)$ is a measurable selection function.
+
+The following lemma provides some mild conditions to ensure the existence of Filippov solutions for DDS (1).
+
+Lemma 1: Suppose that $\mathrm{a}\left( 0\right) = 0,\mathrm{a} = \{ h,g\}$ and there exist constants ${d}_{rj}^{\mathrm{a}} \geq 0$ and ${d}_{r}^{\mathrm{a}} \geq 0$ such that, for $\forall \mathrm{x} =$ $\operatorname{cl}{\left( {x}_{i}\right) }_{n},\mathbf{y} = \operatorname{cl}{\left( {y}_{i}\right) }_{n} \in {\mathbb{R}}^{n}$ ,
+
+$\left( {\mathbf{A}}_{1}\right) : \left| {{\mathbf{a}}_{r}\left( \mathbf{x}\right) - {\mathbf{a}}_{r}\left( \mathbf{y}\right) }\right| \leq \mathop{\sum }\limits_{{j = 1}}^{n}{d}_{rj}^{\mathbf{a}}\left| {{x}_{j} - {y}_{j}}\right| + {\widehat{d}}_{r}^{\mathbf{a}},r \in {\mathbb{N}}_{1}^{n}$ . Then, there is at least one Filippov solution $z\left( t\right)$ to DDS (1) on $\lbrack 0, + \infty )$ .
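
Assumption $\left( {\mathbf{A}}_{1}\right)$ is a Lipschitz-type growth condition with a constant slack ${\widehat{d}}_{r}^{\mathbf{a}}$ that absorbs the jump of the discontinuous nonlinearity. As an illustrative check (random sampling, not a proof), a tanh activation with a jump at the origin, like the one used in the numerical example, satisfies $\left( {\mathbf{A}}_{1}\right)$ with ${d}_{rr}^{h} = 1$ and ${\widehat{d}}_{r}^{h} = {0.03}$ :

```python
import numpy as np

rng = np.random.default_rng(0)

def h(z):
    """tanh activation with a jump at the origin (value at z_i = 0 taken as 0)."""
    return np.where(z > 0, np.tanh(z) + 0.01,
           np.where(z < 0, np.tanh(z) - 0.02, 0.0))

# (A1) componentwise: |h_r(x) - h_r(y)| <= d_rr |x_r - y_r| + d_r^h,
# with d_rr = 1 (tanh is 1-Lipschitz) and d_r^h = 0.03 (the total jump size).
x = rng.uniform(-5.0, 5.0, size=(100000, 2))
y = rng.uniform(-5.0, 5.0, size=(100000, 2))
gap = np.abs(h(x) - h(y)) - np.abs(x - y) - 0.03
print(gap.max() <= 1e-12)   # True: (A1) holds on all sampled pairs
```

The slack term is what distinguishes $\left( {\mathbf{A}}_{1}\right)$ from an ordinary Lipschitz condition and is exactly the source of the uncertainty terms ${\mathbf{a}}_{h},{\mathbf{b}}_{g}$ appearing later in Theorem 1.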
+
+Proof: The proof is similar to those in [7], [8] with slight changes; namely, the Cauchy problem in (3) is transformed into a fixed-point problem.
+
+Define a map $\mathbb{G}\left( z\right) : \mathcal{C}\left( {\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack ,{\mathbb{R}}^{n}}\right) \rightarrow \mathcal{C}\left( {\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack ,{\mathbb{R}}^{n}}\right)$ as:
+
+$$
+\mathbb{G}\left( z\right) = \begin{cases} {e}^{Ct}z\left( 0\right) + {\int }_{0}^{t}{e}^{C\left( {t - s}\right) }\left\lbrack {A\,\mathrm{F}\{ h\left( {z\left( s\right) }\right) \} + B\,\mathrm{F}\{ g\left( {z\left( {s - \sigma \left( s\right) }\right) }\right) \} }\right\rbrack \mathrm{d}s, & t \in \left\lbrack {0,\mathfrak{t}}\right\rbrack ,\mathfrak{t} > 0, \\ \varphi \left( s\right) , & \forall s \in \left\lbrack {-\sigma ,0}\right\rbrack . \end{cases} \tag{4}
+$$
+
+It holds that $\mathbb{G}\left( z\right)$ is completely continuous and upper semicontinuous with convex closed values. Further, the solutions of the Cauchy problem (3) are the fixed points of $\mathbb{G}\left( z\right)$ .
+
+By $\left( {\mathbf{A}}_{1}\right)$ , the set $\Omega = \left\{ {z \in \mathcal{C}\left( {\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack ,{\mathbb{R}}^{n}}\right) : {\lambda z} \in \mathbb{G}\left( z\right) ,\lambda > }\right.$ $1\}$ is non-empty. Next, let us prove that the set $\Omega$ is bounded.
+
+For $z \in \Omega$ , it holds that ${\lambda z} \in \mathbb{G}\left( z\right)$ for $\lambda > 1$ . So, there are $\gamma \left( t\right) \in \mathrm{F}\{ h\left( {z\left( t\right) }\right) \}$ and $\zeta \left( {t - \sigma \left( t\right) }\right) \in \mathrm{F}\{ g\left( {z\left( {t - \sigma \left( t\right) }\right) }\right) \}$ such that
+
+$$
+z\left( t\right) = \frac{1}{\lambda }\left\lbrack {z\left( 0\right) {e}^{Ct} + {\int }_{0}^{t}{e}^{C\left( {t - s}\right) }\mathbb{c}\left( s\right) \mathrm{d}s}\right\rbrack ,\text{ a.a. }t \in \left\lbrack {0,\mathrm{t}}\right\rbrack , \tag{5}
+$$
+
+where $\mathbb{c}\left( s\right) = {A\gamma }\left( s\right) + {B\zeta }\left( {s - \sigma \left( s\right) }\right)$ .
+
+In view of $\left( {\mathbf{A}}_{1}\right)$ , there exist constants ${D}_{\mathbf{a}}$ and ${d}_{\mathbf{a}},\mathbf{a} = \{ h,g\}$ , such that
+
+$$
+\parallel \mathbb{c}\left( t\right) \parallel \leq {D}_{h}\parallel A\parallel \parallel z\left( t\right) \parallel + {D}_{g}\parallel B\parallel \parallel z\left( {t - \sigma \left( t\right) }\right) \parallel + \mathbb{d}, \tag{6}
+$$
+
+where $\mathbb{d} = \left( {{d}_{h}\parallel A\parallel + {d}_{g}\parallel B\parallel }\right)$ and $\mathbb{a} = \{ h,g\}$ . Considering inequalities (5) and (6), it follows that
+
+$$
+\parallel z\left( t\right) \parallel \leq {e}^{\parallel C\parallel t}\left\lbrack {\mathbb{y}\left( t\right) + {D}_{h}\parallel A\parallel {\int }_{0}^{t}{e}^{-\parallel C\parallel s}\parallel z\left( s\right) \parallel \mathrm{d}s + {D}_{g}\parallel B\parallel {\int }_{0}^{t}{e}^{-\parallel C\parallel s}\parallel z\left( {s - \sigma \left( s\right) }\right) \parallel \mathrm{d}s}\right\rbrack ,\text{ a.a. }t \in \left\lbrack {0,\mathfrak{t}}\right\rbrack ,
+$$
+
+which implies that
+
+$$
+\mathbf{z}\left( t\right) \leq \mathbb{y}\left( t\right) + \mathcal{D}{\int }_{0}^{t}\mathbf{z}\left( s\right) \mathrm{d}s,\;\text{ a.a. }t \in \left\lbrack {0,\mathfrak{t}}\right\rbrack , \tag{7}
+$$
+
+where $\mathbf{z}\left( t\right) = {e}^{-\parallel C\parallel t}\mathop{\sup }\limits_{{\theta \in \left\lbrack {-\sigma ,t}\right\rbrack }}\parallel z\left( \theta \right) \parallel ,\mathcal{D} = \left( {{D}_{h}\parallel A\parallel + }\right.$ $\left. {{D}_{g}\parallel B\parallel }\right)$ , and $\mathbb{y}\left( t\right) = \parallel z\left( 0\right) \parallel + \frac{\mathrm{d}}{\parallel C\parallel }\left( {1 - {e}^{-\parallel C\parallel t}}\right)$ .
+
+Note that ${y}_{\max } = \parallel z\left( 0\right) \parallel + \frac{\mathbb{d}}{\parallel C\parallel }$ is an upper bound of $\mathbb{y}\left( t\right)$ on $\lbrack 0, + \infty )$ . Then, from inequality (7) and Gronwall's lemma, one has
+
+$$
+{e}^{-\parallel C\parallel t}\parallel z\left( t\right) \parallel \leq \mathbf{z}\left( t\right) \leq {y}_{\max }{e}^{\mathcal{D}t}\text{ , a.a. }t \in \left\lbrack {0,\mathrm{t}}\right\rbrack , \tag{8}
+$$
+
+which further implies that the set $\Omega$ is bounded on $\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack$ .
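+
+As a numerical sanity check on the Gronwall step from (7) to (8), the worst case of (7) (equality) can be integrated directly; the constants below are illustrative assumptions, not values from the paper.
+
+```python
+import math
+
+# Worst case of (7): z(t) = y_max + D * int_0^t z(s) ds, whose exact
+# solution is z(t) = y_max * exp(D * t); constants are illustrative.
+y_max, D, dt, T = 2.0, 0.7, 1e-4, 3.0
+
+z, integral = y_max, 0.0
+for i in range(int(T / dt)):
+    integral += z * dt          # left-endpoint quadrature of int_0^t z(s) ds
+    z = y_max + D * integral    # equality version of inequality (7)
+
+bound = y_max * math.exp(D * T)  # Gronwall bound (8)
+```
+
+The left-endpoint rule slightly underestimates the integral, so the computed $z$ stays just below the Gronwall bound, as (8) predicts.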
+
+${}^{1}\mathcal{C}\left( {\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack ,{\mathbb{R}}^{n}}\right)$ is the Banach space of the $n$ -dimensional vector-valued continuous functions defined on $\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack$ with norm defined by $\parallel x{\parallel }_{\infty } =$ $\sup \{ \parallel x\left( t\right) \parallel ,t \in \left\lbrack {-\sigma ,\mathrm{t}}\right\rbrack \}$ .
+
+From the discussions in [7], it is deduced that $\mathbb{G}\left( z\right)$ has a fixed point for every $\mathfrak{t} > 0$ , which implies that a Filippov solution to DDS (1) can be defined on $\lbrack 0, + \infty )$ .
+
+Remark 1: The delay $\sigma \left( t\right)$ in DDS (1) is merely required to be bounded, which is a milder condition than those in [1], [7], [8]. For instance, the existence of Filippov solutions for DDSs has been discussed in [1], [7], [8] under the condition that the delays are differentiable and their derivatives do not exceed 1. Moreover, the proof of Lemma 1 differs from that in [6]. The technique in [6] for handling the time delay relies on the inequality $\parallel z\left( {t - \sigma \left( t\right) }\right) \parallel \leq \mathop{\max }\limits_{{1 \leq i \leq n}}\mathop{\max }\limits_{{-\sigma \leq s \leq 0}}\left\{ {{z}_{i}\left( s\right) }\right\} + \parallel z\left( t\right) \parallel$ , which is a difficult condition to verify.
+
+§ B. STABILITY THEOREM OF DDSS
+
+Next, a lemma that can be used to realize synchronization of CDDSs with intermittent control is provided.
+
+Lemma 2: Given a time sequence ${\left\{ {t}_{\rho }\right\} }_{\rho = 0}^{\infty }$ with ${t}_{0} = 0$ , $\mathop{\lim }\limits_{{\rho \rightarrow + \infty }}{t}_{\rho } = + \infty$ , and $\mathop{\lim }\limits_{{\rho \rightarrow + \infty }}\sup \frac{{t}_{{2\rho } + 2} - {t}_{{2\rho } + 1}}{{t}_{{2\rho } + 2} - {t}_{2\rho }} = \phi \in$ (0,1), if there is a continuous and nonnegative function $w\left( t\right)$ with $t \in \lbrack - \sigma , + \infty )$ such that
+
+$$
+\left\{ \begin{array}{l} \dot{w}\left( t\right) \leq - {a}_{1}w\left( t\right) + b\bar{w}\left( t\right) - {c}_{1},\;t \in {\mathfrak{c}}_{\rho } = \left\lbrack {{t}_{2\rho },{t}_{{2\rho } + 1}}\right) , \\ \dot{w}\left( t\right) \leq {a}_{2}w\left( t\right) + b\bar{w}\left( t\right) + {c}_{2},\;t \in {\mathfrak{u}}_{\rho } = \left\lbrack {{t}_{{2\rho } + 1},{t}_{{2\rho } + 2}}\right) , \end{array}\right. \tag{9}
+$$
+
+then $w\left( t\right) < M{e}^{-\widetilde{\lambda }t}$ for $t \geq 0$ , where $\widetilde{\lambda } = \lambda - \left( {{a}_{1} + {a}_{2}}\right) \phi > 0,\rho \in \mathbb{N},M > 0,\bar{w}\left( t\right) = w\left( {t - \sigma \left( t\right) }\right) ,\lambda > 0$ is the unique solution of the transcendental equation ${a}_{1} - \lambda - b{e}^{\lambda \sigma } = 0$ , and the other parameters satisfy ${a}_{1} > b \geq 0,{c}_{1} = \left( {{a}_{1} - b}\right) d > 0$ , and ${c}_{2} = \left( {{a}_{2} + b}\right) d > 0$ .
+
+Proof: Let $h\left( t\right) = w\left( t\right) + d$ . Then, it holds that $\bar{h}\left( t\right) = \bar{w}\left( t\right) + d$ and $h\left( s\right) = w\left( s\right) + d > 0,s \in \left\lbrack {-\sigma ,0}\right\rbrack$ , and
+
+$$
+\left\{ \begin{array}{ll} \dot{h}\left( t\right) \leq - {a}_{1}h\left( t\right) + b\bar{h}\left( t\right) , & t \in {\mathfrak{c}}_{\rho }, \\ \dot{h}\left( t\right) \leq {a}_{2}h\left( t\right) + b\bar{h}\left( t\right) , & t \in {\mathfrak{u}}_{\rho }, \end{array}\right. \tag{10}
+$$
+
+Following the results of [14], one concludes from the definition of $h\left( t\right)$ and (10) that $w\left( t\right) < h\left( t\right) \leq \mathop{\sup }\limits_{{s \in \left\lbrack {-\sigma ,0}\right\rbrack }}\bar{h}\left( s\right) {e}^{-\widetilde{\lambda }t}$ . By defining $M = \mathop{\sup }\limits_{{s \in \left\lbrack {-\sigma ,0}\right\rbrack }}\bar{h}\left( s\right)$ , the proof is completed.
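+
+A minimal numerical sketch of Lemma 2 (all constants are illustrative assumptions satisfying ${a}_{1} > b \geq 0$ and $\widetilde{\lambda } > 0$ , not values from the paper): the worst case of (10) for $h\left( t\right) = w\left( t\right) + d$ is integrated under a periodic work/rest schedule, and $h$ decays as the lemma predicts.
+
+```python
+import math
+
+# Illustrative constants (assumptions): a1 > b >= 0, constant delay sigma.
+a1, a2, b, sigma = 4.0, 1.0, 0.5, 0.2
+
+# lambda > 0: unique root of a1 - lam - b*exp(lam*sigma) = 0, by bisection.
+lo, hi = 0.0, a1
+for _ in range(100):
+    mid = 0.5 * (lo + hi)
+    if a1 - mid - b * math.exp(mid * sigma) > 0:
+        lo = mid
+    else:
+        hi = mid
+lam = 0.5 * (lo + hi)
+
+phi = 0.1                          # rest fraction: work [k, k+0.9), rest [k+0.9, k+1)
+lam_tilde = lam - (a1 + a2) * phi  # claimed decay rate, must be positive
+
+# Euler integration of the worst case (equality) of (10).
+dt, T = 1e-3, 10.0
+n_delay = int(sigma / dt)
+hist = [1.0] * (n_delay + 1)       # h(s) = 1 on [-sigma, 0]
+h = 1.0
+for i in range(int(T / dt)):
+    hbar = hist[-(n_delay + 1)]    # h(t - sigma)
+    if (i * dt) % 1.0 < 0.9:       # work interval
+        h += (-a1 * h + b * hbar) * dt
+    else:                          # rest interval
+        h += (a2 * h + b * hbar) * dt
+    hist.append(h)
+```
+
+With these values $\lambda \approx {3.07}$ and $\widetilde{\lambda } \approx {2.57} > 0$ , and the simulated $h\left( {10}\right)$ is many orders of magnitude below $h\left( 0\right)$ , consistent with the bound $w\left( t\right) < M{e}^{-\widetilde{\lambda }t}$ .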
+
+§ C. RESEARCH PROBLEM
+
+This talk discusses the complete synchronization of coupled networks with $\ell$ DDSs (1) via an event-triggered intermittent controller. The coupled network is modeled as
+
+$$
+\left\{ \begin{array}{l} {\dot{x}}_{s}\left( t\right) = F\left( {{x}_{s},{x}_{s,\sigma }}\right) + \mathop{\sum }\limits_{{j = 1}}^{\ell }{u}_{sj}\Phi {x}_{j}\left( t\right) + {r}_{s}\left( t\right) , \\ {x}_{s}\left( \theta \right) = {\tau }_{s}\left( \theta \right) ,\theta \in \left\lbrack {-\sigma ,0}\right\rbrack ,{\tau }_{s} \in \mathcal{C}\left( {\left\lbrack {-\sigma ,0}\right\rbrack ,{\mathbb{R}}^{n}}\right) ,s \in {\mathbb{N}}_{1}^{\ell }, \end{array}\right. \tag{11}
+$$
+
+where ${x}_{s}\left( t\right) ,{r}_{s}\left( t\right) \in {\mathbb{R}}^{n}$ are the state variable and the control input, respectively, the outer-coupling matrix $U = {\left( {u}_{ij}\right) }_{\ell \times \ell }$ satisfies the diffusive condition, and $\Phi$ is the inner-coupling matrix. Similarly to (2), the CDDS (11) in the sense of Filippov is written as
+
+$$
+{\dot{x}}_{s}\left( t\right) = \mathbb{F}\left( {{x}_{s},{\gamma }_{s},{\zeta }_{s,\sigma }}\right) + \mathop{\sum }\limits_{{j = 1}}^{\ell }{u}_{sj}\Phi {x}_{j}\left( t\right) + {r}_{s}\left( t\right) , \tag{12}
+$$
+
+where $\mathbb{F}\left( {{x}_{s},{\gamma }_{s},{\zeta }_{s,\sigma }}\right) = C{x}_{s}\left( t\right) + A{\gamma }_{s}\left( t\right) + B{\zeta }_{s}\left( {t - \sigma \left( t\right) }\right)$ , ${\gamma }_{s}\left( t\right) \in \mathrm{F}\left\{ {h\left( {{x}_{s}\left( t\right) }\right) }\right\}$ and ${\zeta }_{s}\left( {t - \sigma \left( t\right) }\right) \in \mathrm{F}\left\{ {g\left( {{x}_{s}\left( {t - \sigma \left( t\right) }\right) }\right) }\right\}$ .
+
+Definition 1: The CDDS (11) is said to be globally exponentially synchronized with DDS (1) if, by designing suitable controllers ${r}_{s}\left( t\right) ,s \in {\mathbb{N}}_{1}^{\ell }$ , there exist $M \geq 0$ and $\alpha > 0$ such that $\parallel e\left( t\right) \parallel \leq M{e}^{-{\alpha t}}$ for $t \geq 0$ , where $e\left( t\right) = \operatorname{cl}{\left( {e}_{s}\left( t\right) \right) }_{\ell }$ , ${e}_{s}\left( t\right) = {x}_{s}\left( t\right) - z\left( t\right)$ .
+
+§ III. SYNCHRONIZATION OF CDDSS
+
+§ A. CONTROL DESIGN
+
+According to [8], the control goal presented in Definition 1 is equivalent to the same issue for the Filippov systems (2) and (12). Hence, the subsequent study directly addresses the synchronization issue of (2) and (12). In this talk, the new event-triggered intermittent control is designed as
+
+$$
+{r}_{s}\left( t\right) = \left\{ \begin{array}{l} - {K}_{s}{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) - {\xi }_{s}\operatorname{sg}\left( {{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\right) , \\ \;t \in {\mathfrak{c}}_{\rho } \cap \left\lbrack {{t}_{k}^{s,{2\rho }},{t}_{k + 1}^{s,{2\rho }}}\right) , \\ 0,t \in {\mathfrak{u}}_{\rho }, \end{array}\right. \tag{13}
+$$
+
+where ${\xi }_{s} > 0$ and ${K}_{s} \in {\mathbb{R}}^{n \times n}$ are the control gains, and ${t}_{k}^{s,{2\rho }}$ is the ${k}^{\text{th}}$ control signal update instant of subsystem $s$ , which is determined by the following event-triggered mechanism (ETM):
+
+$$
+{t}_{k + 1}^{s,{2\rho }} = \inf \left\{ {t > {t}_{k}^{s,{2\rho }} : \begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix} - {\kappa }_{s}\begin{Vmatrix}{{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\end{Vmatrix} > 0}\right\} , \tag{14}
+$$
+
+where ${t}_{0}^{s,{2\rho }} = {t}_{2\rho },{\theta }_{s}\left( t\right) = {e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) - {e}_{s}\left( t\right)$ is the measurement error (ME), and ${\kappa }_{s} \in \left( {0,1}\right)$ is the threshold value.
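+
+To illustrate how ETM (14) spaces out control updates, the sketch below applies the triggering rule to a hypothetical decaying, oscillating error trajectory (the trajectory, the value of ${\kappa }_{s}$ , and the sampling step are assumptions, not the paper's data); an event fires only when the ME exceeds the threshold.
+
+```python
+import math
+
+kappa = 0.15   # threshold kappa_s in (0, 1), assumed
+dt, T = 1e-3, 5.0
+n = int(T / dt)
+
+def e(t):
+    """Hypothetical error e_s(t): a decaying oscillation."""
+    r = math.exp(-0.8 * t)
+    return (r * math.cos(3.0 * t), r * math.sin(3.0 * t))
+
+def norm(v):
+    return math.hypot(v[0], v[1])
+
+e_held = e(0.0)   # last transmitted error e_s(t_k)
+triggers = [0.0]
+for i in range(1, n):
+    t = i * dt
+    et = e(t)
+    theta = (e_held[0] - et[0], e_held[1] - et[1])   # ME theta_s(t)
+    if norm(theta) > kappa * norm(e_held):           # triggering rule (14)
+        e_held = et
+        triggers.append(t)
+
+trigger_rate = len(triggers) / n
+```
+
+Only a small fraction of the sampling instants become triggering instants, which is exactly the resource saving the ETM is designed for.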
+
+Remark 2: The ME ${\theta }_{s}\left( t\right)$ in (14) is linear and demands less computing power than nonlinear ones, such as those in [11], [12], [17], which is further clarified in the numerical example section. In addition, the MEs in [11], [12], [17] are piecewise continuous, which introduces additional challenges in proving the exclusion of Zeno behavior. These challenges do not arise with a linear ME. Hence, event-triggered nonsmooth control with a linear ME is more practical.
+
+Considering system (2) and CDDSs (12) with controller (13), the error system is obtained as
+
+$$
+{\dot{e}}_{s}\left( t\right) = {\mathrm{F}}_{s}\left( t\right) ,t \in {\mathfrak{c}}_{\rho }, \tag{15a}
+$$
+
+$$
+{\dot{e}}_{s}\left( t\right) = {\widetilde{\mathrm{F}}}_{s}\left( t\right) ,t \in {\mathfrak{u}}_{\rho },\rho \in \mathbb{N}, \tag{15b}
+$$
+
+and its compact Kronecker product form is
+
+$$
+\dot{\mathbf{e}}\left( t\right) = \mathrm{F}\left( {\mathbf{e},\theta ,\mathrm{r},{\mathbf{c}}_{\sigma }}\right) ,t \in {\mathfrak{c}}_{\rho }, \tag{16a}
+$$
+
+$$
+\dot{\mathbf{e}}\left( t\right) = \widetilde{\mathrm{F}}\left( {\mathbf{e},\theta ,\mathbf{r},{\mathbf{c}}_{\sigma }}\right) ,t \in {\mathfrak{u}}_{\rho },\rho \in \mathbb{N}, \tag{16b}
+$$
+
+where ${\mathrm{F}}_{s}\left( t\right) = {\widetilde{\mathrm{F}}}_{s}\left( t\right) - {\xi }_{s}\operatorname{sg}\left( {{e}_{s}\left( t\right) + {\theta }_{s}\left( t\right) }\right) - {K}_{s}\left( {{e}_{s}\left( t\right) + {\theta }_{s}\left( t\right) }\right)$ , ${\widetilde{\mathrm{F}}}_{s}\left( t\right) = C{e}_{s}\left( t\right) + A{\mathrm{r}}_{s}\left( t\right) + B{\mathrm{c}}_{s}\left( {t - \sigma \left( t\right) }\right) + \mathop{\sum }\limits_{{j = 1}}^{\ell }{u}_{sj}\Phi {e}_{j}\left( t\right)$ , $\mathrm{F}\left( {\mathbf{e},\theta ,\mathbf{r},{\mathbf{c}}_{\sigma }}\right) = \widetilde{\mathrm{F}}\left( {\mathbf{e},\theta ,\mathbf{r},{\mathbf{c}}_{\sigma }}\right) - \mathcal{K}\left( {\mathbf{e}\left( t\right) + \theta \left( t\right) }\right) - \xi \operatorname{sg}\left( {\mathbf{e}\left( t\right) + \theta \left( t\right) }\right)$ , $\widetilde{\mathrm{F}}\left( {\mathbf{e},\theta ,\mathbf{r},{\mathbf{c}}_{\sigma }}\right) = \left( {\mathcal{C} + \mathcal{U}}\right) \mathbf{e}\left( t\right) + \mathcal{A}\mathbf{r}\left( t\right) + \mathcal{B}\mathbf{c}\left( {t - \sigma \left( t\right) }\right)$ , $\theta \left( t\right) = \operatorname{cl}{\left( {\theta }_{s}\left( t\right) \right) }_{\ell },\mathbf{r}\left( t\right) = \operatorname{cl}{\left( {\mathrm{r}}_{s}\left( t\right) \right) }_{\ell },{\mathrm{r}}_{s}\left( t\right) = {\gamma }_{s}\left( t\right) - \gamma \left( t\right)$ , $\operatorname{sg}\left( {\mathbf{e}\left( t\right) + \theta \left( t\right) }\right) = \operatorname{cl}{\left( \operatorname{sg}\left( {e}_{s}\left( t\right) + {\theta }_{s}\left( t\right) \right) \right) }_{\ell },\mathbf{c}\left( {t - \sigma \left( t\right) }\right) = \operatorname{cl}{\left( {\mathbf{c}}_{s}\left( t - \sigma \left( t\right) \right) \right) }_{\ell },{\mathbf{c}}_{s}\left( {t - \sigma \left( t\right) }\right) = {\zeta }_{s}\left( {t - \sigma \left( t\right) }\right) - \zeta \left( {t - \sigma \left( t\right) }\right)$ , $\mathcal{X} = {I}_{\ell } \otimes X,X \in \{ A,B,C\} ,\mathcal{U} = U \otimes \Phi ,\mathcal{K} = \operatorname{dg}{\left( {K}_{s}\right) }_{\ell }$ , and $\xi = \operatorname{dg}{\left( {\xi }_{s}{I}_{n}\right) }_{\ell }$ .
+
+§ B. SYNCHRONIZATION ANALYSIS
+
+The synchronization criteria are given below.
+
+Theorem 1: Assume that $\left( {\mathbf{A}}_{1}\right)$ holds. For given $\phi ,{\kappa }_{s} \in \left( {0,1}\right) ,{a}_{1} > b = \begin{Vmatrix}{\mathcal{B}}_{D}^{g}\end{Vmatrix}$ , and ${a}_{1} + {a}_{2} > 0$ , if there exist matrices $\mathcal{K} = \operatorname{dg}{\left( {K}_{s}\right) }_{\ell } \in {\mathbb{R}}^{\ell n \times \ell n}$ and $\Psi = \operatorname{dg}{\left( {\Psi }_{s}\right) }_{\ell } \in {\mathbb{D}}_{ + }^{\ell n \times \ell n}$ such that $\eta = \frac{{a}_{1} - b}{{a}_{2} + b}v > 0,{\zeta }_{s} = \frac{1 + {\widetilde{\kappa }}_{s}}{1 - {\widetilde{\kappa }}_{s}}\eta ,{\xi }_{s} = \frac{1 + {\widetilde{\kappa }}_{s}}{1 - {\widetilde{\kappa }}_{s}}v + {\zeta }_{s},s \in {\mathbb{N}}_{1}^{\ell }$ ,
+
+$$
+{\Omega }_{1} = \left( \begin{matrix} \operatorname{He}\left\lbrack {{\mathbb{A}}_{1} + {\mathcal{A}}_{D}^{h}}\right\rbrack + \widetilde{\Psi } & - \mathcal{K} \\ * & - \Psi \end{matrix}\right) < 0, \tag{17}
+$$
+
+$$
+{\Omega }_{2} = \operatorname{He}\left\lbrack {{\mathbb{A}}_{2} + {\mathcal{A}}_{D}^{h}}\right\rbrack < 0, \tag{18}
+$$
+
+then CDDS (11) with controller (13) is globally exponentially synchronized onto DDS (1), i.e., $\parallel e\left( t\right) \parallel \leq M{e}^{-\widetilde{c}t},\widetilde{c} = c -$ $\left( {{a}_{1} + {a}_{2}}\right) \phi > 0$ , where $c$ is the solution of ${a}_{1} - c - b{e}^{c\sigma } = 0,\phi$ is defined in Lemma 2, $M = \mathop{\sup }\limits_{{s \in \left\lbrack {-\sigma ,0}\right\rbrack }}\parallel \mathbf{e}\left( s\right) \parallel + \frac{v}{{a}_{2} + b},{\mathbb{A}}_{1} =$ $\mathcal{C} - \mathcal{K} + \mathcal{U} + {a}_{1}{I}_{\ell n},{\mathbb{A}}_{2} = \mathcal{C} + \mathcal{U} - {a}_{2}{I}_{\ell n},\widetilde{\Psi } = \operatorname{dg}{\left( {\widetilde{\kappa }}_{s}^{2}{\Psi }_{s}\right) }_{\ell },{\mathcal{A}}_{D}^{h} =$ ${I}_{\ell } \otimes {\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {a}_{ir}\right| {d}_{rj}^{h}\right) }_{n \times n},{\mathcal{B}}_{D}^{g} = {I}_{\ell } \otimes {\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {b}_{ir}\right| {d}_{rj}^{g}\right) }_{n \times n}$ , ${\mathbf{a}}_{h} = {\ell }^{\frac{1}{2}}\parallel \mathrm{{cl}}{\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {a}_{ir}\right| {\widehat{d}}_{r}^{h}\right) }_{n}\parallel ,{\mathbf{b}}_{g} = {\ell }^{\frac{1}{2}}\parallel \mathrm{{cl}}{\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {b}_{ir}\right| {\widehat{d}}_{r}^{g}\right) }_{n}\parallel ,$ ${\widetilde{\kappa }}_{s} = \frac{{\kappa }_{s}}{1 - {\kappa }_{s}}$ , and $v = {\mathbf{a}}_{h} + {\mathbf{b}}_{g}$ .
+
+Proof: Consider the Lyapunov function $V\left( t\right) = \parallel \mathbf{e}\left( t\right) \parallel$ .
+
+For $t \in {\mathfrak{c}}_{\rho },\rho \in \mathbb{N}$ , one derives from (16a) that
+
+$$
+{\mathcal{D}}^{ + }\left\lbrack {V\left( t\right) }\right\rbrack = \frac{2{\mathbf{e}}^{\mathrm{T}}\left( t\right) \mathbf{F}\left( {\mathbf{e},\theta ,\mathbf{r},{\mathbf{c}}_{\sigma }}\right) }{{2V}\left( t\right) }. \tag{19}
+$$
+
+It follows from $\left( {\mathbf{A}}_{1}\right)$ and the Cauchy-Schwarz inequality that
+
+$$
+{\mathbf{e}}^{\top }\left( t\right) \mathcal{A}\mathbf{r}\left( t\right) \leq {\mathbf{e}}^{\top }\left( t\right) {\mathcal{A}}_{D}^{h}\mathbf{e}\left( t\right) + {\mathbf{a}}_{h}\parallel \mathbf{e}\left( t\right) \parallel , \tag{20}
+$$
+
+$$
+{\mathbf{e}}^{\top }\left( t\right) \mathcal{B}\mathbf{c}\left( {t - \sigma \left( t\right) }\right) \leq \left( {b\parallel \mathbf{e}\left( {t - \sigma \left( t\right) }\right) \parallel + {\mathbf{b}}_{g}}\right) \parallel \mathbf{e}\left( t\right) \parallel . \tag{21}
+$$
+
+The ETM (14) means $\begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix} \leq {\widetilde{\kappa }}_{s}\begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix}$ and
+
+$$
+{\theta }^{\top }\left( t\right) {\Psi \theta }\left( t\right) \leq {\mathbf{e}}^{\top }\left( t\right) \widetilde{\Psi }\mathbf{e}\left( t\right) . \tag{22}
+$$
+
+Moreover, one has from $\begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix} \leq {\widetilde{\kappa }}_{s}\begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix}$ that
+
+$$
+{\mathbf{e}}^{\top }\left( t\right) \xi \operatorname{sg}\left( {\mathbf{e}\left( t\right) + \theta \left( t\right) }\right) \geq \mathop{\sum }\limits_{{s = 1}}^{\ell }\frac{{\xi }_{s}\begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix}\left( {\begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix} - \begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix}}\right) }{\begin{Vmatrix}{e}_{s}\left( t\right) + {\theta }_{s}\left( t\right) \end{Vmatrix}}
+$$
+
+$$
+\geq \mathop{\sum }\limits_{{s = 1}}^{\ell }\frac{{\xi }_{s}\left( {1 - {\widetilde{\kappa }}_{s}}\right) {\begin{Vmatrix}{e}_{s}\left( t\right) \end{Vmatrix}}^{2}}{\left( {1 + {\widetilde{\kappa }}_{s}}\right) \begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix}}
+$$
+
+$$
+\geq \left( {v + \eta }\right) \parallel \mathbf{e}\left( t\right) \parallel \text{ . } \tag{23}
+$$
+
+Substituting inequalities (20)-(23) into (19) yields
+
+$$
+{\mathcal{D}}^{ + }\left\lbrack {V\left( t\right) }\right\rbrack \leq \frac{{\varepsilon }^{\top }\left( t\right) {\Omega }_{1}\varepsilon \left( t\right) + {2bV}\left( t\right) V\left( {t - \sigma \left( t\right) }\right) }{{2V}\left( t\right) } - {a}_{1}V\left( t\right) - \eta , \tag{24}
+$$
+
+where $\varepsilon \left( t\right) = {\left( {e}^{\top }\left( t\right) ,{\theta }^{\top }\left( t\right) \right) }^{\top }$ . Then, condition (17) and inequality (24) ensure that
+
+$$
+{\mathcal{D}}^{ + }\left\lbrack {V\left( t\right) }\right\rbrack \leq - {a}_{1}V\left( t\right) + {bV}\left( {t - \sigma \left( t\right) }\right) - \eta . \tag{25}
+$$
+
+Similarly, for $t \in {\mathfrak{u}}_{\rho },\rho \in \mathbb{N}$ , one has from (16b) and (18) that
+
+$$
+{\mathcal{D}}^{ + }\left\lbrack {V\left( t\right) }\right\rbrack \leq {a}_{2}V\left( t\right) + {bV}\left( {t - \sigma \left( t\right) }\right) + v. \tag{26}
+$$
+
+Then, from Lemma 2 and inequalities (25)-(26), the result of Theorem 1 can be obtained.
+
+Remark 3: Based on the novel nonsmooth event-triggered intermittent control (13) and Lemma 2, Theorem 1 presents complete synchronization criteria for CDDS (11). The result is quite general, since Theorem 1 allows the derivative of $\sigma \left( t\right)$ to be less than, equal to, or greater than 1, and even allows $\sigma \left( t\right)$ to be nondifferentiable. In particular, when the derivative of the delay $\sigma \left( t\right)$ exceeds 1, or the delay is nondifferentiable, Lyapunov-Krasovskii functional methods show limitations in achieving complete synchronization. The main reason is that many techniques for handling the time delay in Lyapunov-Krasovskii functional methods rely only on linear controls, which cannot achieve the complete synchronization of CDDS (11). Hence, a new analysis framework for studying the complete synchronization of CDDSs with intermittent control is proposed.
+
+Next, let us discuss the Zeno behavior of ETM (14).
+
+Theorem 2: Under the assumption and conditions of Theorem 1, the triggering instants generated by ETM (14) exclude Zeno behavior.
+
+Proof: For any $s \in {\mathbb{N}}_{1}^{\ell }$ and $t \in {\mathfrak{c}}_{\rho } \cap \left\lbrack {{t}_{k}^{s,{2\rho }},{t}_{k + 1}^{s,{2\rho }}}\right)$ , one has that
+
+$$
+{\mathcal{D}}^{ + }\left\lbrack \begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix}\right\rbrack \leq \begin{Vmatrix}{{\mathcal{D}}^{ + }\left\lbrack {{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) - {e}_{s}\left( t\right) }\right\rbrack }\end{Vmatrix} = \begin{Vmatrix}{{\dot{e}}_{s}\left( t\right) }\end{Vmatrix}. \tag{27}
+$$
+
+In view of Theorem 1, one concludes that there is a ${\mathrm{u}}_{s} > 0$ such that $\begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix} \leq {\mathrm{u}}_{s}$ . Then, one can obtain from error system (15a) and $\left( {\mathbf{A}}_{1}\right)$ that
+
+$$
+\begin{Vmatrix}{{\dot{e}}_{s}\left( t\right) }\end{Vmatrix} \leq {\vartheta }_{s} + \begin{Vmatrix}{K}_{s}\end{Vmatrix}\begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix}, \tag{28}
+$$
+
+where ${\vartheta }_{s} = \left( {\begin{Vmatrix}{C - {K}_{s}}\end{Vmatrix} + \begin{Vmatrix}{A}_{D}^{h}\end{Vmatrix} + \begin{Vmatrix}{B}_{D}^{g}\end{Vmatrix}}\right) {\mathrm{u}}_{s} + v + {\xi }_{s} + 2\left| {u}_{ss}\right| \parallel \Phi \parallel \mathop{\sum }\limits_{{j = 1}}^{\ell }{\mathrm{u}}_{j},{A}_{D}^{h} = {\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {a}_{ir}\right| {d}_{rj}^{h}\right) }_{n \times n}$ , and ${B}_{D}^{g} = {\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {b}_{ir}\right| {d}_{rj}^{g}\right) }_{n \times n}$ .
+
+One has from inequalities (27)-(28) and $\begin{Vmatrix}{{\theta }_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\end{Vmatrix} = 0$ that $\begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix} \leq \frac{{\vartheta }_{s}}{\begin{Vmatrix}{K}_{s}\end{Vmatrix}}\left( {{e}^{\begin{Vmatrix}{K}_{s}\end{Vmatrix}\left( {t - {t}_{k}^{s,{2\rho }}}\right) } - 1}\right)$ , that is, $\left( {t - {t}_{k}^{s,{2\rho }}}\right) \geq \frac{1}{\begin{Vmatrix}{K}_{s}\end{Vmatrix}}\ln \left( {\frac{\begin{Vmatrix}{K}_{s}\end{Vmatrix}}{{\vartheta }_{s}}\begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix} + 1}\right)$ . Note that the next event is not triggered until $\begin{Vmatrix}{{\theta }_{s}\left( {t}_{k + 1}^{s,{2\rho } - }\right) }\end{Vmatrix} = {\kappa }_{s}\begin{Vmatrix}{{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\end{Vmatrix}$ . Hence, the inequality above implies that $\left( {{t}_{k + 1}^{s,{2\rho }} - {t}_{k}^{s,{2\rho }}}\right) \geq \frac{\ln \left( {\frac{\begin{Vmatrix}{K}_{s}\end{Vmatrix}{\kappa }_{s}}{{\vartheta }_{s}}\begin{Vmatrix}{{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\end{Vmatrix} + 1}\right) }{\begin{Vmatrix}{K}_{s}\end{Vmatrix}} > 0.$
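+
+For concreteness, the positive lower bound on the inter-event time obtained above can be evaluated numerically; all constants below are illustrative assumptions, not the paper's values.
+
+```python
+import math
+
+K_norm = 14.0       # ||K_s|| (assumed)
+kappa_s = 0.15      # ETM threshold (assumed)
+vartheta_s = 50.0   # bound from (28) (assumed)
+e_norm = 1.0        # ||e_s(t_k^{s,2rho})|| (assumed)
+
+# Lower bound on t_{k+1} - t_k from the proof of Theorem 2.
+tau_min = math.log(K_norm * kappa_s * e_norm / vartheta_s + 1.0) / K_norm
+```
+
+Because the logarithm's argument exceeds 1 whenever $\begin{Vmatrix}{{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\end{Vmatrix} > 0$ , the bound is strictly positive, which is exactly what rules out Zeno behavior.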
+
+§ IV. NUMERICAL EXAMPLE
+
+This section utilizes the Hopfield neural network (HNN) with discontinuous activation functions to verify the effectiveness of our results. The circuit diagram of the HNN is shown in Fig. 1(a) with detailed explanations provided in [23]. By applying Kirchhoff's laws, the HNN can be represented as a DDS (1). Next, the parameters of the HNN, in the form of those in DDS (1), are selected for numerical simulation.
+
+Consider an HNN in the form of DDS (1) with $z\left( t\right) = {\left( {z}_{1}\left( t\right) ,{z}_{2}\left( t\right) \right) }^{\top }$ , $g\left( z\right) = {\left( {g}_{1}\left( {z}_{1}\right) ,{g}_{2}\left( {z}_{2}\right) \right) }^{\top },h\left( z\right) = {\left( {h}_{1}\left( {z}_{1}\right) ,{h}_{2}\left( {z}_{2}\right) \right) }^{\top },\sigma \left( t\right) = {0.65} + {0.35}\left| {\sin \left( t\right) }\right| ,C = \mathrm{{dg}}\left( {-{1.5}, - 1}\right) ,i = 1,2$ ,
+
+$$
+A = \left( \begin{matrix} 2 & - {0.1} \\ - {4.9} & 3 \end{matrix}\right) ,{g}_{i}\left( {z}_{i}\right) = \left\{ \begin{array}{l} \frac{\left| {{z}_{i} + 1}\right| - \left| {{z}_{i} - 1}\right| }{2} + {0.04},{z}_{i} > 0, \\ \frac{\left| {{z}_{i} + 1}\right| - \left| {{z}_{i} - 1}\right| }{2} - {0.01},{z}_{i} < 0, \end{array}\right.
+$$
+
+$$
+B = \left( \begin{matrix} - {1.5} & {0.1} \\ - {0.5} & - {0.5} \end{matrix}\right) ,{h}_{i}\left( {z}_{i}\right) = \left\{ \begin{array}{l} \tanh \left( {z}_{i}\right) + {0.01},{z}_{i} > 0, \\ \tanh \left( {z}_{i}\right) - {0.02},{z}_{i} < 0. \end{array}\right.
+$$
+
+It holds that $\mathbf{a}\left( \cdot \right) ,\mathbf{a} = \{ h,g\}$ meet $\left( {\mathbf{A}}_{1}\right)$ with ${d}_{11}^{\mathbf{a}} = {d}_{22}^{\mathbf{a}} = 1$ , ${d}_{12}^{\mathrm{a}} = {d}_{21}^{\mathrm{a}} = 0,{\widehat{d}}_{1}^{h} = {\widehat{d}}_{2}^{h} = {0.03}$ , and ${\widehat{d}}_{1}^{g} = {\widehat{d}}_{2}^{g} = {0.05}$ .
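+
+These constants can be spot-checked numerically (a random-sampling sketch, not a proof; the value at $z = 0$ is taken as the right-hand branch):
+
+```python
+import math
+import random
+
+def g(z):
+    sat = (abs(z + 1) - abs(z - 1)) / 2   # piecewise-linear saturation
+    return sat + 0.04 if z >= 0 else sat - 0.01
+
+def h(z):
+    return math.tanh(z) + 0.01 if z >= 0 else math.tanh(z) - 0.02
+
+random.seed(0)
+ok = True
+for _ in range(10_000):
+    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
+    # (A1) with d_11 = d_22 = 1, d_12 = d_21 = 0, dhat^g = 0.05, dhat^h = 0.03
+    ok &= abs(g(x) - g(y)) <= abs(x - y) + 0.05 + 1e-12
+    ok &= abs(h(x) - h(y)) <= abs(x - y) + 0.03 + 1e-12
+```
+
+Every sampled pair satisfies the bounds, since both activations are 1-Lipschitz apart from jumps of size at most 0.05 and 0.03, respectively.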
+
+
+Fig. 1: (a) Circuit diagram of the HNN and coupling topology; (b) Trajectories of DDS (1) and CDDS (11) without controller.
+
+Now, consider that the coupled system (11) is composed of three DDSs (1), where $\Phi = \operatorname{dg}\left( {2,1}\right)$ and $U = {\left( {u}_{ij}\right) }_{3 \times 3}$ is the Laplacian matrix of the digraph shown in Fig. 1(a). When the initial values of DDS (1) and CDDS (11) are randomly chosen on $\left\lbrack {-5,5}\right\rbrack$ for $t \in \left\lbrack {-1,0}\right\rbrack$ , their trajectories are given in Fig. 1(b), from which one can see that synchronization cannot be realized without control.
+
+By taking ${a}_{1} = {4.6},{a}_{2} = {3.88},{\kappa }_{1} = {0.12},{\kappa }_{2} = {0.17}$ , and ${\kappa }_{3} = {0.15}$ , one obtains $b = {1.603},{\xi }_{1} = {1.197}$ , ${\xi }_{2} = {1.378},{\xi }_{3} = {1.299}$ , and $\phi = {0.1002}$ . Solving conditions (17) and (18) yields ${K}_{1} = \left( \begin{matrix} {11.480} & {3.759} \\ {3.759} & {13.908} \end{matrix}\right) ,{K}_{2} = \left( \begin{matrix} {11.690} & {3.815} \\ {3.815} & {14.139} \end{matrix}\right) ,{K}_{3} = \left( \begin{matrix} {11.744} & {3.854} \\ {3.854} & {14.236} \end{matrix}\right)$ . Hence, the conditions of Theorem 1 hold, that is, CDDS (11) with controller (13) can be synchronized onto DDS (1). Fig. 2(a) shows the evolution of the error trajectories of (11) and (1) when the work intervals of controller (13) are $\lbrack 0,{0.5}) \cup \lbrack {0.5},{0.7}) \cup \lbrack {0.7},{1.6}) \cup \lbrack {1.6},{1.65}) \cup \lbrack {1.65},{2.55}) \cup \lbrack {2.55},{2.68}) \cup \lbrack {2.68},{3.98}) \cup \lbrack {3.98},4)\cdots$ . In addition, the triggering instants and intervals of the three subsystems are displayed in Fig. 2(b). One finds from Fig. 1(b) and Fig. 2 that the designed event-triggered controller (13) is both effective and resource-efficient.
+
+
+Fig. 2: (a) Error trajectories of DDS (1) and CDDS (11) with controller (13); (b) Triggering instants and intervals.
+
+Comparative Experiment: To demonstrate novelty 3), a comparative experiment with the ETMs in [11], [12], [17] is conducted, where the average running time (ART) and trigger rate (TR) are the measurement standards. The results are listed in Table I. In the simulation, the time-step size is 0.001, and a total of 12420 control signals are generated on $\left\lbrack {0,{15}}\right\rbrack$ . The experiment code runs on a computer with Windows 10, an Intel Core i5-10400 at 2.9 GHz, and 16 GB RAM. It is observed from Table I that ETM (14) not only shortens the average running time (the ARTs of [11], [12], [17] are ${52.78}\%$ longer) but also reduces the trigger frequency.
+
+TABLE I: ${\mathbf{{TR}}}^{1}$ and ${\mathbf{{ART}}}^{2}$ of ETM (14) and of [11], [12], [17].
+
+| Methods | (14) |  |  | [11], [12], [17] |  |  |
+| --- | --- | --- | --- | --- | --- | --- |
+| Nodes | 1 | 2 | 3 | 1 | 2 | 3 |
+| TR (%) | 27.17 | 36.43 | 31.84 | 39.51 | 38.93 | 38.38 |
+| ART (sec) | 0.5214 |  |  | 0.7966 |  |  |
+
+${}^{1}$ TR $= \frac{\text{ The number of trigger releases }}{\text{ Total signals }}$ ; ${}^{2}$ ART is the average obtained from 10 runs of the code.
+
+§ V. CONCLUSION
+
+This talk has considered the complete synchronization of CDDSs under event-triggered intermittent control. By developing a new stability inequality and a weighted-norm-based Lyapunov function, sufficient synchronization conditions have been derived. Note that the results of this talk impose no restrictions on the derivative of the delay. Moreover, experiments have shown that the novel event-triggered control with a linear ME requires less computing power than those in existing works.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/C84NGKXzwB/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/C84NGKXzwB/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..95ba21133ea77733350e83635f6113cc4aa0288a
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/C84NGKXzwB/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,465 @@
+# MRBicopter: Modular Reconfigurable Transverse Tilt-rotor Bicopter System
+
+${1}^{\text{st }}$ Qianyao Pan
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu, China
+
+panqianyaoupc@163.com
+
+${2}^{\text{nd }}$ Xin Lu
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu, China
+
+luxin_uestc@163.com
+
+${3}^{\text{rd }}$ Weijun Yuan
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu, China
+
+ywj861087955@163.com
+
+${4}^{\text{th }}$ Fusheng Li*
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu, China
+
+lifusheng@uestc.edu.cn
+
+Abstract-This paper introduces a modular UAV (MRBicopter) that can realize structural combination and reconstruction. Each module contains a rotor-tilting structure and an active docking mechanism. By separating and combining submodules, the UAV's functions can match the requirements of different flight tasks in real time. First, we designed the mechanical actuator to allow physically connected assemblies to fly collaboratively. Second, according to the different reconstructed structures, we propose two generalized control strategies to realize independent attitude control through the reassignment of rotor speed and tilt angle. The feasibility of the mechanical design and control method is verified by simulation and ground experiments under ambient wind interference.
+
+Keywords—Reconfigurable and modular robots, bicopter, active docking mechanism, rotor tilting, wind interference, simulation.
+
+## I. INTRODUCTION
+
+In recent years, multi-rotor UAVs have received considerable attention due to their simplicity, agility, and versatility. Research on multi-rotor UAVs has extended to aerial maneuvering, collective behavior, multi-modal motion, and modular reconfigurable robots [1]-[5]. Among these directions, the advantages brought by the modular and reconfigurable capabilities of UAVs are increasingly evident. For example, in disaster relief, modular reconfigurable robots can adapt to different task scenarios through structural reconfiguration, such as cooperatively transporting large items [16] and performing search-and-rescue tracking in complex environments [17].
+
+Much prior work aims to improve the stability and safety of modular reconfigurable UAVs. Reference [6] designed an airborne self-assembling flying robot, ModQuad, which is composed of flexible flight modules and can move easily in a three-dimensional environment. For airborne real-time separation, reference [7] proposed a deformable multi-link aerial robot built from link modules with a 1-DOF thrust-vectoring mechanism, together with a transformation planning method that minimizes the required force/moment by accounting for the thrust-vector angle. Reference [8] proposed a magnet-based connection mechanism that uses a lightweight passive mechanism to dock and release in mid-air. For modular-UAV application scenarios, reference [9] proposed a self-assembling robot based on autonomous modules that can fly together and assemble into rectangular structures in the air. Reference [10] proposed a full-attitude geometric control algorithm for a synchronously tilting hexarotor that realizes flight at arbitrary angles at the cost of efficiency. Reference [11] designed a tilt-rotor UAV whose tilting mechanism suppresses power dissipation and offers a wider tilt range. Reference [12] designed a structure connecting two bicopter modules that can fly along a wall at any angle. Reference [13] proposed splitting a quadcopter UAV into two twin-rotor UAVs in mid-air in real time and developed the modular quadcopter SplitFlyer. Reference [14] developed a combinable and extensible tilt-rotor UAV (CEDTR) that can match different task scenarios by changing the combination and number of submodules. Reference [15] developed an airborne detachable quadrotor UAV suitable for narrow gaps, improving the environmental adaptability of reconfigurable UAVs.
+
+In this paper, we design a transverse tilt-rotor bicopter that can be combined and reconfigured, called the modular reconfigurable bicopter (MRBicopter), which can not only fly cooperatively in the single-module state but also be controlled in multi-module combined flight. The main contributions of this paper are threefold:
+
+1) A modular reconfigurable bicopter with a rotor vector-tilting structure and an active docking mechanism is designed and modeled, enabling structural reconfiguration to adapt to different task requirements.
+
+2) The UAV dynamics model is built, and the control distribution and controller design are completed, realizing both single-module control and full-degree-of-freedom control of the assembly.
+
+3) An environmental wind interference module is introduced into the simulation to bring the simulation results closer to reality.
+
+The remainder of this paper is organized as follows: Section II introduces the structure of the MRBicopter. Section III describes its modeling. Section IV presents the control distribution and controller design. Section V presents the simulation and test results. Conclusions are given in Section VI.
+
+---
+
+*Corresponding author.
+
+---
+
+
+
+Fig.1: MRBicopter mechanical structure. (a) rotor vector tilting structure, (b) electromagnetic docking mechanism, (c) submodule structure.
+
+## II. DESIGN
+
+## A. Rotor vector tilting structure
+
+The rotor axis of a traditional UAV is fixed, so the direction of the lift force cannot be changed. Here we adopt a rotor vector-tilting structure (Fig.1(a)). Each rotor can tilt around its arm shaft and is driven by a dedicated servo that controls the tilt angle. This structure increases the number of control inputs of the UAV assembly and enables full-degree-of-freedom control of the MRBicopter assembly.
+
+## B. Electromagnetic docking mechanism
+
+For the docking device between modular MRBicopters, traditional reconfigurable UAVs use permanent magnets (NdFeB). That scheme responds slowly during separation and easily causes instability. We therefore designed a multi-lock electromagnetic docking mechanism (Fig.1(b)). It uses circular electromagnets as the main actuators, switched on and off by program-controlled relays. It contains three locking nodes in total, each providing 5 kg of locking attraction.
+
+## C. Submodule structure
+
+The MRBicopter consists of two cross-mounted bicopter modules (Fig.1(c)). A single module can not only fly autonomously and cooperatively but also complete assembly reconstruction by magnetic attraction.
+
+## III. DYNAMICS
+
+## A. Establishment of the frame
+
+In this section, four frames are introduced to define the flight attitude of the MRBicopter (Fig.2). They are defined as follows.
+
+1) World frame ${W}_{E}$ . The world frame is a fixed inertial coordinate system.
+
+2) Assembly frame ${B}_{z}$ . The origin of ${B}_{z}$ is located at the center of mass of the assembly. Its position relative to the world frame is ${P}_{W} = {\left\lbrack \begin{array}{lll} {x}_{W} & {y}_{W} & {z}_{W} \end{array}\right\rbrack }^{T}$ ; its velocity is ${V}_{W} = {\left\lbrack \begin{array}{lll} {V}_{WX} & {V}_{WY} & {V}_{WZ} \end{array}\right\rbrack }^{T}$ ; the angular velocity of the assembly is $\Omega = {\left\lbrack \begin{array}{lll} {\omega }_{x} & {\omega }_{y} & {\omega }_{z} \end{array}\right\rbrack }^{T}$ ; and the attitude angle is $\Theta = {\left\lbrack \begin{array}{lll} \phi & \theta & \psi \end{array}\right\rbrack }^{T}$ , where $\phi$ is the roll angle, $\theta$ the pitch angle, and $\psi$ the yaw angle.
+
+
+
+Fig.2: MRBicopter frame system settings.
+
+3) Submodule frame ${B}_{i}$ . The origin of the submodule frame is located at the centroid of the submodule, with axes $\left\lbrack \begin{array}{lll} {X}_{bi} & {Y}_{bi} & {Z}_{bi} \end{array}\right\rbrack$ . The Euler angles in the submodule frame ${B}_{i}$ are ${\Theta }_{i} = {\left\lbrack \begin{array}{lll} {\phi }_{i} & {\theta }_{i} & {\psi }_{i} \end{array}\right\rbrack }^{T}$ .
+
+4) Rotor frame ${P}_{ij}$ . The origin of the rotor frame is located at the centroid of the rotor motor; the $z$ axis points in the rotor lift direction and the $x$ axis points toward the body centroid. The rotor tilt angle is denoted ${\alpha }_{ij}$ .
+
+## B. Derivation of Dynamics and Kinematic Model
+
+In this section we derive the attitude dynamics and kinematics equations of the MRBicopter, which are used for the control distribution and controller design in Section IV. The $i$-th submodule of the assembly has two rotors distributed along one axis. The rotor speed is denoted ${\varpi }_{ij}$ . The lift force and rotation torque generated by the $j$-th rotor of the module can be written as:
+
+$$
+{f}_{ij} = {K}_{T}{{\varpi }_{ij}}^{2} \tag{1}
+$$
+
+$$
+{\tau }_{ij} = {K}_{Q}{\varpi }_{ij}^{2} \tag{2}
+$$
+
+where ${K}_{T}$ and ${K}_{Q}$ are the rotor thrust and torque constants, respectively.
+
+In the assembly frame ${B}_{z}$ , the MRBicopter's total lift force ${F}_{B}$ is:
+
+$$
+{F}_{ij}^{B} = {f}_{ij}{}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) E
+$$
+
+$$
+{F}_{B} = \mathop{\sum }\limits_{{ij}}{F}_{ij}^{B} \tag{3}
+$$
+
+where $E = {\left\lbrack \begin{array}{lll} 0 & 0 & 1 \end{array}\right\rbrack }^{T}$ is the unit vector along the $z$ axis and ${}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) \in {SO}\left( 3\right)$ is the rotation matrix from the rotor frame ${P}_{ij}$ to the assembly frame ${B}_{z}$ , which satisfies:
+
+$$
+{}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) = {}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {B}_{i}\right\rbrack }{}^{\left\{ {B}_{i}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) \tag{4}
+$$
+
+where ${}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {B}_{i}\right\rbrack } \in {SO}\left( 3\right)$ is the rotation matrix from the submodule frame ${B}_{i}$ to the assembly frame ${B}_{z}$ , and ${}^{\left\{ {B}_{i}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) \in {SO}\left( 3\right)$ is the rotation matrix from the rotor frame ${P}_{ij}$ to the submodule frame ${B}_{i}$ , which satisfies:
+
+$$
+\left\{ \begin{array}{l} {}^{\left\{ {B}_{i}\right\} }{R}_{\left\lbrack {P}_{i1}\right\rbrack }\left( {\alpha }_{i1}\right) = R\left( {{\sigma }_{1},{\alpha }_{i1}}\right) \\ {}^{\left\{ {B}_{i}\right\} }{R}_{\left\lbrack {P}_{i2}\right\rbrack }\left( {\alpha }_{i2}\right) = R\left( {{\sigma }_{2},{\alpha }_{i2}}\right) \end{array}\right. \tag{5}
+$$
+
+$$
+R\left( {\sigma ,\alpha }\right) = \left\lbrack \begin{matrix} \cos \sigma & - \sin \sigma \cos \alpha & \sin \alpha \sin \sigma \\ \sin \sigma & \cos \sigma \cos \alpha & - \sin \alpha \cos \sigma \\ 0 & \sin \alpha & \cos \alpha \end{matrix}\right\rbrack \tag{6}
+$$
+
+where $\sigma$ is the angle between the arm axis and the $x$ axis. From the structure of the transverse twin-rotor UAV, ${\sigma }_{1} = - \pi /2$ and ${\sigma }_{2} = \pi /2$ .
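+As a sanity check, the composite rotation in Eq. (6) can be constructed numerically and verified to lie in $SO(3)$. The sketch below (plain Python; the angle values are illustrative only) builds $R(\sigma, \alpha)$ exactly as written above:
+
+```python
+import math
+
+def rot(sigma, alpha):
+    """Rotation matrix R(sigma, alpha) of Eq. (6): sigma is the arm-axis
+    angle, alpha the rotor tilt angle."""
+    cs, ss = math.cos(sigma), math.sin(sigma)
+    ca, sa = math.cos(alpha), math.sin(alpha)
+    return [
+        [cs, -ss * ca,  sa * ss],
+        [ss,  cs * ca, -sa * cs],
+        [0.0,      sa,       ca],
+    ]
+
+def matmul(a, b):
+    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
+            for i in range(3)]
+
+def transpose(a):
+    return [list(row) for row in zip(*a)]
+
+# For the transverse bicopter, sigma_1 = -pi/2 and sigma_2 = +pi/2.
+R = rot(-math.pi / 2, 0.3)
+
+# Orthonormality check: R^T R should equal the identity, so R is in SO(3).
+I = matmul(transpose(R), R)
+for i in range(3):
+    for j in range(3):
+        assert abs(I[i][j] - (1.0 if i == j else 0.0)) < 1e-12
+```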
+
+In the assembly frame ${B}_{z}$ , the rotor torque ${\tau }_{a}$ of the MRBicopter is:
+
+$$
+{\tau }_{a} = \mathop{\sum }\limits_{{ij}}{}^{\left\{ {B}_{z}\right\} }{p}_{\left\lbrack {P}_{ij}\right\rbrack } \times {F}_{ij}^{B} \tag{7}
+$$
+
+Due to air drag, the yaw moment $Q$ generated by the rotor propellers is:
+
+$$
+{Q}_{ij} = {\left( -1\right) }^{j - 1}{C}_{t}{\varpi }_{ij}^{2}E
+$$
+
+$$
+Q = \mathop{\sum }\limits_{{ij}}{}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) {Q}_{ij} \tag{8}
+$$
+
+Finally, the MRBicopter’s body torque $\tau$ can be written as:
+
+$$
+\tau = {\tau }_{a} + Q \tag{9}
+$$
+
+The dynamics equations of the MRBicopter are established using the Newton-Euler equations:
+
+$$
+\tau = {J}_{S}\dot{\Omega } + \Omega \times {J}_{S}\Omega
+$$
+
+$$
+\mathop{\sum }\limits_{i}{m}_{i}^{\left\{ {W}_{E}\right\} }{R}_{\left\lbrack {B}_{z}\right\rbrack }{\dot{V}}_{W} = {}^{\left\{ {W}_{E}\right\} }{R}_{\left\lbrack {B}_{z}\right\rbrack }{F}_{B} - \mathop{\sum }\limits_{i}{m}_{i}{gE} \tag{10}
+$$
+
+where ${m}_{i}$ is the mass of the submodule and ${J}_{S}$ is the total inertia matrix of the assembly. At the same time, a kinematic
+
+
+
+Fig.3: MRBicopter submodule(mode 1) and assembly(mode 2).
+
+model is established on this basis, in which the position kinematic equation is expressed as:
+
+$$
+{\dot{P}}_{W} = {V}_{W} \tag{11}
+$$
+
+The attitude kinematics equation is expressed as:
+
+$$
+\dot{\Theta } = {W}_{R} \cdot \Omega \tag{12}
+$$
+
+## IV. CONTROL
+
+Section IV introduces the controller design of the MRBicopter single module and assembly (Fig.3), the control distribution for the two flight modes, and the feedforward angle design for the assembly [18].
+
+## A. Controller design
+
+Fig.4 shows the structural block diagram of the MRBicopter controller. The architecture is based on a cascaded double closed-loop PID control law, with the position controller as the outer loop and the attitude controller as the inner loop. As shown in Fig.4(a), the in-flight control system of the MRBicopter submodule (mode 1) is underactuated, so we adopt a controller architecture similar to that of a traditional bicopter[19]. The control system of the MRBicopter assembly (mode 2) is overactuated and can hover at any pitch angle (Fig.4(b)).
+
+The flight controller is divided into four channels and outputs four control quantities ${T}_{1},{T}_{2},{T}_{3},{T}_{4}$ , which control the linear displacement and angular motion of the UAV dynamics model and decouple them from each other. The controller takes the expected position ${P}_{\text{des }} = {\left\lbrack \begin{array}{lll} X & Y & Z \end{array}\right\rbrack }^{T}$ and the expected yaw angle $\psi$ as control targets. ${K}_{P}^{P},{K}_{I}^{P},{K}_{D}^{P}$ are the proportional, integral, and derivative coefficients of the position loop, respectively. The position controller satisfies:
+
+$$
+\ddot{X} = {K}_{P}^{P}\left( {P - {P}_{des}}\right) + {K}_{I}^{P}{\int }_{0}^{t}\left( {P - {P}_{des}}\right) + {K}_{D}^{P}\frac{d\left( {\dot{P} - {\dot{P}}_{des}}\right) }{dt} \tag{13}
+$$
+
+The attitude controller takes the expected attitude angle ${\Theta }_{des} = {\left\lbrack \begin{array}{lll} \phi & \theta & \psi \end{array}\right\rbrack }^{T}$ as input and the control quantity $T = {\left\lbrack \begin{array}{lll} {T}_{2} & {T}_{3} & {T}_{4} \end{array}\right\rbrack }^{T}$ as output. ${K}_{P}^{\Theta },{K}_{I}^{\Theta },{K}_{D}^{\Theta }$ are the proportional, integral, and derivative coefficients of the attitude loop, respectively, satisfying:
+
+$$
+T = {K}_{P}^{\Theta }\left( {\Theta - {\Theta }_{des}}\right) + {K}_{I}^{\Theta }{\int }_{0}^{t}\left( {\Theta - {\Theta }_{des}}\right) + {K}_{D}^{\Theta }\frac{d\left( {\dot{\Theta } - {\dot{\Theta }}_{des}}\right) }{dt} \tag{14}
+$$
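+The cascade of Eqs. (13)-(14) can be sketched as two instances of one discrete PID loop, with the outer position loop feeding the inner attitude loop. This is a minimal illustration with placeholder gains and time step, not the tuned MRBicopter controller; the error here is written as (desired - actual), the usual sign convention:
+
+```python
+class PID:
+    """Minimal discrete PID used twice in the cascade: once for the
+    position (outer) loop, once for the attitude (inner) loop.
+    Gain values below are placeholders, not tuned for the MRBicopter."""
+    def __init__(self, kp, ki, kd, dt):
+        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
+        self.integral = 0.0
+        self.prev_err = 0.0
+
+    def step(self, desired, actual):
+        err = desired - actual            # error term of Eqs. (13)/(14)
+        self.integral += err * self.dt    # integral term
+        deriv = (err - self.prev_err) / self.dt  # derivative term
+        self.prev_err = err
+        return self.kp * err + self.ki * self.integral + self.kd * deriv
+
+# Cascade: the outer position loop produces the attitude setpoint for the
+# inner attitude loop, whose output is the control quantity T.
+pos_loop = PID(kp=1.2, ki=0.05, kd=0.4, dt=0.01)
+att_loop = PID(kp=4.0, ki=0.10, kd=0.8, dt=0.01)
+
+theta_des = pos_loop.step(desired=1.0, actual=0.0)  # outer loop
+T = att_loop.step(desired=theta_des, actual=0.0)    # inner loop
+```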
+
+
+
+Fig.4: MRBicopter structural block diagram of flight controller.
+
+## B. Tilt-angle feedforward initialization
+
+The feedforward initial-value calculation solves for the approximate rotor tilt angle when the MRBicopter assembly hovers at an arbitrary pitch angle, which effectively reduces the overshoot and response time of the position control. Here it is assumed that all rotor propellers produce the same lift when the assembly hovers; the hover pitch angle is $\theta$ and the initial feedforward value of the tilt angle is ${\alpha }_{\text{offset }}$ . As shown in Fig.5, the following force balance equations can be established:
+
+$$
+\mathop{\sum }\limits_{i}{m}_{i}g\cos \theta = \mathop{\sum }\limits_{{ij}}{F}_{ij}^{B}\cos \left( {\alpha }_{\text{offset }}^{ij}\right)
+$$
+
+$$
+\mathop{\sum }\limits_{i}{m}_{i}g\sin \theta = \mathop{\sum }\limits_{{ij}}{F}_{ij}^{B}\sin \left( {\alpha }_{\text{offset }}^{ij}\right) \tag{15}
+$$
+
+Since the resultant force in the $x$ and $y$ directions is zero when the MRBicopter hovers, the initial feedforward value of the tilt angle is obtained as:
+
+$$
+{\alpha }_{\text{offset }}^{ij} = \theta \tag{16}
+$$
+
+## C. Control distribution
+
+The control distribution module assigns the rotor throttle speed and the rotor tilt angle in real time according
+
+
+
+Fig.5: MRBicopter hover force analysis diagram with pitch angle.
+
+to the mode and flight condition of the UAV, thereby controlling its attitude.
+
+## 1) Submodule control distribution
+
+The MRBicopter submodule can be regarded as a transverse twin-rotor bicopter whose rotor tilt axes lie on the same line and whose rotors are symmetric. Reference [20] proposed a control method for a transverse dual-rotor UAV, and its control distribution can be transferred to the MRBicopter submodule; the rotational speeds of the left and right rotors are:
+
+$$
+{\varpi }_{L} = \sqrt{\frac{{T}_{1}}{2{K}_{T}} + {T}_{2}} \tag{17}
+$$
+
+$$
+{\varpi }_{R} = \sqrt{\frac{{T}_{1}}{2{K}_{T}} - {T}_{2}}
+$$
+
+The tilt angle of the left and right rotors can be expressed as:
+
+$$
+{\alpha }_{L} = {C}_{1}{T}_{3} + {C}_{2}{T}_{4} \tag{18}
+$$
+
+$$
+{\alpha }_{R} = {C}_{1}{T}_{3} - {C}_{2}{T}_{4}
+$$
+
+where ${C}_{1}$ and ${C}_{2}$ are constants.
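+The control distribution of Eqs. (17)-(18) amounts to a small function mapping the four control quantities to the two rotor speeds and two tilt angles. A minimal sketch; the values of ${K}_{T}$ , ${C}_{1}$ , ${C}_{2}$ are placeholders, not the paper's:
+
+```python
+import math
+
+K_T = 1.5e-5       # rotor thrust constant (placeholder value)
+C1, C2 = 0.8, 0.6  # tilt-distribution constants (placeholder values)
+
+def submodule_allocation(T1, T2, T3, T4):
+    """Map control quantities T1..T4 to (left/right rotor speed,
+    left/right tilt angle) per Eqs. (17)-(18)."""
+    base = T1 / (2.0 * K_T)
+    w_left = math.sqrt(base + T2)   # Eq. (17), left rotor
+    w_right = math.sqrt(base - T2)  # Eq. (17), right rotor
+    a_left = C1 * T3 + C2 * T4      # Eq. (18), left tilt angle
+    a_right = C1 * T3 - C2 * T4     # Eq. (18), right tilt angle
+    return (w_left, w_right), (a_left, a_right)
+
+# Pure-thrust command: equal speeds, zero tilt.
+(wL, wR), (aL, aR) = submodule_allocation(2 * K_T * 100.0, 0.0, 0.0, 0.0)
+```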
+
+## 2) Assembly control distribution
+
+Taking the MRBicopter assembly mass center ${B}_{z}$ as the center, the axes ${X}_{W},{Y}_{W}$ divide the rotors into four groups (Fig.6): top-left rotors ${P}_{k}\left( {k = 1,2,\cdots , n}\right)$ ; lower-left rotors ${P}_{k}\left( {k = n + 1,\cdots ,{2n}}\right)$ ; top-right rotors ${P}_{k}\left( {k = {2n} + 1,\cdots ,{3n}}\right)$ ; and lower-right rotors ${P}_{k}\left( {k = {3n} + 1,\cdots ,{4n}}\right)$ .
+
+Reference [20] proposes a mechanism connecting two twin-rotor modules, each combining two of the four propellers into a group, which is similar to the MRBicopter assembly structure. The control distribution can therefore be extended here. The rotor speed distribution of the four groups can be written as:
+
+$$
+\begin{array}{l} {\varpi }_{i}^{1} = \sqrt{\frac{{F}_{z}}{{4n}{K}_{T}} + {T}_{3} + {T}_{2}}\;\left( {i = 1,\cdots , n}\right) \\ {\varpi }_{i}^{2} = \sqrt{\frac{{F}_{z}}{{4n}{K}_{T}} - {T}_{3} + {T}_{2}}\;\left( {i = n + 1,\cdots ,{2n}}\right) \\ {\varpi }_{i}^{3} = \sqrt{\frac{{F}_{z}}{{4n}{K}_{T}} + {T}_{3} - {T}_{2}}\;\left( {i = {2n} + 1,\cdots ,{3n}}\right) \\ {\varpi }_{i}^{4} = \sqrt{\frac{{F}_{z}}{{4n}{K}_{T}} - {T}_{3} - {T}_{2}}\;\left( {i = {3n} + 1,\cdots ,{4n}}\right) \end{array} \tag{19}
+$$
+
+
+
+Fig.6: Mechanism model of the MRBicopter.
+
+The MRBicopter assembly uses the ${X}_{W}$ axis to divide the rotors into left and right sides, whose tilt angles use different control distributions:
+
+$$
+{\alpha }_{i}^{1} = {\alpha }_{\text{offset }} + {C}_{1}{T}_{4} + {C}_{2}\frac{{F}_{Y}}{4n}\left( {i = 1,2,\cdots ,{2n}}\right) \tag{20}
+$$
+
+$$
+{\alpha }_{i}^{2} = {\alpha }_{\text{offset }} - {C}_{1}{T}_{4} + {C}_{2}\frac{{F}_{Y}}{4n}\left( {i = {2n} + 1,\cdots ,{4n}}\right)
+$$
+
+where ${C}_{1}$ and ${C}_{2}$ are constants.
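+Similarly, the group-wise distribution of Eqs. (19)-(20) can be sketched as follows; the constants and the hover example are placeholders:
+
+```python
+import math
+
+def assembly_allocation(Fz, FY, T2, T3, T4, alpha_offset, n,
+                        K_T=1.5e-5, C1=0.8, C2=0.6):
+    """Rotor-speed and tilt-angle distribution for the 4n-rotor assembly,
+    Eqs. (19)-(20). K_T, C1, C2 are placeholder constants."""
+    base = Fz / (4 * n * K_T)
+    speeds = (
+        [math.sqrt(base + T3 + T2)] * n +  # top-left group
+        [math.sqrt(base - T3 + T2)] * n +  # lower-left group
+        [math.sqrt(base + T3 - T2)] * n +  # top-right group
+        [math.sqrt(base - T3 - T2)] * n    # lower-right group
+    )
+    tilt_left = alpha_offset + C1 * T4 + C2 * FY / (4 * n)   # Eq. (20)
+    tilt_right = alpha_offset - C1 * T4 + C2 * FY / (4 * n)  # Eq. (20)
+    return speeds, (tilt_left, tilt_right)
+```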
+
+## V. SIMULATION AND EXPERIMENT
+
+Section V presents the simulations and ground tests of the MRBicopter submodule and assembly. To make the simulation more realistic, we introduce an ambient wind interference model, which allows us to verify the robustness of the MRBicopter controller against ambient wind interference.
+
+## A. Environmental wind model
+
+To model the atmospheric wind field as faithfully as possible, we divide the environmental wind into four parts: constant wind, gust, gradient wind, and random wind.
+
+Constant wind: the constant wind has a fixed value $\delta$ and its speed does not change. Its mathematical model is:
+
+$$
+{V}_{f1} = \delta \tag{21}
+$$
+
+Gust: a gust is a periodic change of wind speed in atmospheric motion, characterized by a sudden increase of wind speed at some moment followed by self-weakening after a period of time. Its mathematical model is a piecewise function:
+
+$$
+{V}_{f2} = \left\{ \begin{matrix} 0 & \left( {x < 0}\right) \\ \frac{{V}_{m}}{2}\left( {1 - \cos \left( \frac{\pi x}{{d}_{m}}\right) }\right) & \left( {0 \leq x \leq {d}_{m}}\right) \\ {V}_{m} & \left( {x > {d}_{m}}\right) \end{matrix}\right. \tag{22}
+$$
+
+where ${V}_{m}$ is the gust amplitude, ${d}_{m}$ the gust length, and $x$ the gust travel distance.
+
+Gradient wind: gradient wind is ambient wind whose speed increases from zero to a certain value over time. Its mathematical model is:
+
+$$
+{V}_{f3} = \frac{t - {t}_{1}}{{t}_{2} - {t}_{1}}{V}_{f - \max } \tag{23}
+$$
+
+where ${V}_{f - \max }$ is the peak of the gradient wind speed, and ${t}_{1},{t}_{2}$ are the start and end times of the gradient wind, respectively.
+
+Random wind: random wind is the air disturbance generated by random changes in the atmosphere. Here we use a random number generator to build its mathematical model:
+
+$$
+{V}_{f4} = {V}_{{f4}\_ \max }\pi \left( {-{10} \sim {10}}\right) \cos \left( {{\alpha t} + \beta }\right) \tag{24}
+$$
+
+where ${V}_{{f4}\_ \max }$ is the theoretical peak of the random wind; $\pi \left( {-{10} \sim {10}}\right)$ is a random number in the range $-{10} \sim {10}$ produced by the random number generator; $\alpha$ is the average frequency of the random wind-speed fluctuation, ranging over ${0.5} \sim 2\;\mathrm{rad}/\mathrm{s}$ ; and $\beta$ is the phase offset of the random wind speed, ranging over ${0.1\pi } \sim {2\pi }$ .
+
+The total wind speed of the ambient wind field ${V}_{F}$ is then:
+
+$$
+{V}_{F} = {V}_{f1} + {V}_{f2} + {V}_{f3} + {V}_{f4} \tag{25}
+$$
+
+To simplify the calculation, the wind direction is taken opposite to the MRBicopter's flight direction, so the air resistance generated by the ambient wind field interference is:
+
+$$
+{F}_{w} = \frac{1}{2}{C\rho S}{\left( {V}_{F} + {v}_{UAV}\right) }^{2} \tag{26}
+$$
+
+where $C$ is the air drag coefficient, taken as ${0.31}$ ; $\rho$ is the air density, ${1.29}\;\mathrm{kg}/{\mathrm{m}}^{3}$ ; $S$ is the windward area of the MRBicopter, ${31}\;{\mathrm{cm}}^{2}$ ; and ${v}_{UAV}$ is the flight speed.
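+The wind field of Eqs. (21)-(25) and the drag force of Eq. (26) compose directly. In the sketch below all parameter values are illustrative; the time-gating of the gradient wind between $t_1$ and $t_2$ is an assumption, and the factor $\pi(-10 \sim 10)$ of Eq. (24) is modeled as a uniform random number in $[-10, 10]$:
+
+```python
+import math
+import random
+
+def wind_speed(t, x, delta=3.0, Vm=2.0, dm=5.0, t1=1.0, t2=4.0,
+               Vf_max=1.5, Vf4_max=0.5, a=1.0, b=0.3):
+    """Total ambient wind speed V_F from Eqs. (21)-(25).
+    All parameter values here are illustrative, not from the paper."""
+    vf1 = delta                                     # constant wind, Eq. (21)
+    if x < 0:                                       # gust, Eq. (22)
+        vf2 = 0.0
+    elif x <= dm:
+        vf2 = 0.5 * Vm * (1 - math.cos(math.pi * x / dm))
+    else:
+        vf2 = Vm
+    # Gradient wind, Eq. (23); restricting it to [t1, t2] is an assumption.
+    vf3 = (t - t1) / (t2 - t1) * Vf_max if t1 <= t <= t2 else 0.0
+    r = random.uniform(-10, 10)                     # random factor of Eq. (24)
+    vf4 = Vf4_max * r * math.cos(a * t + b)
+    return vf1 + vf2 + vf3 + vf4                    # Eq. (25)
+
+def drag_force(V_F, v_uav, C=0.31, rho=1.29, S=31e-4):
+    """Air resistance from Eq. (26); S converted from cm^2 to m^2."""
+    return 0.5 * C * rho * S * (V_F + v_uav) ** 2
+```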
+
+
+
+Fig.7: Simulation of MRBicopter hover under ambient wind interference, (a) MRBicopter single module; (b) MRBicopter assembly.
+
+## B. Simulation
+
+Fig. 7 shows the simulated three-axis attitude angles of the two MRBicopter structures in the hovering state under ambient wind interference. The blue line is the roll-angle tracking curve, the red line the pitch-angle tracking curve, and the green line the yaw-angle tracking curve.
+
+## 1) Submodule
+
+This experiment simulates hovering of the MRBicopter submodule under ambient wind interference, with the average ambient wind speed set to ${10.5}\;\mathrm{m}/\mathrm{s}$ . The results are shown in Fig.7(a): the hover attitude-angle oscillation of a single module does not exceed 0.05 rad, which meets the design requirements.
+
+## 2) Assembly
+
+This experiment simulates hovering of the MRBicopter assembly under ambient wind interference. The results are shown in Fig.7(b): an instantaneous oscillation exceeding 0.4 rad occurs in the pitch and roll angles of the assembly at ${0.3}\;\mathrm{s}$ ; the adjustment is completed within ${0.2}\;\mathrm{s}$ , and the subsequent oscillation amplitude does not exceed 0.1 rad. This shows that the assembly controller strongly suppresses environmental wind interference.
+
+## C. Ground experiment
+
+To ensure the safety of the tests, the experiments were carried out on an indoor aircraft test platform, with a 1/6HP650 pneumatic industrial fan as the ambient wind source. The MRBicopter flight control module uses an STM32F427VIT6 as the main processor; the power supply is a LiPo battery (4S1P: 14.8 V, 3000 mAh); the docking module receives control signals over ZigBee serial communication and converts them into PWM signals that switch the relays on and off. A 2.4 GHz 14-channel communication module is used for signal transmission and reception. The experimental results are shown in Fig.8.
+
+
+
+Fig.8: MRBicopter ground experiment under ambient wind interference.
+
+## 1) Submodule experiment
+
+Two MRBicopter submodules were built, and one was selected for the experiment. The results are shown in Fig.8(a): under wind interference, the average oscillation amplitude of the submodule's pitch and roll angles is $\pm {4.98}^{ \circ }$ and that of the yaw angle is $\pm {7.91}^{ \circ }$ , which meets the stability requirements.
+
+## 2) Assembly experiment
+
+The MRBicopter assembly is composed of two submodules. The results are shown in Fig.8(b): under wind interference, the average oscillation amplitude of the assembly's pitch and roll angles is $\pm {5.12}^{ \circ }$ and that of the yaw angle is $\pm {7.33}^{ \circ }$ , which meets the stability requirements.
+
+## VI. CONCLUSION
+
+In this paper, a modular and reconfigurable multi-UAV platform, MRBicopter, is proposed. Its transverse twin-rotor submodules can realize structural reconstruction through the electromagnetic docking mechanism and achieve different flight states by changing motor speed and tilt angle to meet the needs of different tasks. To further improve the controllability of the MRBicopter and expand its application fields, future improvements will be made in the following aspects:
+
+1) A fuzzy PID control algorithm will be introduced to further improve the interference compensation capability of the MRBicopter and the flight stability of the assembly.
+
+2) Structurally, additional units will be mounted on the UAV, such as a lidar and an onboard computer (NUC), to expand the MRBicopter's application scenarios.
+
+## REFERENCES
+
+[1] B. Mu and P. Chirarattananon, "Universal flying objects: Modular multirotor system for flight of rigid objects," IEEE Transactions on Robotics, 2019.
+
+[2] D. Saldana, B. Gabrich, G. Li, M. Yim, and V. Kumar, "Modquad: The flying modular structure that self-assembles in midair," in 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018, pp. 691-698.
+
+[3] T. Anzai, M. Zhao, M. Murooka, F. Shi, K. Okada, and M. Inaba, "Design, modeling and control of fully actuated 2d transformable aerial robot with 1 dof thrust vectorable link module," in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019, pp. 2820-2826.
+
+[4] S. K. H. Win, L. S. T. Win, D. Sufiyan, G. S. Soh, and S. Foong, "Dynamics and control of a collaborative and separating descent of samara autorotating wings," IEEE Robotics and Automation Letters, vol. 4, no. 3, pp. 3067-3074, 2019.
+
+[5] H. Jia et al, "A Quadrotor With a Passively Reconfigurable Airframe for Hybrid Terrestrial Locomotion," in IEEE/ASME Transactions on Mechatronics, vol. 27, no. 6, pp. 4741-4751, Dec. 2022, doi: 10.1109/TMECH.2022.3164929.
+
+[6] D. Saldaña, B. Gabrich, G. Li, M. Yim and V. Kumar, "ModQuad: The Flying Modular Structure that Self-Assembles in Midair," 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 2018, pp. 691-698, doi: 10.1109/ICRA.2018.8461014.
+
+[7] M. Zhao, T. Anzai, F. Shi, X. Chen, K. Okada, and M. Inaba, "Design, modeling, and control of an aerial robot DRAGON: A dual-rotor-embedded multilink robot with the ability of multi-degree-of-freedom aerial transformation," IEEE Robotics and Automation Letters, vol. 3, no. 2, pp. 1176-1183, 2018.
+
+[8] D. Saldaña, P. M. Gupta and V. Kumar, "Design and Control of Aerial Modules for Inflight Self-Disassembly," in IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 3410-3417, Oct. 2019, doi: 10.1109/LRA.2019.2926680.
+
+[9] H. Yang, S. Park, J. Lee, J. Ahn, D. Son, and D. Lee, "Lasdra: Largesize aerial skeleton system with distributed rotor actuation," in 2018 IEEE International Conference on Robotics and Automation (ICRA).IEEE, 2018, pp. 7017-7023.
+
+[10] M. Ryll, D. Bicego, and A. Franchi, "Modeling and control of fast-hex: A fully-actuated by synchronized-tilting hexarotor," in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016, pp. 1689-1694.
+
+[11] A. Oosedo, S. Abiko, S. Narasaki, A. Kuno, A. Konno, and M. Uchiyama, "Large attitude change flight of a quad tilt rotor unmanned aerial vehicle," Advanced Robotics, vol. 30, no. 5, pp. 326-337, 2016.
+
+[12] K. Kawasaki, Y. Motegi, M. Zhao, K. Okada, and M. Inaba, "Dual connected bi-copter with new wall trace locomotion feasibility that can fly at arbitrary tilt angle," in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2015, pp. 524-531.
+
+[13] S. Bai and P. Chirarattananon, "SplitFlyer Air: A modular quadcopter that disassembles into two bicopters mid-air," IEEE/ASME Transactions on Mechatronics, 2022.
+
+[14] Z. Wu et al, "Design, Modeling and Control of a Composable and Extensible Drone with Tilting Rotors," 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 2022, pp. 12682-12689, doi: 10.1109/IROS47612.2022.9982090.
+
+[15] S. Li, F. Liu, Y. Gao, J. Xiang, Z. Tu and D. Li, "AirTwins: Modular Bi-Copters Capable of Splitting From Their Combined Quadcopter in Midair," in IEEE Robotics and Automation Letters, vol. 8, no. 9, pp. 6068- 6075, Sept. 2023, doi: 10.1109/LRA.2023.3301776.
+
+[16] M. Zhao, K. Kawasaki, X. Chen, S. Noda, K. Okada, and M. Inaba, "Whole-body aerial manipulation by transformable multirotor with twodimensional multilinks," in Proc. IEEE Int. Conf. Robot. Automat., 2017, pp. 5175-5182.
+
+[17] B. Gabrich, D. Saldaña, V. Kumar, and M. Yim, "A flying gripper based on cuboid modular robots," in Proc. IEEE Int. Conf. Robot. Automat., 2018, pp. 7024-7030.
+
+[18] J. Zhang et al., "Design and Control of Rapid In-Air Reconfiguration for Modular Quadrotors With Full Controllable Degrees of Freedom," in IEEE Robotics and Automation Letters, vol. 9, no. 8, pp. 6920-6927, Aug. 2024, doi: 10.1109/LRA.2024.3416797.
+
+[19] Ö. B. Albayrak, Y. Ersan, A. S. Bağbaşı, A. Turgut Başaranoğlu and K. B. Arikan, "Design of a Robotic Bicopter," 2019 7th International Conference on Control, Mechatronics and Automation (ICCMA), Delft, Netherlands,2019, pp. 98-103, doi: 10.1109/ICCMA46720.2019.8988694.
+
+[20] K. Kawasaki, Y. Motegi, M. Zhao, K. Okada and M. Inaba, "Dual connected Bi-Copter with new wall trace locomotion feasibility that can fly at arbitrary tilt angle," 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 2015, pp. 524-531, doi: 10.1109/IROS.2015.7353422.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/C84NGKXzwB/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/C84NGKXzwB/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..fcabc081f029a6dffdd57b42d2ddbb61eca82358
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/C84NGKXzwB/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,419 @@
+§ MRBICOPTER: MODULAR RECONFIGURABLE TRANSVERSE TILT-ROTOR BICOPTER SYSTEM
+
+${1}^{\text{ st }}$ Qianyao Pan
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu, China
+
+panqianyaoupc@163.com
+
+${2}^{\text{ nd }}$ Xin Lu
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu, China
+
+luxin_uestc@163.com
+
+${3}^{\text{ rd }}$ Weijun Yuan
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu, China
+
+ywj861087955@163.com
+
+${4}^{\text{ th }}$ Fusheng Li*
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu, China
+
+lifusheng@uestc.edu.cn
+
+Abstract—This paper introduces a modular UAV, MRBicopter, capable of structural combination and reconfiguration. Each module contains a rotor-tilting structure and an active docking mechanism. By separating and combining submodules, the UAV can match the requirements of different flight tasks in real time. First, we design the mechanical actuators that allow physically connected assemblies to fly collaboratively. Second, for the different reconfigured structures, we propose two generalized control strategies that realize independent attitude control through the reassignment of rotor speeds and tilt angles. The feasibility of the mechanical design and control method is verified by simulation and ground experiments under ambient wind interference.
+
+Keywords—Reconfigurable and modular robots, bicopter, active docking mechanism, rotor tilting, wind interference, simulation.
+
+§ I. INTRODUCTION
+
+In recent years, multi-rotor UAVs have received much attention for their simplicity, agility and versatility. Research on multi-rotor UAVs has extended to aerial maneuvering, collective behavior, multi-modal motion, and modular reconfigurable robots [1]-[5]. Among these, the advantages of modular and reconfigurable UAVs are increasingly evident. For example, in disaster relief, modular reconfigurable robots can adapt to different task scenarios through structural reconstruction, such as cooperatively transporting large items [16] and performing search-and-rescue tracking in complex environments [17].
+
+Much work aims to improve the stability and safety of modular reconfigurable UAVs. Reference [6] designed an aerially self-assembling flying robot, ModQuad, composed of flexible flight modules that can easily move in a three-dimensional environment. For in-air real-time separation, Reference [7] proposed a new deformable multi-link aerial robot consisting of link modules with 1-DOF thrust-vectoring mechanisms, together with a transformation planning method that minimizes the required force/moment by accounting for the 1-DOF thrust vector angle. Reference [8] proposed a magnet-based connection mechanism that uses a lightweight passive mechanism to dock and release in mid-air. For modular UAV applications, Reference [9] proposed a self-assembling robot based on autonomous modules that can fly together and assemble into rectangular structures in the air. Reference [10] proposed a full-attitude geometric control algorithm for a synchronously tilting hexarotor that realizes flight at arbitrary angles at the cost of efficiency. Reference [11] designed a tilt-rotor UAV whose tilting mechanism restrains power dissipation and offers a wider inclination range. Reference [12] designed a structure connecting two helicopter modules that can fly along a wall at any angle. Reference [13] proposed splitting a quadcopter into two twin-rotor UAVs in mid-air in real time and developed the modular quadcopter SplitFlyer. Reference [14] developed a combinable and extensible tilt-rotor UAV (CEDTR) that matches different task scenarios by changing the combination and number of submodules. Reference [15] developed an aerially detachable quadrotor suitable for narrow gaps, improving the environmental adaptability of reconfigurable UAVs.
+
+In this paper, we design a transverse twin-rotor tilting bicopter that can be combined and reconfigured, called the modular reconfigurable bicopter (MRBicopter), which can not only fly cooperatively in the single-module state but also achieve multi-module combined flight control. The main contributions of this paper are threefold:
+
+1) A modular reconfigurable bicopter with a rotor vector tilting structure and an active docking mechanism is designed and modeled, which can realize structural reconfiguration to adapt to different task requirements.
+
+2) The UAV dynamics model is built, and the control distribution and controller design are completed, realizing the control of a single module and full-degree-of-freedom control of the assembly.
+
+3) An environmental wind interference module is introduced in the simulation, bringing the simulation results closer to reality.
+
+The structure of this paper is as follows: Section II introduces the structure of MRBicopter. Section III describes its modeling. Section IV presents the control distribution and controller design. Section V demonstrates the simulation and test results. Conclusions are presented in Section VI.
+
+*Corresponding author.
+
+
+Fig.1: MRBicopter mechanical structure. (a) rotor vector tilting structure, (b) electromagnet combination docking mechanism, (c) submodule structure.
+
+§ II. DESIGN
+
+§ A. ROTOR VECTOR TILTING STRUCTURE
+
+The rotor propeller axis of a traditional UAV is fixed, so the direction of the lift force cannot be changed. Here, we adopt a rotor vector tilting structure (Fig.1(a)). Each rotor can tilt around its arm shaft and is fitted with its own servo to control the tilt angle. This structure increases the number of control inputs of the UAV assembly and enables full-degree-of-freedom control of the MRBicopter assembly.
+
+§ B. ELECTROMAGNET COMBINATION DOCKING MECHANISM
+
+Traditional reconfigurable UAVs use permanent magnets (NdFeB) for the docking device between modules. This scheme responds slowly during separation and easily causes instability. Therefore, we designed a multi-locking electromagnet combination docking mechanism (Fig.1(b)). It uses a circular electromagnet as the main actuator and switches the electromagnet on and off through a program-controlled relay. It contains three locking nodes in total, each providing $5\,\mathrm{kg}$ of locking suction.
+
+§ C. SUBMODULE STRUCTURE
+
+An MRBicopter assembly consists of two cross-mounted bicopter single modules (Fig.1(c)). The single modules can not only fly cooperatively and autonomously but also complete assembly reconstruction through magnetic attraction.
+
+§ III. DYNAMICS
+
+§ A. ESTABLISHMENT OF THE FRAME
+
+In this section, four different frames are introduced to define the flight attitude of MRBicopter (Fig.2). The frame system is as follows.
+
+1) World frame ${W}_{E}$ . The world frame is a fixed coordinate system.
+
+2) Assembly frame ${B}_{z}$ . The origin of ${B}_{z}$ is located at the center of mass of the assembly. Its position relative to the world frame is expressed as ${P}_{W} = {\left\lbrack \begin{array}{lll} {x}_{W} & {y}_{W} & {z}_{W} \end{array}\right\rbrack }^{T}$ ; its velocity as ${V}_{W} = {\left\lbrack \begin{array}{lll} {V}_{WX} & {V}_{WY} & {V}_{WZ} \end{array}\right\rbrack }^{T}$ ; the angular velocity of the assembly as $\Omega = {\left\lbrack \begin{array}{lll} {\omega }_{x} & {\omega }_{y} & {\omega }_{z} \end{array}\right\rbrack }^{T}$ ; and the attitude angle as $\Theta = {\left\lbrack \begin{array}{lll} \phi & \theta & \psi \end{array}\right\rbrack }^{T}$ , where $\phi$ is the roll angle, $\theta$ the pitch angle, and $\psi$ the yaw angle.
+
+
+Fig.2: MRBicopter frame system settings.
+
+3) Submodule frame ${B}_{i}$ . The origin of the submodule frame is located at the centroid of the submodule, and its axes are defined as ${\left\lbrack \begin{array}{lll} {X}_{bi} & {Y}_{bi} & {Z}_{bi} \end{array}\right\rbrack }$ . The Euler angles in the submodule frame ${B}_{i}$ are expressed as ${\Theta }_{i} = {\left\lbrack \begin{array}{lll} {\phi }_{i} & {\theta }_{i} & {\psi }_{i} \end{array}\right\rbrack }^{T}$ .
+
+4) Rotor frame ${P}_{ij}$ . The origin of the rotor frame is located at the position of the rotor motor centroid, the $\mathrm{z}$ axis points to the rotor lift direction, and the $\mathrm{x}$ axis points to the body centroid. The tilt angle of the rotor is set as ${\alpha }_{ij}$ .
+
+§ B. DERIVATION OF DYNAMICS AND KINEMATIC MODEL
+
+In this section, we derive the attitude dynamics and kinematics equations of MRBicopter, which are used in the control allocation and controller design of Section IV. The $i$ -th submodule in the assembly has two rotors distributed on one axis. The rotor speed is expressed as ${\varpi }_{ij}$ . The lift force and rotation torque generated by the $j$ -th rotor in the module can therefore be written as:
+
+$$
+{f}_{ij} = {K}_{T}{{\varpi }_{ij}}^{2} \tag{1}
+$$
+
+$$
+{\tau }_{ij} = {K}_{Q}{\varpi }_{ij}^{2} \tag{2}
+$$
+
+where ${K}_{T}$ and ${K}_{Q}$ are the rotor thrust and torque constants, respectively.
+
+In the Assembly frame ${B}_{z}$ , MRBicopter’s lift force ${F}_{B}$ is as follows.
+
+$$
+{F}_{ij}^{B} = {f}_{ij}{}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) E
+$$
+
+$$
+{F}_{B} = \mathop{\sum }\limits_{{ij}}{F}_{ij}^{B} \tag{3}
+$$
+
+where $E = {\left\lbrack \begin{array}{lll} 0 & 0 & 1 \end{array}\right\rbrack }^{T}$ is the unit vector along the rotor axis, and ${}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) \in {SO}\left( 3\right)$ represents the rotation matrix from the rotor frame ${P}_{ij}$ to the assembly frame ${B}_{z}$ , which satisfies:
+
+$$
+{}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) = {}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {B}_{i}\right\rbrack }{}^{\left\{ {B}_{i}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) \tag{4}
+$$
+
+where ${}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {B}_{i}\right\rbrack } \in {SO}\left( 3\right)$ represents the rotation matrix from the submodule frame ${B}_{i}$ to the assembly frame ${B}_{z}$ , and ${}^{\left\{ {B}_{i}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) \in {SO}\left( 3\right)$ represents the rotation matrix from the rotor frame ${P}_{ij}$ to the submodule frame ${B}_{i}$ , which satisfies:
+
+$$
+\left\{ \begin{array}{l} {}^{\left\{ {B}_{i}\right\} }{R}_{\left\lbrack {P}_{i1}\right\rbrack }\left( {\alpha }_{i1}\right) = R\left( {{\sigma }_{1},{\alpha }_{i1}}\right) \\ {}^{\left\{ {B}_{i}\right\} }{R}_{\left\lbrack {P}_{i2}\right\rbrack }\left( {\alpha }_{i2}\right) = R\left( {{\sigma }_{2},{\alpha }_{i2}}\right) \end{array}\right. \tag{5}
+$$
+
+$$
+R\left( {\sigma ,\alpha }\right) = \left\lbrack \begin{matrix} \cos \left( \sigma \right) & - \sin \left( \sigma \right) \cos \left( \alpha \right) & \sin \left( \alpha \right) \sin \left( \sigma \right) \\ \sin \left( \sigma \right) & \cos \left( \sigma \right) \cos \left( \alpha \right) & - \sin \left( \alpha \right) \cos \left( \sigma \right) \\ 0 & \sin \left( \alpha \right) & \cos \left( \alpha \right) \end{matrix}\right\rbrack \tag{6}
+$$
+
+Where $\sigma$ is the angle between the arm axis and the X-axis. According to the structure of the transverse twin-rotor UAV, it can be seen that ${\sigma }_{1} = - \pi /2,{\sigma }_{2} = \pi /2$ .
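The transform of Eq. (6) factors as $R_z(\sigma)R_x(\alpha)$: tilt by $\alpha$ about the arm axis, then place the arm at angle $\sigma$ from the body x-axis. As a concrete check, here is a minimal Python sketch (the helper names are ours, not from the paper) that builds this matrix and projects a rotor thrust into the body frame as in Eq. (3):

```python
import math

def rot(sigma, alpha):
    """R(sigma, alpha) of Eq. (6): rotor frame -> submodule frame.
    Equal to Rz(sigma) @ Rx(alpha), so it always lies in SO(3)."""
    cs, ss = math.cos(sigma), math.sin(sigma)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[cs, -ss * ca,  ss * sa],
            [ss,  cs * ca, -cs * sa],
            [0.0,      sa,       ca]]

def thrust_in_body(f, sigma, alpha):
    """Thrust vector f * R(sigma, alpha) * E with E = [0 0 1]^T, cf. Eq. (3):
    the scalar lift f acts along the z-axis of the rotor's own frame."""
    R = rot(sigma, alpha)
    return [f * R[0][2], f * R[1][2], f * R[2][2]]
```

With $\sigma_1 = -\pi/2$, $\sigma_2 = \pi/2$ and zero tilt, both thrust vectors point along the body z-axis; tilting trades vertical lift for a lateral force component.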
+
+In the assembly frame ${B}_{z}$ , the rotor torque ${\tau }_{a}$ of MRBicopter is shown as follows.
+
+$$
+{\tau }_{a} = \mathop{\sum }\limits_{{ij}}{}^{\left\{ {B}_{z}\right\} }{p}_{\left\lbrack {P}_{ij}\right\rbrack } \times {F}_{ij}^{B} \tag{7}
+$$
+
+Due to the action of air resistance, the yaw moment $Q$ generated by the rotor propeller is shown as follows.
+
+$$
+{Q}_{ij} = {\left( -1\right) }^{j - 1}{C}_{t}{\varpi }_{ij}^{2}E
+$$
+
+$$
+Q = \mathop{\sum }\limits_{{ij}}{}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) {Q}_{ij} \tag{8}
+$$
+
+Finally, the MRBicopter’s body torque $\tau$ can be written as:
+
+$$
+\tau = {\tau }_{a} + Q \tag{9}
+$$
+
+The dynamics equation of MRBicopter is established by using Newton-Euler equation.
+
+$$
+\tau = {J}_{S}\dot{\Omega } + \Omega \times {J}_{S}\Omega
+$$
+
+$$
+\mathop{\sum }\limits_{i}{m}_{i}^{\left\{ {W}_{E}\right\} }{R}_{\left\lbrack {B}_{z}\right\rbrack }{\dot{V}}_{W} = {}^{\left\{ {W}_{E}\right\} }{R}_{\left\lbrack {B}_{z}\right\rbrack }{F}_{B} - \mathop{\sum }\limits_{i}{m}_{i}{gE} \tag{10}
+$$
+
+where ${m}_{i}$ is the mass of a submodule and ${J}_{S}$ is the total inertia matrix of the assembly. At the same time, a kinematic
+
+
+Fig.3: MRBicopter submodule(mode 1) and assembly(mode 2).
+
+model is established on this basis, in which the position kinematic equation is expressed as:
+
+$$
+{\dot{P}}_{W} = {V}_{W} \tag{11}
+$$
+
+The attitude kinematics equation is expressed as:
+
+$$
+\dot{\Theta } = {W}_{R} \cdot \Omega \tag{12}
+$$
+
+§ IV. CONTROL
+
+Section IV introduces the controller design of the MRBicopter single module and assembly (Fig.3), the control distribution of the two flight modes, and the feedforward angle design of the assembly [18].
+
+§ A. CONTROLLER DESIGN
+
+Fig. 4 shows the structural block diagram of the MRBicopter controller. The controller architecture is based on a cascade double closed-loop PID control law, with the position controller as the outer loop and the attitude controller as the inner loop. As shown in Fig.3(a), the MRBicopter submodule (mode 1) is an underactuated system in flight, so we adopt a controller architecture similar to that of the traditional bicopter [19]. The MRBicopter assembly (mode 2) is an overactuated system that can hover at any pitch angle (Fig.3(b)).
+
+The flight controller is divided into four channels that output four control quantities ${T}_{1},{T}_{2},{T}_{3},{T}_{4}$ , which control the linear displacement and angular motion of the UAV dynamics model and decouple them from each other. The controller takes the expected position ${P}_{des} = {\left\lbrack \begin{array}{lll} X & Y & Z \end{array}\right\rbrack }^{T}$ and the expected yaw angle $\psi$ as the target control inputs. ${K}_{P}^{P},{K}_{I}^{P},{K}_{D}^{P}$ are the proportional, integral and derivative coefficients of the position loop, respectively. The position controller satisfies:
+
+$$
+\ddot{X} = {K}_{P}^{P}\left( {P - {P}_{des}}\right) + {K}_{I}^{P}{\int }_{0}^{t}\left( {P - {P}_{des}}\right) {dt} + {K}_{D}^{P}\frac{d\left( {P - {P}_{des}}\right) }{dt} \tag{13}
+$$
+
+The attitude controller takes the expected attitude angle ${\Theta }_{des} = {\left\lbrack \begin{array}{lll} \phi & \theta & \psi \end{array}\right\rbrack }^{T}$ as input and the control quantity $T = {\left\lbrack \begin{array}{lll} {T}_{2} & {T}_{3} & {T}_{4} \end{array}\right\rbrack }^{T}$ as output. ${K}_{P}^{\Theta },{K}_{I}^{\Theta },{K}_{D}^{\Theta }$ are the proportional, integral and derivative coefficients of the attitude loop, respectively, which satisfy:
+
+$$
+T = {K}_{P}^{\Theta }\left( {\Theta - {\Theta }_{des}}\right) + {K}_{I}^{\Theta }{\int }_{0}^{t}\left( {\Theta - {\Theta }_{des}}\right) {dt} + {K}_{D}^{\Theta }\frac{d\left( {\Theta - {\Theta }_{des}}\right) }{dt} \tag{14}
+$$
+
+
+Fig.4: MRBicopter structural block diagram of flight controller.
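The cascade of Eqs. (13)-(14) can be sketched as chained PID channels: the outer position loop produces an acceleration command that is converted into a desired attitude for the inner loop. The sketch below is illustrative, not the authors' implementation; the gains are arbitrary, and we use the conventional (desired - measured) error, whereas the paper writes the error as (measured - desired), which only flips the sign of the gains.

```python
class PID:
    """One PID channel; usable for both the position (outer) and
    attitude (inner) loops of Eqs. (13)-(14)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def step(self, desired, measured, dt):
        err = desired - measured
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Illustrative use: an altitude channel and a pitch-attitude channel.
pos_z = PID(kp=2.0, ki=0.1, kd=0.5)       # position loop, cf. Eq. (13)
att_pitch = PID(kp=4.0, ki=0.2, kd=0.8)   # attitude loop, cf. Eq. (14)

accel_cmd = pos_z.step(desired=1.0, measured=0.0, dt=0.01)
T3 = att_pitch.step(desired=0.1, measured=0.0, dt=0.01)
```

In the real cascade the outer-loop output is mapped to a desired attitude angle before being fed to the inner loop; that mapping depends on the flight mode and is omitted here.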
+
+§ B. TILT ANGLE FEEDFORWARD INITIAL VALUE CALCULATION
+
+The feedforward initial value calculation solves for the approximate rotor tilt angle when the MRBicopter assembly hovers at an arbitrary pitch angle, which effectively reduces the overshoot and response time of the position control. Here, it is assumed that all rotor propellers produce the same lift when the assembly hovers at any pitch angle; the hover angle is $\theta$ and the initial feedforward value of the tilt angle is ${\alpha }_{\text{ offset }}$ . As shown in Fig.5, the following force balance equations can be established:
+
+$$
+\mathop{\sum }\limits_{i}{m}_{i}g\cos \theta = \mathop{\sum }\limits_{{ij}}{F}_{ij}^{B}\cos \left( {\alpha }_{\text{ offset }}^{ij}\right)
+$$
+
+$$
+\mathop{\sum }\limits_{i}{m}_{i}g\sin \theta = \mathop{\sum }\limits_{{ij}}{F}_{ij}^{B}\sin \left( {\alpha }_{\text{ offset }}^{ij}\right) \tag{15}
+$$
+
+Since the resultant force in the $\mathrm{x}$ and $\mathrm{y}$ directions is zero, when the MRBicopter hovers, the initial feedforward value of the tilt angle can be obtained as:
+
+$$
+{\alpha }_{\text{ offset }}^{ij} = \theta \tag{16}
+$$
+
+§ C. CONTROL DISTRIBUTION
+
+The control distribution module assigns the rotor throttle speeds and rotor tilt angles in real time according
+
+
+Fig.5: MRBicopter hover force analysis diagram with pitch angle.
+
+to the mode and flight condition of the UAV, thereby controlling its attitude.
+
+§ 1) SUBMODULE CONTROL DISTRIBUTION
+
+The MRBicopter submodule can be regarded as a transverse twin-rotor bicopter, with the rotor tilt axes collinear and the rotors symmetric. Reference [20] proposed a control method for a transverse dual-rotor UAV, whose control distribution can be transferred to the MRBicopter submodule; the rotational speeds of the left and right rotors can be expressed as:
+
+$$
+{\varpi }_{L} = \sqrt{\frac{{T}_{1}}{2{K}_{T}} + {T}_{2}} \tag{17}
+$$
+
+$$
+{\varpi }_{R} = \sqrt{\frac{{T}_{1}}{2{K}_{T}} - {T}_{2}}
+$$
+
+The tilt angle of the left and right rotors can be expressed as:
+
+$$
+{\alpha }_{L} = {C}_{1}{T}_{3} + {C}_{2}{T}_{4} \tag{18}
+$$
+
+$$
+{\alpha }_{R} = {C}_{1}{T}_{3} - {C}_{2}{T}_{4}
+$$
+
+where ${C}_{1},{C}_{2}$ are constants.
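Eqs. (17)-(18) reduce to a small allocation function. A hedged Python sketch (the function name and the constants `K_T`, `C1`, `C2` are placeholders, not identified values from the paper):

```python
import math

def submodule_allocation(T1, T2, T3, T4, K_T, C1, C2):
    """Map the four control quantities to rotor speeds (Eq. (17))
    and tilt angles (Eq. (18)) for one MRBicopter submodule."""
    w_L = math.sqrt(T1 / (2 * K_T) + T2)  # left rotor speed
    w_R = math.sqrt(T1 / (2 * K_T) - T2)  # right rotor speed
    a_L = C1 * T3 + C2 * T4               # left tilt angle
    a_R = C1 * T3 - C2 * T4               # right tilt angle
    return (w_L, w_R), (a_L, a_R)
```

On our reading, $T_1$ sets the total thrust, the speed differential $T_2$ drives roll, the common tilt term $T_3$ drives pitch, and the differential tilt $T_4$ drives yaw.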
+
+§ 2) ASSEMBLY CONTROL DISTRIBUTION
+
+Taking the MRBicopter assembly mass center ${B}_{z}$ as the center, ${X}_{W},{Y}_{W}$ divide the rotors into four parts (Fig.6): top-left rotors ${P}_{k}\left( {k = 1,2,\cdots ,n}\right)$ ; bottom-left rotors ${P}_{k}\left( {k = n + 1,\cdots ,{2n}}\right)$ ; top-right rotors ${P}_{k}\left( {k = {2n} + 1,\cdots ,{3n}}\right)$ ; bottom-right rotors ${P}_{k}\left( {k = {3n} + 1,\cdots ,{4n}}\right)$ .
+
+Reference [20] proposes a mechanism connecting two twin-rotor modules, each combining two of the four propellers into a group, similar to the MRBicopter assembly structure. The control distribution mode can therefore be extended here. The rotor speed control distribution of the four parts can be written as follows:
+
+$$
+{\varpi }_{i}^{1} = \sqrt{\frac{{F}_{z}}{{4n}{K}_{T}} + {T}_{3} + {T}_{2}}\left( {i = 1,\cdots ,n}\right)
+$$
+
+$$
+{\varpi }_{i}^{2} = \sqrt{\frac{{F}_{z}}{{4n}{K}_{T}} - {T}_{3} + {T}_{2}}\left( {i = n + 1,\cdots ,{2n}}\right)
+$$
+
+(19)
+
+$$
+{\varpi }_{i}^{3} = \sqrt{\frac{{F}_{z}}{{4n}{K}_{T}} + {T}_{3} - {T}_{2}}\left( {i = {2n} + 1,\cdots ,{3n}}\right)
+$$
+
+$$
+{\varpi }_{i}^{4} = \sqrt{\frac{{F}_{z}}{{4n}{K}_{T}} - {T}_{3} - {T}_{2}}\left( {i = {3n} + 1,\cdots ,{4n}}\right)
+$$
+
+
+Fig.6: Mechanism model of MRBicopter.
+
+The MRBicopter assembly uses the ${X}_{W}$ axis to divide the left and right rotors, whose tilt angles use different control distributions:
+
+$$
+{\alpha }_{i}^{1} = {\alpha }_{\text{ offset }} + {C}_{1}{T}_{4} + {C}_{2}\frac{{F}_{Y}}{4n}\left( {i = 1,2,\cdots ,{2n}}\right) \tag{20}
+$$
+
+$$
+{\alpha }_{i}^{2} = {\alpha }_{\text{ offset }} - {C}_{1}{T}_{4} + {C}_{2}\frac{{F}_{Y}}{4n}\left( {i = {2n} + 1,\cdots ,{4n}}\right)
+$$
+
+where ${C}_{1},{C}_{2}$ are constants.
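Similarly, Eq. (19) assigns speeds per quadrant group and Eq. (20) assigns tilt angles per half, with the hover feedforward $\alpha_{\text{offset}} = \theta$ of Eq. (16) folded in. A sketch under the same placeholder-constant assumptions as above:

```python
import math

def assembly_allocation(F_z, F_Y, T2, T3, T4, n, K_T, C1, C2, alpha_offset):
    """Speeds for the four rotor groups (Eq. (19)) and tilt angles for
    the left/right halves (Eq. (20)) of an assembly with 4n rotors."""
    base = F_z / (4 * n * K_T)
    speeds = {
        "top_left":     math.sqrt(base + T3 + T2),
        "bottom_left":  math.sqrt(base - T3 + T2),
        "top_right":    math.sqrt(base + T3 - T2),
        "bottom_right": math.sqrt(base - T3 - T2),
    }
    tilt_left = alpha_offset + C1 * T4 + C2 * F_Y / (4 * n)
    tilt_right = alpha_offset - C1 * T4 + C2 * F_Y / (4 * n)
    return speeds, (tilt_left, tilt_right)
```

In steady hover (all control quantities zero) every rotor spins at $\sqrt{F_z/(4nK_T)}$ and every tilt equals the feedforward angle, consistent with Eq. (16).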
+
+§ V. SIMULATION & EXPERIMENT
+
+Section V introduces the simulations and ground tests of the MRBicopter submodule and assembly. To make the simulation more realistic, we introduce an ambient wind interference model, which allows the robustness of the MRBicopter controller against ambient wind interference to be verified.
+
+§ A. ENVIRONMENTAL WIND MODEL
+
+To model the atmospheric wind field as faithfully as possible, we divide the environmental wind into four parts: constant wind, gust, gradient wind and random wind.
+
+Constant wind: The constant wind has a fixed magnitude $\delta$ ; its speed does not change. Its mathematical model is expressed as follows:
+
+$$
+{V}_{f1} = \delta \tag{21}
+$$
+
+Gust wind: A gust is a change of wind speed in atmospheric motion characterized by a sudden increase at a certain moment that weakens by itself after a period of time. Its mathematical model can be expressed as a piecewise function:
+
+$$
+{V}_{f2} = \left\{ \begin{matrix} 0 & \left( {x < 0}\right) \\ \frac{{V}_{m}}{2}\left( {1 - \cos \left( \frac{\pi x}{{d}_{m}}\right) }\right) & \left( {0 \leq x \leq {d}_{m}}\right) \\ {V}_{m} & \left( {x > {d}_{m}}\right) \end{matrix}\right. \tag{22}
+$$
+
+where ${V}_{m}$ is the gust amplitude, ${d}_{m}$ is the gust length, and $x$ is the gust travel distance.
+
+Gradient wind: Gradient wind refers to the ambient wind whose wind speed increases from zero to a certain value over time. Its mathematical model expression is as follows:
+
+$$
+{V}_{f3} = \frac{t - {t}_{1}}{{t}_{2} - {t}_{1}}{V}_{f - \max } \tag{23}
+$$
+
+Where ${V}_{{f}_{-\max }}$ represents the peak of the gradual wind speed, ${t}_{1},{t}_{2}$ represent the beginning and end of the gradual wind, respectively.
+
+Random wind: Random wind refers to the air disturbance generated by random changes in the atmosphere. Here, we use random number generator to build a mathematical model of random wind:
+
+$$
+{V}_{f4} = {V}_{{f4}\_ \max }\, n \cos \left( {{\alpha t} + \beta }\right) \tag{24}
+$$
+
+where ${V}_{{f4}\_ \max }$ represents the theoretical peak of the random wind; $n$ is a number produced by a random number generator, ranging from $-{10}$ to ${10}$ ; $\alpha$ represents the average frequency of the random wind speed fluctuation, ranging from ${0.5}$ to $2\,\mathrm{rad}/\mathrm{s}$ ; and $\beta$ indicates the phase offset of the random wind speed, ranging from ${0.1\pi }$ to ${2\pi }\,\mathrm{rad}$ .
+
+The total wind speed ${V}_{F}$ of the ambient wind field is therefore:
+
+$$
+{V}_{F} = {V}_{f1} + {V}_{f2} + {V}_{f3} + {V}_{f4} \tag{25}
+$$
+
+To simplify the calculation, the wind direction is taken opposite to the MRBicopter's flight direction, so the air resistance generated by ambient wind field interference can be calculated as:
+
+$$
+{F}_{w} = \frac{1}{2}{C\rho S}{\left( {V}_{F} + {v}_{UAV}\right) }^{2} \tag{26}
+$$
+
+where $C$ represents the air resistance coefficient, with value ${0.31}$ ; $\rho$ indicates the air density, ${1.29}\mathrm{\;{kg}}/{\mathrm{m}}^{3}$ ; $S$ represents the windward area of the MRBicopter, ${31}{\mathrm{\;{cm}}}^{2}$ ; and ${v}_{UAV}$ is the flight speed.
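Eqs. (21)-(26) combine into a compact wind-and-drag model. The sketch below uses illustrative wind parameters (only $C = 0.31$, $\rho = 1.29\,\mathrm{kg/m^3}$ and the windward area come from the paper, with $S$ converted to $\mathrm{m^2}$); the random factor $n \in [-10, 10]$ follows the description of Eq. (24):

```python
import math
import random

def wind_speed(t, x, delta=3.0, V_m=2.0, d_m=5.0,
               t1=0.0, t2=10.0, V_grad_max=1.5,
               V_rand_max=0.2, alpha=1.0, beta=0.5):
    """Total ambient wind speed V_F of Eq. (25); t is time [s],
    x the gust travel distance [m]. Parameter values are illustrative."""
    V_f1 = delta                                   # constant wind, Eq. (21)
    if x < 0:                                      # gust, Eq. (22)
        V_f2 = 0.0
    elif x <= d_m:
        V_f2 = V_m / 2 * (1 - math.cos(math.pi * x / d_m))
    else:
        V_f2 = V_m
    # gradient wind, Eq. (23), clamped outside [t1, t2]
    frac = min(max((t - t1) / (t2 - t1), 0.0), 1.0)
    V_f3 = frac * V_grad_max
    n = random.uniform(-10, 10)                    # random factor, Eq. (24)
    V_f4 = V_rand_max * n * math.cos(alpha * t + beta)
    return V_f1 + V_f2 + V_f3 + V_f4

def drag_force(V_F, v_uav, C=0.31, rho=1.29, S=31e-4):
    """Air drag of Eq. (26); S = 31 cm^2 expressed in m^2."""
    return 0.5 * C * rho * S * (V_F + v_uav) ** 2
```

With the headwind convention of Eq. (26), the relative airspeed is the sum of wind speed and flight speed, so drag grows quadratically with either.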
+
+
+Fig.7: Simulation of MRBicopter hover under ambient wind interference, (a) MRBicopter single module; (b) MRBicopter assembly.
+
+§ B. SIMULATION
+
+Fig. 7 shows the simulated three-axis attitude angles of the two MRBicopter structures hovering under ambient wind interference. The blue, red, and green lines represent the roll, pitch, and yaw angle tracking curves, respectively.
+
+§ 1) SUBMODULE
+
+This experiment is a hover simulation of the MRBicopter submodule under ambient wind interference. The average ambient wind speed is set to ${10.5}\mathrm{\;m}/\mathrm{s}$ . The results are shown in Fig.7(a): the hover attitude angle oscillation of a single module does not exceed ${0.05}\,\mathrm{rad}$ , which meets the design requirements.
+
+§ 2) ASSEMBLY
+
+This experiment is a hover simulation of the MRBicopter assembly under ambient wind interference. The results are shown in Fig.7(b): an instantaneous oscillation of more than ${0.4}\,\mathrm{rad}$ occurs in the pitch and roll angles of the assembly at ${0.3}\mathrm{\;s}$ , the adjustment is completed within ${0.2}\mathrm{\;s}$ , and the subsequent oscillation amplitude does not exceed ${0.1}\,\mathrm{rad}$ . This shows that the assembly controller strongly suppresses environmental wind interference.
+
+§ C. GROUND EXPERIMENT
+
+To ensure test safety, the experiment was carried out on an indoor aircraft test platform, with a 1/6HP650 pneumatic industrial fan as the ambient wind source. The MRBicopter flight control module uses an STM32F427VIT6 as the main processor; the power supply is a LiPo battery (4S1P: 14.8 V, 3000 mAh); the docking module receives control signals over ZigBee serial communication and converts them into PWM signals to switch the relay on and off. A ${2.4}\,\mathrm{GHz}$ 14-channel communication module is used for signal transmission and reception. The experimental results are shown in Fig.8.
+
+
+Fig.8: MRBicopter ground experiment under ambient wind interference.
+
+§ 1) SUBMODULE EXPERIMENT
+
+Two MRBicopter submodules were built, and one was selected for the experiment. The results are shown in Fig.8(a): under wind interference, the average oscillation amplitude of the submodule's pitch and roll angles is $\pm {4.98}^{ \circ }$ and that of the yaw angle is $\pm {7.91}^{ \circ }$ , which meets the stability requirements.
+
+§ 2) ASSEMBLY EXPERIMENT
+
+The MRBicopter assembly is composed of two submodules. The results are shown in Fig.8(b): under wind interference, the average oscillation amplitude of the assembly's pitch and roll angles is $\pm {5.12}^{ \circ }$ , and that of the yaw angle is $\pm {7.33}^{ \circ }$ , which meets the stability requirements.
+
+§ VI. CONCLUSION
+
+In this paper, a modular and reconfigurable multi-UAV platform, MRBicopter, is proposed. Its transverse twin-rotor submodules achieve structural reconstruction through the electromagnet combination docking mechanism and realize different flight states by changing the motor speeds and tilt angles, meeting the needs of different tasks. To further improve the controllability of MRBicopter and expand its application fields, future work will address the following aspects:
+
+1) A fuzzy PID control algorithm will be introduced to further improve the interference compensation capability of MRBicopter and the stability of the assembly in flight.
+
+2) Structurally, additional sensing and computing units, such as a LiDAR and an onboard NUC computer, will be mounted on the UAV to expand the application scenarios of MRBicopter.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/Cox7GQmwAI/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/Cox7GQmwAI/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..1006f4b718cc35557fa211c012080b6cc54ca896
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/Cox7GQmwAI/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,483 @@
+# Dynamical analysis of rumor propagation model considering media refutation and individual refutation*
+
+${1}^{\text{st }}$ Wenqi Pan
+
+College of Marine Electrical Engineering
+
+Dalian Maritime University
+
+Dalian, China
+
+panwenqi07@163.com
+
+${2}^{\text{nd }}$ Li-Ying Hao*
+
+College of Marine Electrical Engineering
+
+Dalian Maritime University
+
+Dalian, China
+
+haoliying_0305@163.com
+
+Abstract—Refutation significantly impacts the spread of rumors. Common methods of refuting rumors include media intervention and individual efforts. While many scholars have explored these factors separately, few studies have examined both simultaneously. We propose a novel two-tier network model of rumor spread that integrates the influence of both media refutation and individual refutation on the propagation process. We demonstrate the existence and stability of the equilibrium points of the model. Theoretical analysis shows that authoritative media refutations exert a broader and more substantial influence on rumor dissemination than individual refutations.
+
+Index Terms-rumor propagation, rumor refuting medias, rumor refuters, stability
+
+## I. INTRODUCTION
+
+A rumor is speech fabricated without a factual basis, created with a certain purpose and spread by various means. With the exponential growth of technology and the widespread adoption of internet-based social networks, misinformation and harmful rumors can swiftly propagate across online platforms, threatening social cohesion and stability and disrupting people's daily lives and productive activities. Examples include the panic buying of salt caused by the Fukushima Daiichi nuclear disaster [1] and the rumor that SHL-C could prevent COVID-19, which greatly harmed the public psychologically and physically and seriously disturbed the normal order of society.
+
+The propagation of rumors has attracted the attention of many scholars. Some compared the dissemination of rumors to the spread of infectious diseases in humans and applied infectious disease models to rumor spreading [2]-[5]. Considering the influence of different propagation mechanisms, many scholars have studied cross-propagation mechanisms [7] and education mechanisms [6]. Komi [8] established a rumor propagation model based on population education and a forgetting mechanism, and found that educated ignorant people are less likely to become spreaders, and more likely to become suppressors, than uneducated ignorant people.
+
+Many scholars have also considered the influence of different intervention methods [9]-[11]. Zhu et al. [14] proposed rumor propagation models in homogeneous and heterogeneous networks and comprehensively studied the influence of a forced silence function, time delay, and network topology on rumor propagation in social networks. The influence of time delay on the propagation process has also been studied extensively [15]-[18]. Cheng et al. [21] established an improved ${XY} - {ISR}$ rumor propagation model based on an interactive system, comprehensively discussed the influence of different delays on rumor propagation, and further proposed control strategies such as deleting posts, popular science education, and immunization.
+
+With the increasing complexity of the network environment, some scholars have comprehensively considered the influence of various factors on rumor propagation in complex networks [22]-[24]. Considering the reaction of ignorant individuals when first hearing a rumor, Huo et al. [25] divided the individuals in the network into four groups, the ignorant, the trusting, the spreaders and the uninterested, and proposed an ${SIbInIu}$ rumor propagation model in complex networks. Theoretical analysis and simulation results show that the loss rate and suppression rate negatively affect the final scale of rumor spread.
+
+In the existing literature, it is uncommon to comprehensively consider the impact of both media refutation and individual refutation in a two-tier network rumor propagation model. Based on realistic assumptions, we believe that refutation combining the two is more effective than either alone. This paper presents a dynamical analysis of rumor propagation considering the refutation effect of both factors.
+
+The rest of this paper is organized as follows. Section II describes a two-tier network rumor propagation model that considers both rumor-refuting media and rumor-refuting individuals. In Section III, we discuss the existence and stability conditions of the equilibrium points. Finally, Section IV confirms the feasibility of the theoretical results through numerical simulations.
+
+---
+
+This work was funded by the National Natural Science Foundation of China (51939001, 52171292), Dalian Outstanding Young Talents Program (2022RJ05).
+
+---
+
+## II. TWO-TIER NETWORK RUMOR PROPAGATION MODEL
+
+In the two-tier rumor propagation model constructed in this paper, the media network layer consists of $M$ media websites, and the personal friendship network layer consists of $N$ individual nodes.
+
+In the network layer of media websites, media can be divided into three states: vulnerable media without rumor information (represented by $X$ ), affected media with rumor information (represented by $Y$ ) and rumor refuting media with rumor refuting information (represented by $Z$ ). When communicators visit the vulnerable media, they will release or leave rumors on the media network, so that the vulnerable media will be affected and become the affected media. When the rumor refuters visit the affected media, they will release or leave rumor refutation information on the media network to make the affected media become rumor refutation media.
+
+In the personal network layer, individuals are categorized into four groups: those who have never heard of the rumors (denoted by $S$ ), those who actively spread rumors (denoted by $I$ ), those who do not believe the rumors and disseminate refutation information (denoted by $D$ ), and those who neither believe nor propagate any information (denoted by $R$ ). During network-node interaction, a disseminator who visits vulnerable media spreads rumor information on the media website, so the vulnerable media are infected and evolve into affected media. When an ignorant person visits the affected media, influenced by the rumor information, that person becomes a disseminator with a certain probability. Thus, rumors spread not only between people, but also between individuals and online media. The basic assumptions of this paper are as follows:
+
+Hypothesis 1: In the media network layer, considering that the media website has a certain registration rate and cancellation rate, the number of vulnerable media entering the communication system per unit time is ${\Lambda }_{1}$ . Moreover, there will be benign competition among the media. The three types of media websites $X, Y$ and $Z$ may move out of the communication system with a certain probability ${\mu }_{1}$ . When communicators visit vulnerable media and publish their own views and comments, the rate of conversion to affected media is $\lambda$ . When the rumor refuter visits the affected media and publishes rumor refutation information on it, the affected media will change into rumor refutation media with a certain probability $\eta$ .
+
+Hypothesis 2: In the personal interpersonal network layer, assume that the rate at which individuals who are unaware of rumors enter the communication system is ${\Lambda }_{2}$ . Those who question the rumor but neither spread rumor information nor disseminate refutation will transition to an immune state at a rate of ${\xi }_{2}$ . Individuals who initially spread rumors but later find the information untrue may become rumor disclaimers with probability $\delta$ . If these communicators lose interest in rumors and cease both rumor propagation and refutation, they will transition to an immune state with probability $\theta$ . Rumor disclaimers affected by the environment or who lose interest in refutation will also become immune with probability $\phi$ . Additionally, individual groups may exit the rumor spreading network due to migration at a rate ${\mu }_{2}$ .
+
+Hypothesis 3: In offline individual interactions, an ignorant person who contacts a disseminator becomes a disseminator at a certain rate $\beta$ ; an ignorant person who believes and propagates rumors after visiting the affected media becomes a disseminator at a certain rate $\alpha$ , consistent with system (1) and the parameter interpretation in Section IV. It is assumed that after an ignorant person is exposed to rumor information (whether through contact with people or through the media), they may realize that the rumor is untrue owing to their own experience or discernment. If an individual who is initially unaware of the rumors chooses to disseminate rumor-refutation information, they transition to the status of a rumor refuter at a rate of ${\xi }_{1}$ .
+
+Based on the above analysis, the rumor propagation process of ${XYZ} - {SIDR}$ model established in this paper is shown in Fig. 1.
+
+The meanings of the symbols in Fig. 1 are given in Table I.
+
+TABLE I
+
+DESCRIPTION OF PARAMETERS IN THE MODEL
+
+| $\mathbf{{Parameter}}$ | Description |
| --- | --- |
| ${\Lambda }_{1}$ | The number of susceptible media entering the communication system per unit time. |
| ${\Lambda }_{2}$ | The number of ignorant individuals entering the communication system per unit time. |
| $\lambda$ | The contact rate of susceptible media with spreaders. |
| $\eta$ | The probability of affected media becoming rumor-refuting media. |
| $\alpha$ | Rumor propagation rate under two-tier network interaction (ignorant individuals exposed to affected media). |
| $\beta$ | Rumor propagation rate of offline personal interaction (ignorant individuals contacting spreaders). |
| $\delta$ | The probability of spreading individuals becoming rumor-refuting individuals. |
| $\theta$ | The probability of spreading individuals becoming immune individuals. |
| ${\xi }_{1}$ | The rate of ignorant individuals becoming rumor-refuting individuals. |
| ${\xi }_{2}$ | The rate of ignorant individuals becoming immune individuals. |
| $\phi$ | The probability of rumor-refuting individuals becoming immune individuals. |
| ${\mu }_{1}$ | The rate at which media in the network move out of the propagation system. |
| ${\mu }_{2}$ | Migration rate of individuals in the personal friendship network layer. |
+
+Based on the above analysis, we construct the ${XYZ} - {SIDR}$ rumor propagation model as follows:
+
+$$
+\left\{ \begin{array}{l} {X}^{\prime } = {\Lambda }_{1} - {\lambda XI} - {\mu }_{1}X, \\ {Y}^{\prime } = {\lambda XI} - {\eta Y} - {\mu }_{1}Y, \\ {Z}^{\prime } = {\eta Y} - {\mu }_{1}Z, \\ {S}^{\prime } = {\Lambda }_{2} - {\alpha SY} - {\beta SI} - \left( {{\xi }_{1} + {\xi }_{2}}\right) \left( {I + Y}\right) S - {\mu }_{2}S, \\ {I}^{\prime } = {\alpha SY} + {\beta SI} - \left( {\theta + \delta }\right) I - {\mu }_{2}I, \\ {D}^{\prime } = {\xi }_{1}S\left( {I + Y}\right) + {\delta I} - {\phi D} - {\mu }_{2}D, \\ {R}^{\prime } = {\xi }_{2}S\left( {I + Y}\right) + {\theta I} + {\phi D} - {\mu }_{2}R, \end{array}\right. \tag{1}
+$$
+
+
+
+Fig 1. Schematic representation of the ${XYZ} - {SIDR}$ rumor spreading model
+
+Since the model represents the process of rumor propagation, all parameters are non-negative and the initial conditions satisfy:
+
+$$
+X\left( 0\right) = {X}_{0} \geq 0, Y\left( 0\right) = {Y}_{0} \geq 0, Z\left( 0\right) = {Z}_{0} \geq 0,
+$$
+
+$$
+S\left( 0\right) = {S}_{0} \geq 0, I\left( 0\right) = {I}_{0} \geq 0, D\left( 0\right) = {D}_{0} \geq 0\text{,} \tag{2}
+$$
+
+$$
+R\left( 0\right) = {R}_{0} \geq 0\text{.}
+$$
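System (1) can also be integrated numerically. The sketch below is an illustrative aid rather than part of the paper; the fixed-step RK4 integrator and all function and dictionary names are our own assumptions, with parameter names mirroring Table I:

```python
def xyz_sidr_rhs(u, p):
    """Right-hand side of the XYZ-SIDR system (1); names mirror Table I."""
    X, Y, Z, S, I, D, R = u
    dX = p['Lam1'] - p['lam'] * X * I - p['mu1'] * X
    dY = p['lam'] * X * I - (p['eta'] + p['mu1']) * Y
    dZ = p['eta'] * Y - p['mu1'] * Z
    dS = (p['Lam2'] - p['alpha'] * S * Y - p['beta'] * S * I
          - (p['xi1'] + p['xi2']) * (I + Y) * S - p['mu2'] * S)
    dI = (p['alpha'] * S * Y + p['beta'] * S * I
          - (p['theta'] + p['delta'] + p['mu2']) * I)
    dD = p['xi1'] * S * (I + Y) + p['delta'] * I - (p['phi'] + p['mu2']) * D
    dR = p['xi2'] * S * (I + Y) + p['theta'] * I + p['phi'] * D - p['mu2'] * R
    return [dX, dY, dZ, dS, dI, dD, dR]

def rk4(rhs, u, p, h, steps):
    """Classical fixed-step fourth-order Runge-Kutta integrator."""
    for _ in range(steps):
        k1 = rhs(u, p)
        k2 = rhs([ui + 0.5 * h * ki for ui, ki in zip(u, k1)], p)
        k3 = rhs([ui + 0.5 * h * ki for ui, ki in zip(u, k2)], p)
        k4 = rhs([ui + h * ki for ui, ki in zip(u, k3)], p)
        u = [ui + h / 6.0 * (a + 2 * b + 2 * c + d)
             for ui, a, b, c, d in zip(u, k1, k2, k3, k4)]
    return u
```

With the parameter sets used later in Section IV, this integrator reproduces the qualitative behaviour of the simulations there: rumor extinction when ${R}_{0} < 1$ and persistence when ${R}_{0} > 1$ .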
+
+## III. MODEL ANALYSIS AND CALCULATION
+
+### A. The basic reproduction number ${R}_{0}$
+
+For system (1), the basic reproduction number ${R}_{0}$ is calculated as follows:
+
+Let $\mathcal{X} = {\left( I, Y, R, D, S, X, Z\right) }^{T}$ , equation (1) can be written as $\frac{d\mathcal{X}}{dt} = \mathcal{F}\left( \mathcal{X}\right) - \mathcal{V}\left( \mathcal{X}\right)$ .
+
+$$
+\mathcal{F}\left( \mathcal{X}\right) = \left( \begin{matrix} {\alpha SY} + {\beta SI} \\ {\lambda XI} \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{matrix}\right) , \tag{3}
+$$
+
+$$
+\mathcal{V}\left( \mathcal{X}\right) = \left( \begin{matrix} {\theta I} + {\delta I} + {\mu }_{2}I \\ {\eta Y} + {\mu }_{1}Y \\ - {\xi }_{2}{SI} - {\xi }_{2}{SY} - {\theta I} - {\phi D} + {\mu }_{2}R \\ - {\xi }_{1}{SI} - {\xi }_{1}{SY} - {\delta I} + {\phi D} + {\mu }_{2}D \\ {H}_{1} \\ - {\Lambda }_{1} + {\lambda XI} + {\mu }_{1}X \\ - {\eta Y} + {\mu }_{1}Z \end{matrix}\right) \tag{4}
+$$
+
+where ${H}_{1} = - {\Lambda }_{2} + {\alpha SY} + {\beta SI} + {\xi }_{1}{SI} + {\xi }_{1}{SY} + {\xi }_{2}{SI} +$ ${\xi }_{2}{SY} + {\mu }_{2}S$ .
+
+Therefore
+
+$$
+F = \left( \begin{matrix} \beta \frac{{\Lambda }_{2}}{{\mu }_{2}} & \alpha \frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 \\ \lambda \frac{{\Lambda }_{1}}{{\mu }_{1}} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{matrix}\right) , \tag{5}
+$$
+
+$$
+V = \left( \begin{matrix} \theta + \delta + {\mu }_{2} & 0 & 0 & 0 \\ 0 & \eta + {\mu }_{1} & 0 & 0 \\ - {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} - \theta & - {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} & {\mu }_{2} & - \phi \\ - {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} - \delta & - {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & \phi + {\mu }_{2} \end{matrix}\right) \tag{6}
+$$
+
+By calculation, we obtain
+
+$$
+F{V}^{-1} = \left( \begin{matrix} \frac{\beta {\Lambda }_{2}}{{\mu }_{2}\left( {\theta + \delta + {\mu }_{2}}\right) } & \frac{\alpha {\Lambda }_{2}}{{\mu }_{2}\left( {\eta + {\mu }_{1}}\right) } & 0 & 0 \\ \frac{\lambda {\Lambda }_{1}}{{\mu }_{1}\left( {\theta + \delta + {\mu }_{2}}\right) } & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{matrix}\right) \tag{7}
+$$
+
+Hence, according to reference [27], the basic reproduction number of system (1) is the spectral radius of matrix $F{V}^{-1}$ as follows:
+
+$$
+{R}_{0} = \frac{\beta {\Lambda }_{2}}{{\mu }_{2}\left( {\theta + \delta + {\mu }_{2}}\right) } \tag{8}
+$$
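As a quick numerical cross-check (a helper of our own, not from the paper), formula (8) can be evaluated directly; with the two parameter sets used in Section IV it yields ${R}_{0} \approx {0.0833}$ and ${R}_{0} = 3$ :

```python
def basic_reproduction_number(beta, Lam2, mu2, theta, delta):
    """Evaluate R0 = beta*Lam2 / (mu2*(theta + delta + mu2)), per equation (8)."""
    return beta * Lam2 / (mu2 * (theta + delta + mu2))
```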
+
+## B. Existence of equilibrium
+
+According to the dynamic equations of system (1), we can calculate the equilibria $E = \left( {X, Y, Z, S, I, D, R}\right)$ . It is easy to see that the equilibrium points of system (1) are the rumor-free equilibrium ${E}_{0} = \left( {\frac{{\Lambda }_{1}}{{\mu }_{1}},0,0,\frac{{\Lambda }_{2}}{{\mu }_{2}},0,0,0}\right)$ and the positive equilibrium ${E}^{ * } = \left( {{X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{I}^{ * },{D}^{ * },{R}^{ * }}\right)$ , and the rumor-free equilibrium point ${E}_{0}$ always exists.
+
+Theorem 1 The equilibrium point ${E}^{ * }\; =$ $\left( {{X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{I}^{ * },{D}^{ * },{R}^{ * }}\right)$ exists if ${R}_{0} > 1$ and $\left( {\theta + \delta + {\mu }_{2}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) > {\beta \lambda }{\Lambda }_{2}.$
+
+Proof The equilibrium points of system (1) satisfy:
+
+$$
+\left\{ \begin{array}{l} {\Lambda }_{1} - {\lambda XI} - {\mu }_{1}X = 0, \\ {\lambda XI} - {\eta Y} - {\mu }_{1}Y = 0, \\ {\eta Y} - {\mu }_{1}Z = 0, \\ {\Lambda }_{2} - {\alpha SY} - {\beta SI} - \left( {{\xi }_{1} + {\xi }_{2}}\right) \left( {I + Y}\right) S - {\mu }_{2}S = 0, \\ {\alpha SY} + {\beta SI} - \left( {\theta + \delta }\right) I - {\mu }_{2}I = 0, \\ {\xi }_{1}S\left( {I + Y}\right) + {\delta I} - {\phi D} - {\mu }_{2}D = 0, \\ {\xi }_{2}S\left( {I + Y}\right) + {\theta I} + {\phi D} - {\mu }_{2}R = 0. \end{array}\right. \tag{9}
+$$
+
+According to formula (9), ${X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{D}^{ * },{R}^{ * }$ can each be expressed in terms of ${I}^{ * }$ ; substituting these into the fifth equation gives
+
+$$
+a{I}^{2} + {bI} + c = 0 \tag{10}
+$$
+
+where
+
+$$
+a = \lambda \left( {\beta + {\xi }_{1}}\right) \left( {\eta + {\mu }_{1}}\right) \left( {\theta + \delta + {\mu }_{2}}\right) ,
+$$
+
+$$
+b = \left( {\theta + \delta + {\mu }_{2}}\right) \left( {\eta + {\mu }_{1}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) + \lambda {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) \left( {\theta + \delta + {\mu }_{2}}\right) - {\beta \lambda }{\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) ,
+$$
+
+$$
+c = \left( {\eta + {\mu }_{1}}\right) \left\lbrack {{\mu }_{2}\lambda \left( {\theta + \delta + {\mu }_{2}}\right) - {\mu }_{1}\beta {\Lambda }_{2}}\right\rbrack - {\alpha \lambda }{\Lambda }_{1}{\Lambda }_{2}. \tag{11}
+$$
+
+It can be obtained by calculation that
+
+$$
+\begin{aligned} \Delta = {b}^{2} - {4ac} = {} & {\left\lbrack \lambda {\Lambda }_{1}\left( \alpha + {\xi }_{2}\right) + \left( \eta + {\mu }_{1}\right) \left( {\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda \right) \right\rbrack }^{2}{\left( \theta + \delta + {\mu }_{2}\right) }^{2} + {\left\lbrack \beta \lambda {\Lambda }_{2}\left( \eta + {\mu }_{1}\right) \right\rbrack }^{2} \\ & + 4\alpha {\lambda }^{2}{\Lambda }_{1}{\Lambda }_{2}\left( {\beta + {\xi }_{1}}\right) \left( {\eta + {\mu }_{1}}\right) \left( {\theta + \delta + {\mu }_{2}}\right) \\ & - 2\beta \lambda {\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) \left( {\theta + \delta + {\mu }_{2}}\right) \left\lbrack {\lambda {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) + \left( {\eta + {\mu }_{1}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) }\right\rbrack \\ & - 4\lambda {\left( \eta + {\mu }_{1}\right) }^{2}\left( {\theta + \delta + {\mu }_{2}}\right) \left( {\beta + {\xi }_{1}}\right) \left\lbrack {{\mu }_{2}\lambda \left( {\theta + \delta + {\mu }_{2}}\right) - \beta {\mu }_{1}{\Lambda }_{2}}\right\rbrack \end{aligned} \tag{12}
+$$
+
+When ${R}_{0} > 1$ and $\left( {\theta + \delta + {\mu }_{2}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) > {\beta \lambda }{\Lambda }_{2}$ , discarding the negative root of (10) leaves the positive solution:
+
+$$
+{I}^{ * } = \frac{{\beta \lambda }{\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) - {H}_{2}\left( {\theta + \delta + {\mu }_{2}}\right) + \sqrt{\Delta }}{{2\lambda }\left( {\beta + {\xi }_{1}}\right) \left( {\eta + {\mu }_{1}}\right) \left( {\theta + \delta + {\mu }_{2}}\right) } \tag{13}
+$$
+
+where ${H}_{2} = \left\lbrack {\lambda {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) + \left( {\eta + {\mu }_{1}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) }\right\rbrack$ .
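The root formula (13) can be checked numerically against the quadratic (10). The sketch below is our own illustration (function names are assumptions); it transcribes the coefficients from (11) and evaluates ${I}^{ * }$ by the quadratic formula:

```python
import math

def coefficients(p):
    """Coefficients a, b, c of the quadratic (10), transcribed from (11)."""
    A = p['eta'] + p['mu1']                 # eta + mu1
    B = p['theta'] + p['delta'] + p['mu2']  # theta + delta + mu2
    a = p['lam'] * (p['beta'] + p['xi1']) * A * B
    b = (B * A * (p['mu1'] * p['beta'] + p['mu1'] * p['xi1'] + p['mu2'] * p['lam'])
         + p['lam'] * p['Lam1'] * (p['alpha'] + p['xi2']) * B
         - p['beta'] * p['lam'] * p['Lam2'] * A)
    c = (A * (p['mu2'] * p['lam'] * B - p['mu1'] * p['beta'] * p['Lam2'])
         - p['alpha'] * p['lam'] * p['Lam1'] * p['Lam2'])
    return a, b, c

def istar(p):
    """Positive root of a*I^2 + b*I + c = 0, as in formula (13)."""
    a, b, c = coefficients(p)
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
```

For any parameter set with a positive discriminant, the returned value satisfies the quadratic (10) to machine precision by construction.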
+
+Therefore ${E}^{ * } = \left( {{X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{I}^{ * },{D}^{ * },{R}^{ * }}\right)$ , where
+
+$$
+{X}^{ * } = \frac{{\Lambda }_{1}}{\lambda {I}^{ * } + {\mu }_{1}}, \tag{14}
+$$
+
+$$
+{Y}^{ * } = \frac{\lambda {\Lambda }_{1}{I}^{ * }}{\left( {\eta + {\mu }_{1}}\right) \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) }, \tag{15}
+$$
+
+$$
+{Z}^{ * } = \frac{{\lambda \eta }{\Lambda }_{1}{I}^{ * }}{{\mu }_{1}\left( {\eta + {\mu }_{1}}\right) \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) }, \tag{16}
+$$
+
+$$
+{S}^{ * } = \frac{{\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) }{T}, \tag{17}
+$$
+
+$$
+{D}^{ * } = \frac{\lambda {\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) {I}^{*2} + \left\lbrack {\lambda {\Lambda }_{1}{\Lambda }_{2} + {\mu }_{1}\left( {\eta + {\mu }_{1}}\right) }\right\rbrack {I}^{ * }}{\left( {\phi + {\mu }_{2}}\right) T}, \tag{18}
+$$
+
+$$
+{R}^{ * } = \frac{{\xi }_{2}{\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) {H}_{3} + \theta {H}_{4}}{\left( {{\mu }_{2} - \phi }\right) {H}_{4}} \tag{19}
+$$
+
+where ${H}_{3} = \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) \left\lbrack {\lambda {\Lambda }_{1} + \left( {\eta + {\mu }_{1}}\right) \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) }\right\rbrack ,{H}_{4} =$ $\lambda \left( {\beta + {\xi }_{1}}\right) \left( {\eta + {\mu }_{1}}\right) {I}^{*2} + \left\lbrack {\lambda {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) + \left( {\eta + {\mu }_{1}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + }\right. }\right.$ $\left. \left. {{\mu }_{2}\lambda }\right) \right\rbrack {I}^{ * } + {\mu }_{2}\lambda \left( {\eta + {\mu }_{1}}\right)$ .
+
+## C. Stability of equilibrium
+
+Theorem 2 The rumor-free equilibrium point ${E}_{0} = \left( {\frac{{\Lambda }_{1}}{{\mu }_{1}},0,0,\frac{{\Lambda }_{2}}{{\mu }_{2}},0,0,0}\right)$ is locally asymptotically stable if ${R}_{0} < 1$ and unstable if ${R}_{0} > 1$ .
+
+Proof The Jacobian matrix of system (1) at
+
+${E}_{0} = \left( {\frac{{\Lambda }_{1}}{{\mu }_{1}},0,0,\frac{{\Lambda }_{2}}{{\mu }_{2}},0,0,0}\right)$ is
+
+$J\left( {E}_{0}\right) =$
+
+$$
+\left( \begin{matrix} - {\mu }_{1} & 0 & 0 & 0 & - \lambda \frac{{\Lambda }_{1}}{{\mu }_{1}} & 0 & 0 \\ 0 & - \eta - {\mu }_{1} & 0 & 0 & \lambda \frac{{\Lambda }_{1}}{{\mu }_{1}} & 0 & 0 \\ 0 & \eta & - {\mu }_{1} & 0 & 0 & 0 & 0 \\ 0 & {H}_{5} & 0 & - {\mu }_{2} & {H}_{6} & 0 & 0 \\ 0 & \alpha \frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {H}_{7} & 0 & 0 \\ 0 & {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} + \theta & - {\mu }_{2} & - {\mu }_{2} \\ 0 & {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} + \delta & 0 & {H}_{8} \end{matrix}\right)
+$$
+
+where ${H}_{5} = - \left( {\alpha + {\xi }_{1} + {\xi }_{2}}\right) \frac{{\Lambda }_{2}}{{\mu }_{2}},{H}_{6} = - \left( {\beta + {\xi }_{1} + {\xi }_{2}}\right) \frac{{\Lambda }_{2}}{{\mu }_{2}}$ ,
+
+${H}_{7} = \beta \frac{{\Lambda }_{2}}{{\mu }_{2}} - \left( {\theta + \delta + {\mu }_{2}}\right) ,{H}_{8} = - \left( {\phi + {\mu }_{2}}\right) .$
+
+The characteristic equation of matrix $J\left( {E}_{0}\right)$ is
+
+$\left| {J\left( {E}_{0}\right) - {hE}}\right| =$
+
+$$
+\left| \begin{matrix} - {\mu }_{1} - h & 0 & 0 & 0 & - \lambda \frac{{\Lambda }_{1}}{{\mu }_{1}} & 0 & 0 \\ 0 & - \eta - {\mu }_{1} - h & 0 & 0 & \lambda \frac{{\Lambda }_{1}}{{\mu }_{1}} & 0 & 0 \\ 0 & \eta & - {\mu }_{1} - h & 0 & 0 & 0 & 0 \\ 0 & {H}_{5} & 0 & - {\mu }_{2} - h & {H}_{6} & 0 & 0 \\ 0 & \alpha \frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {H}_{7} - h & 0 & 0 \\ 0 & {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} + \theta & - {\mu }_{2} - h & - {\mu }_{2} \\ 0 & {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} + \delta & 0 & - \left( {\phi + {\mu }_{2}}\right) - h \end{matrix}\right|
+$$
+
+$$
+= {\left( {\mu }_{1} + h\right) }^{2}{\left( {\mu }_{2} + h\right) }^{2}\left( {\phi + {\mu }_{2} + h}\right) \left( {\eta + {\mu }_{1} + h}\right) \left\lbrack {\beta \frac{{\Lambda }_{2}}{{\mu }_{2}} - \left( {\theta + \delta + {\mu }_{2}}\right) - h}\right\rbrack = 0,
+$$
+
+where ${H}_{5},{H}_{6}$ and ${H}_{7}$ are as defined above.
+
+Therefore, the characteristic roots of the characteristic equation of $J\left( {E}_{0}\right)$ are:
+
+$$
+{h}_{01} = - {\mu }_{1} < 0,{h}_{02} = - {\mu }_{2} < 0,{h}_{03} = - \left( {\phi + {\mu }_{2}}\right) < 0,
+$$
+
+$$
+{h}_{04} = - \left( {\eta + {\mu }_{1}}\right) < 0,{h}_{05} = \left( {\theta + \delta + {\mu }_{2}}\right) \left( {{R}_{0} - 1}\right) , \tag{20}
+$$
+
+and ${h}_{05} < 0$ when ${R}_{0} < 1$ .
+
+According to Routh-Hurwitz stability criterion, the equilibrium point
+
+${E}_{0} = \left( {\frac{{\Lambda }_{1}}{{\mu }_{1}},0,0,\frac{{\Lambda }_{2}}{{\mu }_{2}},0,0,0}\right)$ is locally asymptotically stable if ${R}_{0} < 1$ .
+
+And the equilibrium point ${E}_{0} = \left( {\frac{{\Lambda }_{1}}{{\mu }_{1}},0,0,\frac{{\Lambda }_{2}}{{\mu }_{2}},0,0,0}\right)$ is unstable if ${R}_{0} > 1$ .
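The conclusion of Theorem 2 can also be cross-checked numerically. The sketch below is our own aid and assumes NumPy; the Jacobian entries are derived directly from system (1) at ${E}_{0}$ rather than transcribed from the printed matrix:

```python
import numpy as np

def jacobian_at_E0(p):
    """Jacobian of system (1) at E0 = (Lam1/mu1, 0, 0, Lam2/mu2, 0, 0, 0)."""
    X0 = p['Lam1'] / p['mu1']
    S0 = p['Lam2'] / p['mu2']
    a, b, x1, x2 = p['alpha'], p['beta'], p['xi1'], p['xi2']
    J = np.zeros((7, 7))
    # State order: X, Y, Z, S, I, D, R.
    J[0, 0] = -p['mu1'];              J[0, 4] = -p['lam'] * X0
    J[1, 1] = -(p['eta'] + p['mu1']); J[1, 4] = p['lam'] * X0
    J[2, 1] = p['eta'];               J[2, 2] = -p['mu1']
    J[3, 1] = -(a + x1 + x2) * S0;    J[3, 3] = -p['mu2']
    J[3, 4] = -(b + x1 + x2) * S0
    J[4, 1] = a * S0
    J[4, 4] = b * S0 - (p['theta'] + p['delta'] + p['mu2'])
    J[5, 1] = x1 * S0; J[5, 4] = x1 * S0 + p['delta']
    J[5, 5] = -(p['phi'] + p['mu2'])
    J[6, 1] = x2 * S0; J[6, 4] = x2 * S0 + p['theta']
    J[6, 5] = p['phi']; J[6, 6] = -p['mu2']
    return J
```

With the first parameter set of Section IV (for which ${R}_{0} < 1$ ), all eigenvalues of this matrix have negative real parts, in agreement with the local stability of ${E}_{0}$ .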
+
+Theorem 3 The equilibrium point ${E}^{ * }\; =$ $\left( {{X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{I}^{ * },{D}^{ * },{R}^{ * }}\right)$ is locally asymptotically stable if ${R}_{0} > 1$ and $\beta {\Lambda }_{2} < {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) \left( {\theta + \delta + {\mu }_{2}}\right)$ , otherwise, the equilibrium point ${E}^{ * }$ is unstable.
+
+Proof The Jacobian matrix at ${E}^{ * } =$ $\left( {{X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{I}^{ * },{D}^{ * },{R}^{ * }}\right)$ is
+
+$J\left( {E}^{ * }\right) =$
+
+$$
+\left( \begin{matrix} {A}_{1} & 0 & 0 & 0 & - \lambda {X}^{ * } & 0 & 0 \\ \lambda {I}^{ * } & {A}_{2} & 0 & 0 & \lambda {X}^{ * } & 0 & 0 \\ 0 & \eta & - {\mu }_{1} & 0 & 0 & 0 & 0 \\ 0 & {A}_{3} & 0 & {A}_{4} & {A}_{8} & 0 & 0 \\ 0 & \alpha {S}^{ * } & 0 & {A}_{5} & {A}_{9} & 0 & 0 \\ 0 & {\xi }_{2}{S}^{ * } & 0 & {A}_{6} & {\xi }_{2}{S}^{ * } + \theta & - {\mu }_{2} & - {\mu }_{2} \\ 0 & {\xi }_{1}{S}^{ * } & 0 & {A}_{7} & {\xi }_{1}{S}^{ * } + \delta & 0 & {A}_{10} \end{matrix}\right)
+$$
+
+where ${A}_{1} = - \lambda {I}^{ * } - {\mu }_{1},{A}_{2} = - \eta - {\mu }_{1},{A}_{3} = - \left( {\alpha + {\xi }_{1} + {\xi }_{2}}\right) {S}^{ * },{A}_{4} = - \left\lbrack {\alpha {Y}^{ * } + \beta {I}^{ * } + \left( {{\xi }_{1} + {\xi }_{2}}\right) \left( {{I}^{ * } + {Y}^{ * }}\right) + {\mu }_{2}}\right\rbrack ,{A}_{5} = \alpha {Y}^{ * } + \beta {I}^{ * },{A}_{6} = {\xi }_{2}\left( {{I}^{ * } + {Y}^{ * }}\right) ,{A}_{7} = {\xi }_{1}\left( {{I}^{ * } + {Y}^{ * }}\right) ,{A}_{8} = - \left( {\beta + {\xi }_{1} + {\xi }_{2}}\right) {S}^{ * },{A}_{9} = \beta {S}^{ * } - \left( {\theta + \delta + {\mu }_{2}}\right) ,{A}_{10} = - \left( {\phi + {\mu }_{2}}\right)$ .
+
+The characteristic equation of matrix $J\left( {E}^{ * }\right)$ is
+
+$\left| {J\left( {E}^{ * }\right) - {hE}}\right| =$
+
+$$
+\left| \begin{matrix} {B}_{1} & 0 & 0 & 0 & - \lambda {X}^{ * } & 0 & 0 \\ \lambda {I}^{ * } & {B}_{2} & 0 & 0 & \lambda {X}^{ * } & 0 & 0 \\ 0 & \eta & - {\mu }_{1} - h & 0 & 0 & 0 & 0 \\ 0 & - \left( {\alpha + {\xi }_{1} + {\xi }_{2}}\right) {S}^{ * } & 0 & {B}_{3} & {B}_{7} & 0 & 0 \\ 0 & \alpha {S}^{ * } & 0 & {B}_{4} & {B}_{8} & 0 & 0 \\ 0 & {\xi }_{2}{S}^{ * } & 0 & {B}_{5} & {\xi }_{2}{S}^{ * } + \theta & {B}_{9} & - {\mu }_{2} \\ 0 & {\xi }_{1}{S}^{ * } & 0 & {B}_{6} & {\xi }_{1}{S}^{ * } + \delta & 0 & {B}_{10} \end{matrix}\right|
+$$
+
+where ${B}_{1} = - \lambda {I}^{ * } - {\mu }_{1} - h,{B}_{2} = - \eta - {\mu }_{1} - h,{B}_{3} = - \left\lbrack {\alpha {Y}^{ * } + \beta {I}^{ * } + \left( {{\xi }_{1} + {\xi }_{2}}\right) \left( {{I}^{ * } + {Y}^{ * }}\right) + {\mu }_{2}}\right\rbrack - h,{B}_{4} = \alpha {Y}^{ * } + \beta {I}^{ * },{B}_{5} = {\xi }_{2}\left( {{I}^{ * } + {Y}^{ * }}\right) ,{B}_{6} = {\xi }_{1}\left( {{I}^{ * } + {Y}^{ * }}\right) ,{B}_{7} = - \left( {\beta + {\xi }_{1} + {\xi }_{2}}\right) {S}^{ * },{B}_{8} = \beta {S}^{ * } - \left( {\theta + \delta + {\mu }_{2}}\right) - h,{B}_{9} = - {\mu }_{2} - h,{B}_{10} = - \left( {\phi + {\mu }_{2}}\right) - h$ .
+
+Thus, we can obtain
+
+$$
+\left| {J\left( {E}^{ * }\right) - {hE}}\right| = \left( {{\mu }_{1} + h}\right) \left( {{\mu }_{2} + h}\right) \left( {\phi + {\mu }_{2} + h}\right) \left( {\eta + {\mu }_{1} + h}\right) \left( {\lambda {I}^{ * } + {\mu }_{1} + h}\right) \left\lbrack {\beta {S}^{ * } - \left( {\theta + \delta + {\mu }_{2}}\right) - h}\right\rbrack G,
+$$
+
+where $G = - \left\lbrack {\alpha {Y}^{ * } + \beta {I}^{ * } + \left( {{\xi }_{1} + {\xi }_{2}}\right) \left( {{I}^{ * } + {Y}^{ * }}\right) + {\mu }_{2}}\right\rbrack - h$ .
+
+Therefore, the characteristic roots of the characteristic equation of $J\left( {E}^{ * }\right)$ are:
+
+$$
+{h}_{01} = - {\mu }_{1} < 0,{h}_{02} = - {\mu }_{2} < 0, \tag{21}
+$$
+
+$$
+{h}_{03} = - \left( {\phi + {\mu }_{2}}\right) < 0,{h}_{04} = - \left( {\eta + {\mu }_{1}}\right) < 0, \tag{22}
+$$
+
+$$
+{h}_{05} = - \left\lbrack {\alpha {Y}^{ * } + \beta {I}^{ * } + \left( {{\xi }_{1} + {\xi }_{2}}\right) \left( {{I}^{ * } + {Y}^{ * }}\right) + {\mu }_{2}}\right\rbrack < 0, \tag{23}
+$$
+
+$$
+{h}_{06} = \beta {S}^{ * } - \left( {\theta + \delta + {\mu }_{2}}\right) . \tag{24}
+$$
+
+Then, substituting ${S}^{ * }$ into ${h}_{06}$ gives
+
+${h}_{06} = \frac{\beta {\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) }{\lambda \left( {\beta + {\xi }_{1}}\right) \left( {\eta + {\mu }_{1}}\right) {I}^{*2} + {C}_{1} + {\mu }_{2}\lambda \left( {\eta + {\mu }_{1}}\right) } - \left( {\theta + \delta + {\mu }_{2}}\right) ,$
+
+where ${C}_{1} = \left\lbrack {\lambda {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) + \left( {\eta + {\mu }_{1}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) }\right\rbrack {I}^{ * }$ . Under the conditions of Theorem 3, ${h}_{06} < 0$ , so all characteristic roots are negative and ${E}^{ * }$ is locally asymptotically stable.
+
+## IV. NUMERICAL SIMULATION
+
+In this section, we assign reasonable values to the parameters of system (1) and verify the results of our theoretical analysis through numerical simulations. The parameter values are chosen partly by analogy with similar real-world examples and partly with reference to the relevant literature.
+
+Let ${\Lambda }_{1} = 1,{\Lambda }_{2} = 1,\lambda = {0.01},\eta = {0.3},\alpha = {0.01},\beta = {0.01},\theta = {0.2},\delta = {0.2},\phi = {0.15},{\xi }_{1} = {0.1},{\xi }_{2} = {0.1},{\mu }_{1} = {0.2},{\mu }_{2} = {0.2}$ . We calculate ${R}_{0} = {0.0833} < 1$ , so the rumor-free equilibrium point ${E}_{0}$ is stable.
+
+
+
+Fig 2. Stability of equilibrium point ${E}_{0}$ .
+
+Fig. 2 shows how the density of each subclass in the model changes over time when ${R}_{0} = {0.0833} < 1$ . At first, the numbers of unaffected media and ignorant individuals gradually decrease at similar rates and finally stabilize. Because inflow is limited and outflow is large, the numbers of affected media and spreaders gradually decrease at similar rates and finally reach 0. The numbers of rumor-refuting media and rumor refuters first increase with the numbers of affected media and spreaders, then gradually decrease over time and finally reach 0. The number of immune individuals increases with the numbers of spreaders and rumor refuters, with a growth rate that gradually slows and finally stabilizes. That is, the rumor disappears and the system reaches the stable rumor-free equilibrium.
+
+Let ${\Lambda }_{1} = 1,{\Lambda }_{2} = 1,\lambda = {0.2},\eta = {0.3},\alpha = {0.5},\beta = {0.6},\theta = {0.4},\delta = {0.4},\phi = {0.15},{\xi }_{1} = {0.2},{\xi }_{2} = {0.2},{\mu }_{1} = {0.2},{\mu }_{2} = {0.2}$ ; we calculate ${R}_{0} = 3 > 1$ , so the equilibrium point ${E}^{ * }$ is stable, as shown in Fig. 3.
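The endemic case can be reproduced with a simple fixed-step integrator. The sketch below is our own illustration (the forward-Euler scheme and variable names are assumptions; the parameters are those above): when ${R}_{0} = 3 > 1$ , the spreader density settles at a positive level rather than dying out.

```python
# Forward-Euler integration of system (1) with the R0 = 3 parameter set.
Lam1, Lam2, lam, eta = 1.0, 1.0, 0.2, 0.3
alpha, beta, theta, delta = 0.5, 0.6, 0.4, 0.4
phi, xi1, xi2, mu1, mu2 = 0.15, 0.2, 0.2, 0.2, 0.2

X, Y, Z, S, I, D, R = 4.0, 0.5, 0.0, 4.0, 0.5, 0.0, 0.0
h = 0.01
for _ in range(60000):  # integrate to t = 600
    dX = Lam1 - lam * X * I - mu1 * X
    dY = lam * X * I - (eta + mu1) * Y
    dZ = eta * Y - mu1 * Z
    dS = Lam2 - alpha * S * Y - beta * S * I - (xi1 + xi2) * (I + Y) * S - mu2 * S
    dI = alpha * S * Y + beta * S * I - (theta + delta + mu2) * I
    dD = xi1 * S * (I + Y) + delta * I - (phi + mu2) * D
    dR = xi2 * S * (I + Y) + theta * I + phi * D - mu2 * R
    X += h * dX; Y += h * dY; Z += h * dZ; S += h * dS
    I += h * dI; D += h * dD; R += h * dR
# At the end of the run, I remains bounded away from zero: the rumor persists.
```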
+
+
+
+Fig 3. Stability of equilibrium point ${E}^{ * }$ .
+
+In Fig. 3, considering the media network layer, due to the small number of new media entering the communication system and the transformation of some unaffected media into affected media, the number of unaffected media gradually decreases and tends to stabilize after a period of time. Originally, the number of affected media increased due to the transformation of some unaffected media into affected media. Over time, most of the affected media were transformed into rumor refuting media, so the number of affected media decreased and stabilized. As the affected media changed into rumor refuting media, the number of rumor refuting media increased and gradually stabilized.
+
+Fig. 3 illustrates that, within the individual interpersonal network layer, the number of ignorant individuals begins to decline. Initially, the low influx of new individuals and a fixed rate of departures contribute to this decrease. Additionally, some ignorant individuals transition to become communicators, while others become immune or rumor refuters. Consequently, the number of communicators increases as ignorant individuals transform into communicators. Over time, as communicators transition to immune individuals or rumor refuters, the number of communicators gradually decreases and eventually stabilizes. As more communicators and ignorant individuals become rumor refuters, the number of rumor refuters rises and stabilizes. Simultaneously, with some ignorant individuals, communicators, and rumor refuters becoming immune, the number of immune individuals significantly increases and gradually stabilizes. Ultimately, the model reaches a steady state, with each group's number stabilizing over time.
+
+Fig. 4 to Fig. 7 depict, for ${\Lambda }_{1} = 1,{\Lambda }_{2} = 1,\eta = {0.3},\theta = {0.4},\phi = {0.15},{\xi }_{1} = {0.2},{\xi }_{2} = {0.5},{\mu }_{1} = {0.2},{\mu }_{2} = {0.2}$ , the evolution of the densities of $X\left( t\right), Y\left( t\right)$ and $S\left( t\right)$ under different parameters.
+
+Fig. 4 and Fig. 5 describe the effect of parameter $\lambda$ on the density change of $X\left( t\right)$ and $Y\left( t\right)$ respectively. Parameter $\lambda$ represents the probability that the unaffected media will be transformed into the affected media. It can be seen from the figure that the parameter $\lambda$ has a negative correlation with the density of $X\left( t\right)$ and a positive correlation with the density of $Y\left( t\right)$ . That is, with the increase of the parameter $\lambda$ , the rate of transformation from unaffected media to affected media increases. The number of unaffected media decreases gradually, and the number of affected media increases gradually, accelerating the spread of rumors in the media network layer.
+
+
+
+Fig 4. Density of $X\left( t\right)$ under the parameter $\lambda$ .
+
+
+
+Fig 5. Density of $Y\left( t\right)$ under the parameter $\lambda$ .
+
+Fig. 6 and Fig. 7 describe the influence of parameters $\alpha$ and $\beta$ on the density of $S\left( t\right)$ , respectively. Parameter $\alpha$ represents the probability that an ignorant person becomes a spreader by accessing the affected media, and parameter $\beta$ represents the probability that an ignorant person becomes a spreader by contacting spreaders. The figures show that the density of $S\left( t\right)$ decreases as $\alpha$ and $\beta$ increase. That is, as the propagation rates of the individual network layer and of the two-tier network interaction increase, the number of ignorant individuals gradually decreases, which accelerates the spread of rumors in the two-tier network.
+
+
+
+Fig 6. Density of $S\left( t\right)$ under the parameter $\alpha$ .
+
+
+
+Fig 7. Density of $S\left( t\right)$ under the parameter $\beta$ .
+
+Fig. 8 and Fig. 9 describe, for ${\Lambda }_{1} = 1,{\Lambda }_{2} = 1,\eta = {0.3},\theta = {0.4},\phi = {0.15},{\xi }_{1} = {0.2},{\xi }_{2} = {0.5},{\mu }_{1} = {0.2},{\mu }_{2} = {0.2}$ , the evolution of the density of $I\left( t\right)$ under different parameters.
+
+Fig. 8 and Fig. 9 describe the influence of parameters $\alpha$ and $\beta$ on the density change of $I\left( t\right)$ respectively. Considering the meaning of parameters $\alpha$ and $\beta$ , it is easy to know that the values of parameters $\alpha$ and $\beta$ are positively correlated with the density of $I\left( t\right)$ . As can be seen from the figure, the density of $I\left( t\right)$ increases with the increase of parameters $\alpha$ and $\beta$ . That is, the increasing number of communicators promotes the expansion of the scale of communication, which is not conducive to the control of rumors.
+
+## V. CONCLUSION
+
+At present, many scholars have separately studied the influence of media refutation or individual refutation on the spread of rumors. We believe that considering these two effects jointly is more effective than considering either alone. This paper integrates both media refutation and individual refutation into the analysis, introduces a novel ${XYZ} - {SIDR}$ two-tier rumor propagation model, and demonstrates the existence and stability of its equilibrium points. The results show that this two-tier network model is more effective in controlling the spread of rumors.
+
+
+
+Fig. 8. Density of $I\left( t\right)$ under the parameter $\alpha$ .
+
+
+
+Fig. 9. Density of $I\left( t\right)$ under the parameter $\beta$ .
+
+Theoretical analysis indicates that integrating both media and individual rumor refutation exerts a broader and more significant impact on rumor propagation. We suggest strengthening the dissemination of refutation information through official media rather than relying solely on individuals to control the spread of rumors. These conclusions can help relevant departments formulate effective measures to control rumor spreading. On the other hand, the model established in this paper can also be applied by analogy to the study of infectious disease models.
+
+## REFERENCES
+
+[1] W. Jinling, J. Haijun, H. Cheng, Y. Zhiyong and L. Jiarong,"Stability and Hopf bifurcation analysis of multi-lingual rumor spreading model with nonlinear inhibition mechanism," Chaos, Solitons & Fractals, vol. 153, pp. 111464, December 2021.
+
+[2] L. Qiming, L. Tao and S. Meici, "The analysis of an SEIR rumor propagation model on heterogeneous network," Physica A: Statistical Mechanics and its Applications, vol. 469, pp. 372-380, March 2017.
+
+[3] H. Yuhan, P. Qiuhui, H. Wenbing and H. Mingfeng, "Rumor spreading model considering the proportion of wisemen in the crowd," Physica A: Statistical Mechanics and its Applications, vol. 505, pp. 1084-1094, September 2018.
+
+[4] W. Juan, L. Chao and X. Chengyi, "Improved centrality indicators to characterize the nodal spreading capability in complex networks," Applied Mathematics and Computation, vol. 334, pp. 388-400, October 2018.
+
+[5] K. Eismann, "Diffusion and persistence of false rumors in social media networks: implications of searchability on rumor self-correction on Twitter," Journal of Business Economics, vol. 91, pp. 1299-1329, February 2021.
+
+[6] W. Jinling, J. Haijun, M. Tianlong and H. Cheng, "Global dynamics of the multi-lingual SIR rumor spreading model with cross-transmitted mechanism," Chaos, Solitons & Fractals, vol. 126, pp. 148-157, September 2019.
+
+[7] D. Xuefan, L. Yijun, W. Chao, L. Ying and T. Daisheng, "A double-identity rumor spreading model," Physica A: Statistical Mechanics and its Applications, vol. 528, pp. 121479, August 2019.
+
+[8] K. Afassinou, "Analysis of the impact of education rate on the rumor spreading mechanism," Physica A: Statistical Mechanics and Its Applications, vol. 414, pp. 43-52, November 2014.
+
+[9] Z. Linhe, Y. Yang, G. Gui and Z. Zhengdi, "Modeling the dynamics of rumor diffusion over complex networks," Information Sciences, vol. 562, pp. 240-258, July 2021.
+
+[10] Z. Linhe and W. Bingxu, "Stability analysis of a SAIR rumor spreading model with control strategies in online social networks," Information Sciences, vol. 526, pp. 1-19, July 2020.
+
+[11] A. Abta, H. Laarabi, M. Rachik, H. T. Alaoui and S. Boutayeb, "Optimal control of a delayed rumor propagation model with saturated control functions and ${L}^{1}$ -type objectives," Social Network Analysis and Mining, vol. 10, August 2020.
+
+[12] X. Jiuping, T. Weiyao, Z. Yi and W. Fengjuan, "A dynamic dissemination model for recurring online public opinion," Nonlinear Dynamics, vol. 99, pp. 1269-1293, November 2019.
+
+[13] C. Yingying, H. Liangan and Z. Laijun, "Rumor spreading in complex networks under stochastic node activity," Physica A: Statistical Mechanics and its Applications, vol. 559, pp. 125061, December 2020.
+
+[14] Z. Linhe, L. Wenshan and Z. Zhengdi, "Delay differential equations modeling of rumor propagation in both homogeneous and heterogeneous networks with a forced silence function," Applied Mathematics and Computation, vol. 370, pp. 124925, April 2020.
+
+[15] Y. Fulian , Z. Xiaowei, S. Xueying, X. Xinyu, P. Yanyan and W. Jianhong, "Modeling and quantifying the influence of opinion involving opinion leaders on delayed information propagation dynamics," Applied Mathematics Letters, vol. 121, pp. 107356, November 2021.
+
+[16] Z. Hongyong and Z. Linhe, "Dynamic Analysis of a Reaction-Diffusion Rumor Propagation Model," International Journal of Bifurcation and Chaos, vol. 26, pp. 1650101, 2016.
+
+[17] Z. Linhe, W. Xuewei, A. Zhengdi and S. Shuling, "Global Stability and Bifurcation Analysis of a Rumor Propagation Model with Two Discrete Delays in Social Networks," International Journal of Bifurcation and Chaos, vol. 30, pp. 2050175, 2020.
+
+[18] Y. Shuzhen, Y. Zhiyong, J. Haijun and Y. Shuai, "The dynamics and control of 2I2SR rumor spreading models in multilingual online social networks," Information Sciences, vol. 581, pp. 18-41, December 2021.
+
+[19] M. Ghosh, S. Das and P. Das, "Dynamics and control of delayed rumor propagation through social networks," Journal of Applied Mathematics and Computing, vol. 68, pp. 1-30, November 2021.
+
+[20] Z. Linhe and H. Le, "Pattern formation in a reaction-diffusion rumor propagation system with Allee effect and time delay," Nonlinear Dynamics, vol. 107, pp. 3041-3063, January 2022.
+
+[21] C. Yingying, H. Liang'an and Z. Laijun, "Dynamical behaviors and control measures of rumor-spreading model in consideration of the infected media and time delay," Information Sciences, vol. 564, pp. 237- 253, July 2021.
+
+[22] L. Jiarong, J. Haijun, Y. Zhiyong and H. Cheng, "Dynamical analysis of rumor spreading model in homogeneous complex networks," Applied Mathematics and Computation, vol. 359, pp. 374-385, October 2019.
+
+[23] J. Wenjun, L. Yi, Z. Xiaoqin, Z. Juping and J. Zhen, "A rumor spreading pairwise model on weighted networks," Physica A: Statistical Mechanics and its Applications, vol. 585, pp. 126451, January 2022.
+
+[24] Y. Lan, L. Zhiwu and A. Giua, "Containment of rumor spread in complex social networks," Information Sciences, vol. 506, pp. 113-130, January 2020.
+
+[25] H. Liangan, D. Fan and C. Yingying, "Dynamic analysis of a ${SIbInIu}$ rumor spreading model in complex social network," Physica A: Statistical Mechanics and its Applications, vol. 523, pp. 924-932, June 2019.
+
+[26] P. van den Driessche and J. Watmough, "Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission," Mathematical Biosciences, vol. 180, pp. 29-48, December 2002.
+
+[27] J. M. Heffernan, R. J. Smith and L. M. Wahl, "Perspectives on the basic reproductive ratio," Journal of the Royal Society Interface, vol. 2, pp. 281-293, 2005.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/Cox7GQmwAI/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/Cox7GQmwAI/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..5656ff95e8e3c7eb084c4d89196b126939cd616f
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/Cox7GQmwAI/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,464 @@
+§ DYNAMICAL ANALYSIS OF RUMOR PROPAGATION MODEL CONSIDERING MEDIA REFUTATION AND INDIVIDUAL REFUTATION*
+
+${1}^{\text{ st }}$ Wenqi Pan
+
+College of Marine Electrical Engineering
+
+Dalian Maritime University
+
+Dalian, China
+
+panwenqi07@163.com
+
+${2}^{\text{ nd }}$ Li-Ying Hao*
+
+College of Marine Electrical Engineering
+
+Dalian Maritime University
+
+Dalian, China
+
+haoliying_0305@163.com
+
+Abstract-The factor of refutation significantly impacts the spread of rumors. Common methods of refuting rumors include media intervention and individual efforts. While many scholars have explored the effects of these factors separately, few studies have examined both simultaneously. We propose a novel two-tier network model of rumor spread that integrates the influence of both media and individual refutation on the rumor propagation process, and we demonstrate the existence and stability of the equilibrium points of the model. Theoretical analysis demonstrates that authoritative media refutations exert a broader and more substantial influence on rumor dissemination than individual refutations.
+
+Index Terms-rumor propagation, rumor-refuting media, rumor refuters, stability
+
+§ I. INTRODUCTION
+
+A rumor is speech fabricated without a factual basis, created with a certain purpose, and disseminated by various means. With the exponential growth of technology and the widespread adoption of internet-based social networks, misinformation and harmful rumors can swiftly propagate across online platforms, posing significant threats to social cohesion and stability and disrupting people's daily lives and productive activities. For example, the Fukushima Daiichi Nuclear Disaster triggered panic buying of salt [1], and a rumor that SHL-C could prevent COVID-19 caused great harm to the public's psychological and physical health and seriously disturbed the normal order of society.
+
+The propagation of rumors has attracted the attention of many scholars. Some scholars compared the spread of rumors to the propagation of infectious diseases in humans and applied infectious disease models to rumor spreading [2]-[5]. Considering the influence of different propagation mechanisms on rumor propagation, many scholars have studied the cross-propagation mechanism [7] and the education mechanism [6]. Afassinou [8] established a rumor propagation model based on population education and a forgetting mechanism, and found that educated ignorant people are less likely to be transformed into disseminators and more likely to be transformed into suppressors than uneducated ignorant people.
+
+At the same time, many scholars have considered the influence of different functional mechanisms [9]-[11] in the research process. Zhu et al. [14] proposed a rumor propagation model in homogeneous and heterogeneous networks and comprehensively studied the influence of a forced silence function, time delay, and network topology on rumor propagation in social networks. The influence of time delay on the propagation process has also been studied by many scholars [15]-[18]. Cheng et al. [21] established an improved ${XY} - {ISR}$ rumor propagation model on the basis of an interactive system, comprehensively discussed the influence of different delays on rumor propagation, and further proposed control strategies such as deleting posts, popular-science education, and immunization.
+
+As the network environment has become more complex, some scholars have comprehensively considered the influence of various factors on rumor propagation in complex networks [22]-[24]. Considering the reaction of the ignorant when hearing a rumor for the first time, Huo et al. [25] divided the individuals in the network into four groups: the ignorant, the trustworthy, the spreader, and the uninterested, and proposed the ${SIbInIu}$ rumor propagation model in complex networks. Theoretical analysis and simulation results show that the loss rate and suppression rate have a negative impact on the final scale of rumor spreading.
+
+In the existing literature, it is uncommon to consider the impact of media refutation and individual refutation together in a two-tier network rumor propagation model. Based on realistic assumptions, we believe that refutation is more effective when both factors are considered together than when either is considered alone. This paper presents a dynamic analysis of rumor propagation that accounts for the refutation effects of both factors.
+
+The rest of this paper is organized as follows. Section II describes a two-tier network rumor propagation model that considers both rumor-refuting media and rumor-refuter groups. Section III discusses the existence and stability conditions of the equilibrium points. Finally, the feasibility of the results presented in this paper is confirmed through numerical simulations.
+
+This work was funded by the National Natural Science Foundation of China (51939001, 52171292), Dalian Outstanding Young Talents Program (2022RJ05).
+
+§ II. TWO-TIER NETWORK RUMOR PROPAGATION MODEL
+
+In the two-tier rumor propagation model constructed in this paper, the media network layer is composed of $M$ media websites, and the personal friendship network layer is composed of $N$ individuals.
+
+In the media website network layer, media can be divided into three states: vulnerable media without rumor information (represented by $X$ ), affected media carrying rumor information (represented by $Y$ ), and rumor-refuting media carrying refutation information (represented by $Z$ ). When spreaders visit vulnerable media, they release or leave rumors on the media network, so that the vulnerable media become affected media. When rumor refuters visit affected media, they release or leave refutation information on the media network, turning the affected media into rumor-refuting media.
+
+In the personal network layer, individuals are categorized into four distinct groups: those who have never heard the rumors (denoted by $S$ ), those who actively spread rumors (denoted by $I$ ), those who do not believe the rumors but disseminate refutation information (denoted by $D$ ), and those who neither believe nor propagate any information (denoted by $R$ ). In the interaction of network nodes, after visiting vulnerable media, a disseminator spreads rumor information on the media website, so that the vulnerable media are infected and evolve into affected media. When an ignorant person visits affected media, influenced by the rumor information, the ignorant person becomes a disseminator with a certain probability. Thus, rumors spread not only between people but also between individuals and online media. The basic assumptions of this paper are as follows:
+
+Hypothesis 1: In the media network layer, considering that media websites have certain registration and cancellation rates, the number of vulnerable media entering the communication system per unit time is ${\Lambda }_{1}$ . Moreover, there is benign competition among the media, and the three types of media websites $X,Y$ and $Z$ may move out of the communication system with a certain probability ${\mu }_{1}$ . When spreaders visit vulnerable media and publish their own views and comments, the media convert to affected media at rate $\lambda$ . When rumor refuters visit affected media and publish refutation information on them, the affected media change into rumor-refuting media with a certain probability $\eta$ .
+
+Hypothesis 2: In the personal interpersonal network layer, assume that the rate at which individuals who are unaware of rumors enter the communication system is ${\Lambda }_{2}$ . Those who question the rumor but neither spread rumor information nor disseminate refutation transition to the immune state at a rate of ${\xi }_{2}$ . Individuals who initially spread rumors but later find the information untrue may become rumor refuters with probability $\delta$ . If these spreaders lose interest in rumors and cease both rumor propagation and refutation, they transition to the immune state with probability $\theta$ . Rumor refuters affected by the environment, or who lose interest in refutation, also become immune with probability $\phi$ . Additionally, individual groups may exit the rumor-spreading network due to migration at a rate ${\mu }_{2}$ .
+
+Hypothesis 3: In offline individual interactions, the ignorant become disseminators at a certain rate $\beta$ after contacting a disseminator. If ignorant persons believe and propagate rumors after visiting affected media, they become disseminators at a certain rate $\alpha$ ; these two routes correspond to the terms ${\beta SI}$ and ${\alpha SY}$ in the model below. It is assumed that after unknown persons contact rumor information (whether through contact with people or from the media), they may realize that the information is untrue owing to their own experience or discrimination ability. If an individual who is initially unaware of the rumors chooses to disseminate refutation information, they transition to the status of a rumor refuter at a rate of ${\xi }_{1}$ .
+
+Based on the above analysis, the rumor propagation process of the ${XYZ} - {SIDR}$ model established in this paper is shown in Fig. 1.
+
+The meanings of the symbols in Fig. 1 are listed in Table I.
+
+TABLE I
+
+DESCRIPTION OF PARAMETERS IN THE MODEL
+
+| Parameter | Description |
+| --- | --- |
+| ${\Lambda }_{1}$ | The number of susceptible media entering the communication system per unit time. |
+| ${\Lambda }_{2}$ | The number of ignorant individuals entering the communication system per unit time. |
+| $\lambda$ | The contact rate of susceptible media with spreaders. |
+| $\eta$ | The probability of affected media becoming rumor-refuting media. |
+| $\alpha$ | Rumor propagation rate under two-tier network interaction. |
+| $\beta$ | Rumor propagation rate of offline personal interaction. |
+| $\delta$ | The probability of spreading individuals becoming rumor-refuting individuals. |
+| $\theta$ | The probability of spreading individuals becoming immune individuals. |
+| ${\xi }_{1}$ | The rate of ignorant individuals becoming rumor-refuting individuals. |
+| ${\xi }_{2}$ | The rate of ignorant individuals becoming immune individuals. |
+| $\phi$ | The probability of rumor-refuting individuals becoming immune individuals. |
+| ${\mu }_{1}$ | The rate at which media move out of the propagation system. |
+| ${\mu }_{2}$ | Migration rate of individuals in the personal friendship network layer. |
+Based on the above analysis, we construct the ${XYZ} - {SIDR}$ rumor propagation model as follows:
+
+$$
+\left\{ \begin{array}{l} {X}^{\prime } = {\Lambda }_{1} - {\lambda XI} - {\mu }_{1}X, \\ {Y}^{\prime } = {\lambda XI} - {\eta Y} - {\mu }_{1}Y, \\ {Z}^{\prime } = {\eta Y} - {\mu }_{1}Z, \\ {S}^{\prime } = {\Lambda }_{2} - {\alpha SY} - {\beta SI} - \left( {{\xi }_{1} + {\xi }_{2}}\right) \left( {I + Y}\right) S - {\mu }_{2}S, \\ {I}^{\prime } = {\alpha SY} + {\beta SI} - \left( {\theta + \delta }\right) I - {\mu }_{2}I, \\ {D}^{\prime } = {\xi }_{1}S\left( {I + Y}\right) + {\delta I} - {\phi D} - {\mu }_{2}D, \\ {R}^{\prime } = {\xi }_{2}S\left( {I + Y}\right) + {\theta I} + {\phi D} - {\mu }_{2}R, \end{array}\right. \tag{1}
+$$
+
+
+Fig. 1. Schematic representation of the ${XYZ} - {SIDR}$ rumor spreading model
+
+Since the model represents the process of rumor propagation, all parameters involved are non-negative, and the initial conditions satisfy:
+
+$$
+X\left( 0\right) = {X}_{0} \geq 0,Y\left( 0\right) = {Y}_{0} \geq 0,Z\left( 0\right) = {Z}_{0} \geq 0,
+$$
+
+$$
+S\left( 0\right) = {S}_{0} \geq 0,I\left( 0\right) = {I}_{0} \geq 0,D\left( 0\right) = {D}_{0} \geq 0\text{ , } \tag{2}
+$$
+
+$$
+R\left( 0\right) = {R}_{0} \geq 0\text{ . }
+$$
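As a quick consistency check on system (1): summing the three media equations gives $\left( {X + Y + Z}\right)^{\prime} = {\Lambda }_{1} - {\mu }_{1}\left( {X + Y + Z}\right)$ , and summing the four individual equations gives $\left( {S + I + D + R}\right)^{\prime} = {\Lambda }_{2} - {\mu }_{2}\left( {S + I + D + R}\right)$ , so the layer totals obey closed linear equations, converge to ${\Lambda }_{1}/{\mu }_{1}$ and ${\Lambda }_{2}/{\mu }_{2}$ , and all trajectories stay bounded. The sketch below verifies these identities at random states; the parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative parameter values (assumptions, used only for this check).
L1, L2, lam, alpha, beta = 1.0, 1.0, 0.4, 0.2, 0.5
eta, theta, delta, phi = 0.3, 0.4, 0.1, 0.15
xi1, xi2, mu1, mu2 = 0.2, 0.5, 0.2, 0.2

def rhs(u):
    """Right-hand side of system (1), state u = (X, Y, Z, S, I, D, R)."""
    X, Y, Z, S, I, D, R = u
    return np.array([
        L1 - lam*X*I - mu1*X,
        lam*X*I - (eta + mu1)*Y,
        eta*Y - mu1*Z,
        L2 - alpha*S*Y - beta*S*I - (xi1 + xi2)*(I + Y)*S - mu2*S,
        alpha*S*Y + beta*S*I - (theta + delta + mu2)*I,
        xi1*S*(I + Y) + delta*I - (phi + mu2)*D,
        xi2*S*(I + Y) + theta*I + phi*D - mu2*R,
    ])

rng = np.random.default_rng(0)
for _ in range(100):
    u = rng.uniform(0.0, 5.0, size=7)
    du = rhs(u)
    # every interaction term cancels in the layer totals
    assert np.isclose(du[:3].sum(), L1 - mu1 * u[:3].sum())
    assert np.isclose(du[3:].sum(), L2 - mu2 * u[3:].sum())
print("layer-total identities hold")
```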
+
+§ III. MODEL ANALYSIS AND CALCULATION
+
+§ A. THE BASIC REPRODUCTION NUMBER ${R}_{0}$
+
+For system (1), the basic reproduction number ${R}_{0}$ is calculated as follows:
+
+Let $\mathcal{X} = {\left( I,Y,R,D,S,X,Z\right) }^{T}$ , equation (1) can be written as $\frac{d\mathcal{X}}{dt} = \mathcal{F}\left( \mathcal{X}\right) - \mathcal{V}\left( \mathcal{X}\right)$ .
+
+$$
+\mathcal{F}\left( \mathcal{X}\right) = \left( \begin{matrix} {\alpha SY} + {\beta SI} \\ {\lambda XI} \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{matrix}\right) , \tag{3}
+$$
+
+$$
+\mathcal{V}\left( \mathcal{X}\right) = \left( \begin{matrix} {\theta I} + {\delta I} + {\mu }_{2}I \\ {\eta Y} + {\mu }_{1}Y \\ - {\xi }_{2}{SI} - {\xi }_{2}{SY} - {\theta I} - {\phi D} + {\mu }_{2}R \\ - {\xi }_{1}{SI} - {\xi }_{1}{SY} - {\delta I} + {\phi D} + {\mu }_{2}D \\ {H}_{1} \\ - {\Lambda }_{1} + {\lambda XI} + {\mu }_{1}X \\ - {\eta Y} + {\mu }_{1}Z \end{matrix}\right) \tag{4}
+$$
+
+where ${H}_{1} = - {\Lambda }_{2} + {\alpha SY} + {\beta SI} + {\xi }_{1}{SI} + {\xi }_{1}{SY} + {\xi }_{2}{SI} +$ ${\xi }_{2}{SY} + {\mu }_{2}S$ .
+
+Therefore
+
+$$
+F = \left( \begin{matrix} \beta \frac{{\Lambda }_{2}}{{\mu }_{2}} & \alpha \frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 \\ \lambda \frac{{\Lambda }_{1}}{{\mu }_{1}} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{matrix}\right) , \tag{5}
+$$
+
+$$
+V = \left( \begin{matrix} \theta + \delta + {\mu }_{2} & 0 & 0 & 0 \\ 0 & \eta + {\mu }_{1} & 0 & 0 \\ - {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} - \theta & - {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} & {\mu }_{2} & - \phi \\ - {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} - \delta & - {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & \phi + {\mu }_{2} \end{matrix}\right) \tag{6}
+$$
+
+By calculation we can get
+
+$$
+F{V}^{-1} = \left( \begin{matrix} \frac{\beta {\Lambda }_{2}}{{\mu }_{2}\left( {\theta + \delta + {\mu }_{2}}\right) } & \frac{\alpha {\Lambda }_{2}}{{\mu }_{2}\left( {\eta + {\mu }_{1}}\right) } & 0 & 0 \\ \frac{\lambda {\Lambda }_{1}}{{\mu }_{1}\left( {\theta + \delta + {\mu }_{2}}\right) } & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{matrix}\right) \tag{7}
+$$
+
+Hence, according to reference [27], the basic reproduction number of system (1) is the spectral radius of matrix $F{V}^{-1}$ as follows:
+
+$$
+{R}_{0} = \frac{\beta {\Lambda }_{2}}{{\mu }_{2}\left( {\theta + \delta + {\mu }_{2}}\right) } \tag{8}
+$$
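The next-generation computation in (5)-(7) can be checked numerically. In the sketch below all parameter values are illustrative assumptions; the entries of $F{V}^{-1}$ obtained by direct matrix inversion match the closed forms in (7), and the $\left( {1,1}\right)$ entry is the expression reported as ${R}_{0}$ in (8).

```python
import numpy as np

# Illustrative parameter values (assumptions, for checking (5)-(7) only).
L1, L2, lam, alpha, beta = 1.0, 1.0, 0.4, 0.2, 0.5
eta, theta, delta, phi = 0.3, 0.4, 0.1, 0.15
xi1, xi2, mu1, mu2 = 0.2, 0.5, 0.2, 0.2

F = np.array([
    [beta*L2/mu2, alpha*L2/mu2, 0.0, 0.0],
    [lam*L1/mu1,  0.0,          0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])
V = np.array([
    [theta + delta + mu2, 0.0, 0.0, 0.0],
    [0.0, eta + mu1, 0.0, 0.0],
    [-xi2*L2/mu2 - theta, -xi2*L2/mu2, mu2, -phi],
    [-xi1*L2/mu2 - delta, -xi1*L2/mu2, 0.0, phi + mu2],
])
K = F @ np.linalg.inv(V)   # next-generation matrix F V^{-1}

# entry-by-entry comparison with the closed form (7)
assert np.isclose(K[0, 0], beta*L2/(mu2*(theta + delta + mu2)))
assert np.isclose(K[0, 1], alpha*L2/(mu2*(eta + mu1)))
assert np.isclose(K[1, 0], lam*L1/(mu1*(theta + delta + mu2)))
print("R0 per (8):", K[0, 0])
```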
+
+§ B. EXISTENCE OF EQUILIBRIUM
+
+According to the system dynamics equation (1), we can calculate the equilibrium $E = \left( {X,Y,Z,S,I,D,R}\right)$ . It is easy to see that system (1) admits the rumor-free equilibrium point ${E}_{0} = \left( {\frac{{\Lambda }_{1}}{{\mu }_{1}},0,0,\frac{{\Lambda }_{2}}{{\mu }_{2}},0,0,0}\right)$ , which always exists, and the rumor-prevailing equilibrium point ${E}^{ * } = \left( {{X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{I}^{ * },{D}^{ * },{R}^{ * }}\right)$ .
+
+Theorem 1 The equilibrium point ${E}^{ * }\; =$ $\left( {{X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{I}^{ * },{D}^{ * },{R}^{ * }}\right)$ exists if ${R}_{0} > 1$ and $\left( {\theta + \delta + {\mu }_{2}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) > {\beta \lambda }{\Lambda }_{2}.$
+
+Proof The equilibrium points of system (1) satisfy:
+
+$$
+\left\{ \begin{array}{l} {\Lambda }_{1} - {\lambda XI} - {\mu }_{1}X = 0, \\ {\lambda XI} - {\eta Y} - {\mu }_{1}Y = 0, \\ {\eta Y} - {\mu }_{1}Z = 0, \\ {\Lambda }_{2} - {\alpha SY} - {\beta SI} - \left( {{\xi }_{1} + {\xi }_{2}}\right) \left( {I + Y}\right) S - {\mu }_{2}S = 0, \\ {\alpha SY} + {\beta SI} - \left( {\theta + \delta }\right) I - {\mu }_{2}I = 0, \\ {\xi }_{1}S\left( {I + Y}\right) + {\delta I} - {\phi D} - {\mu }_{2}D = 0, \\ {\xi }_{2}S\left( {I + Y}\right) + {\theta I} + {\phi D} - {\mu }_{2}R = 0. \end{array}\right. \tag{9}
+$$
+
+According to formula (9), ${X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{D}^{ * },{R}^{ * }$ can each be expressed in terms of ${I}^{ * }$ ; substituting these expressions into the fifth equation gives
+
+$$
+a{I}^{2} + {bI} + c = 0 \tag{10}
+$$
+
+where
+
+$$
+\begin{aligned} a & = \lambda \left( {\beta + {\xi }_{1}}\right) \left( {\eta + {\mu }_{1}}\right) \left( {\theta + \delta + {\mu }_{2}}\right) , \\ b & = \left( {\theta + \delta + {\mu }_{2}}\right) \left\lbrack {\left( {\eta + {\mu }_{1}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) + \lambda {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) }\right\rbrack - {\beta \lambda }{\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) , \\ c & = \left( {\eta + {\mu }_{1}}\right) \left\lbrack {{\mu }_{2}\lambda \left( {\theta + \delta + {\mu }_{2}}\right) - {\mu }_{1}\beta {\Lambda }_{2}}\right\rbrack - {\alpha \lambda }{\Lambda }_{1}{\Lambda }_{2}. \end{aligned} \tag{11}
+$$
+
+It can be obtained by calculation that
+
+$$
+\begin{aligned} \Delta = {b}^{2} - {4ac} = {} & {\left\lbrack \lambda {\Lambda }_{1}\left( \alpha + {\xi }_{2}\right) + \left( \eta + {\mu }_{1}\right) \left( {\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda \right) \right\rbrack }^{2}{\left( \theta + \delta + {\mu }_{2}\right) }^{2} + {\left\lbrack \beta \lambda {\Lambda }_{2}\left( \eta + {\mu }_{1}\right) \right\rbrack }^{2} \\ & + {4\alpha }{\lambda }^{2}{\Lambda }_{1}{\Lambda }_{2}\left( {\beta + {\xi }_{1}}\right) \left( {\eta + {\mu }_{1}}\right) \left( {\theta + \delta + {\mu }_{2}}\right) \\ & - {2\beta \lambda }{\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) \left( {\theta + \delta + {\mu }_{2}}\right) \left\lbrack {\lambda {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) + \left( {\eta + {\mu }_{1}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) }\right\rbrack \\ & - {4\lambda }\left( {\beta + {\xi }_{1}}\right) {\left( \eta + {\mu }_{1}\right) }^{2}\left( {\theta + \delta + {\mu }_{2}}\right) \left\lbrack {{\mu }_{2}\lambda \left( {\theta + \delta + {\mu }_{2}}\right) - \beta {\mu }_{1}{\Lambda }_{2}}\right\rbrack \end{aligned} \tag{12}
+$$
+
+According to the discriminant, when ${R}_{0} > 1$ and $\left( {\theta + \delta + {\mu }_{2}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) > {\beta \lambda }{\Lambda }_{2}$ , equation (10) admits a unique positive root; discarding the negative root gives
+
+$$
+{I}^{ * } = \frac{{\beta \lambda }{\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) - {H}_{2}\left( {\theta + \delta + {\mu }_{2}}\right) + \sqrt{\Delta }}{{2\lambda }\left( {\beta + {\xi }_{1}}\right) \left( {\eta + {\mu }_{1}}\right) \left( {\theta + \delta + {\mu }_{2}}\right) } \tag{13}
+$$
+
+where ${H}_{2} = \left\lbrack {\lambda {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) + \left( {\eta + {\mu }_{1}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) }\right\rbrack$ .
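A numerical sanity check of (10)-(13): with the coefficients from (11), the positive root of the quadratic gives ${I}^{ * }$ . The parameter values below are illustrative assumptions chosen so that ${R}_{0} > 1$ and the condition of Theorem 1 holds.

```python
import numpy as np

# Illustrative parameters (assumptions) with R0 > 1 and
# (theta+delta+mu2)(mu1*beta + mu1*xi1 + mu2*lam) > beta*lam*L2.
L1, L2, lam, alpha, beta = 1.0, 1.0, 0.1, 0.2, 0.5
eta, theta, delta, phi = 0.3, 0.4, 0.1, 0.15
xi1, xi2, mu1, mu2 = 0.2, 0.5, 0.2, 0.2

k = theta + delta + mu2                      # recurring factor
R0 = beta*L2/(mu2*k)
assert R0 > 1 and k*(mu1*beta + mu1*xi1 + mu2*lam) > beta*lam*L2

# coefficients of a I^2 + b I + c = 0 from (11), with b written via H2
a = lam*(beta + xi1)*(eta + mu1)*k
H2 = lam*L1*(alpha + xi2) + (eta + mu1)*(mu1*beta + mu1*xi1 + mu2*lam)
b = k*H2 - beta*lam*L2*(eta + mu1)
c = (eta + mu1)*(mu2*lam*k - mu1*beta*L2) - alpha*lam*L1*L2

I_star = (-b + np.sqrt(b*b - 4*a*c)) / (2*a)   # positive root, as in (13)
print("I* =", I_star)
```

The root is positive because $c < 0$ here, and substituting it back into (10) leaves a residual at machine precision.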
+
+Therefore ${E}^{ * } = \left( {{X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{I}^{ * },{D}^{ * },{R}^{ * }}\right)$ , where
+
+$$
+{X}^{ * } = \frac{{\Lambda }_{1}}{\lambda {I}^{ * } + {\mu }_{1}}, \tag{14}
+$$
+
+$$
+{Y}^{ * } = \frac{\lambda {\Lambda }_{1}{I}^{ * }}{\left( {\eta + {\mu }_{1}}\right) \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) }, \tag{15}
+$$
+
+$$
+{Z}^{ * } = \frac{{\lambda \eta }{\Lambda }_{1}{I}^{ * }}{{\mu }_{1}\left( {\eta + {\mu }_{1}}\right) \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) }, \tag{16}
+$$
+
+$$
+{S}^{ * } = \frac{{\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) }{T}, \tag{17}
+$$
+
+$$
+{D}^{ * } = \frac{\lambda {\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) {I}^{*2} + \left\lbrack {\lambda {\Lambda }_{1}{\Lambda }_{2} + {\mu }_{1}\left( {\eta + {\mu }_{1}}\right) }\right\rbrack {I}^{ * }}{\left( {\phi + {\mu }_{2}}\right) T}, \tag{18}
+$$
+
+$$
+{R}^{ * } = \frac{{\xi }_{2}{\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) {H}_{3} + \theta {H}_{4}}{\left( {{\mu }_{2} - \phi }\right) {H}_{4}} \tag{19}
+$$
+
+where ${H}_{3} = \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) \left\lbrack {\lambda {\Lambda }_{1} + \left( {\eta + {\mu }_{1}}\right) \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) }\right\rbrack$ and ${H}_{4} = \lambda \left( {\beta + {\xi }_{1}}\right) \left( {\eta + {\mu }_{1}}\right) {I}^{*2} + \left\lbrack {\lambda {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) + \left( {\eta + {\mu }_{1}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) }\right\rbrack {I}^{ * } + {\mu }_{2}\lambda \left( {\eta + {\mu }_{1}}\right)$ .
+
+§ C. STABILITY OF EQUILIBRIUM
+
+Theorem 2 The rumor-free equilibrium point ${E}_{0} = \left( {\frac{{\Lambda }_{1}}{{\mu }_{1}},0,0,\frac{{\Lambda }_{2}}{{\mu }_{2}},0,0,0}\right)$ is locally asymptotically stable if ${R}_{0} < 1$ and unstable if ${R}_{0} > 1$ .
+
+Proof The Jacobian matrix of system (1) at
+
+${E}_{0} = \left( {\frac{{\Lambda }_{1}}{{\mu }_{1}},0,0,\frac{{\Lambda }_{2}}{{\mu }_{2}},0,0,0}\right)$ is
+
+$J\left( {E}_{0}\right) =$
+
+$$
+\left( \begin{matrix} - {\mu }_{1} & 0 & 0 & 0 & - \lambda \frac{{\Lambda }_{1}}{{\mu }_{1}} & 0 & 0 \\ 0 & - \eta - {\mu }_{1} & 0 & 0 & \lambda \frac{{\Lambda }_{1}}{{\mu }_{1}} & 0 & 0 \\ 0 & \eta & - {\mu }_{1} & 0 & 0 & 0 & 0 \\ 0 & {H}_{5} & 0 & - {\mu }_{2} & {H}_{6} & 0 & 0 \\ 0 & \alpha \frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {H}_{7} & 0 & 0 \\ 0 & {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} + \theta & - {\mu }_{2} & \phi \\ 0 & {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} + \delta & 0 & {H}_{8} \end{matrix}\right)
+$$
+
+where ${H}_{5} = - \left( {\alpha + {\xi }_{1} + {\xi }_{2}}\right) \frac{{\Lambda }_{2}}{{\mu }_{2}},{H}_{6} = - \left( {\beta + {\xi }_{1} + {\xi }_{2}}\right) \frac{{\Lambda }_{2}}{{\mu }_{2}}$ ,
+
+${H}_{7} = \beta \frac{{\Lambda }_{2}}{{\mu }_{2}} - \left( {\theta + \delta + {\mu }_{2}}\right) ,{H}_{8} = - \left( {\phi + {\mu }_{2}}\right) .$
+
+The characteristic equation of matrix $J\left( {E}_{0}\right)$ is
+
+$\left| {J\left( {E}_{0}\right) - {hE}}\right| =$
+
+$$
+\left| \begin{matrix} {T}_{1} & 0 & 0 & 0 & - \lambda \frac{{\Lambda }_{1}}{{\mu }_{1}} & 0 & 0 \\ 0 & {T}_{2} & 0 & 0 & \lambda \frac{{\Lambda }_{1}}{{\mu }_{1}} & 0 & 0 \\ 0 & \eta & {T}_{4} & 0 & 0 & 0 & 0 \\ 0 & {T}_{3} & 0 & {T}_{5} & {T}_{6} & 0 & 0 \\ 0 & \alpha \frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {T}_{7} & 0 & 0 \\ 0 & {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} + \theta & {T}_{5} & \phi \\ 0 & {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {T}_{9} & 0 & {T}_{8} \end{matrix}\right|
+$$
+
+$$
+= {\left( {\mu }_{1} + h\right) }^{2}{\left( {\mu }_{2} + h\right) }^{2}\left( {\phi + {\mu }_{2} + h}\right) \left( {\eta + {\mu }_{1} + h}\right) \left\lbrack {\beta \frac{{\Lambda }_{2}}{{\mu }_{2}} - \left( {\theta + \delta + {\mu }_{2}}\right) - h}\right\rbrack = 0
+$$
+
+where ${T}_{1} = - {\mu }_{1} - h$ , ${T}_{2} = - \eta - {\mu }_{1} - h$ , ${T}_{3} = - \left( {\alpha + {\xi }_{1} + {\xi }_{2}}\right) \frac{{\Lambda }_{2}}{{\mu }_{2}}$ , ${T}_{4} = - {\mu }_{1} - h$ , ${T}_{5} = - {\mu }_{2} - h$ , ${T}_{6} = - \left( {\beta + {\xi }_{1} + {\xi }_{2}}\right) \frac{{\Lambda }_{2}}{{\mu }_{2}}$ , ${T}_{7} = \beta \frac{{\Lambda }_{2}}{{\mu }_{2}} - \left( {\theta + \delta + {\mu }_{2}}\right) - h$ , ${T}_{8} = - \left( {\phi + {\mu }_{2}}\right) - h$ , ${T}_{9} = {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} + \delta$ .
+
+Therefore, the characteristic roots of the characteristic equation of $J\left( {E}_{0}\right)$ are:
+
+$$
+{h}_{01} = - {\mu }_{1} < 0,\;{h}_{02} = - {\mu }_{2} < 0,\;{h}_{03} = - \left( {\phi + {\mu }_{2}}\right) < 0,
+$$
+
+$$
+{h}_{04} = - \left( {\eta + {\mu }_{1}}\right) < 0,\;{h}_{05} = \left( {\theta + \delta + {\mu }_{2}}\right) \left( {{R}_{0} - 1}\right) , \tag{20}
+$$
+
+where ${h}_{05} < 0$ if and only if ${R}_{0} < 1$ .
+
+According to the Routh-Hurwitz stability criterion, the equilibrium point ${E}_{0} = \left( {\frac{{\Lambda }_{1}}{{\mu }_{1}},0,0,\frac{{\Lambda }_{2}}{{\mu }_{2}},0,0,0}\right)$ is therefore locally asymptotically stable if ${R}_{0} < 1$ and unstable if ${R}_{0} > 1$ .
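Theorem 2 can also be probed numerically: for a parameter set with ${R}_{0} < 1$ and weak media coupling, a trajectory started away from ${E}_{0}$ sees both the affected media $Y\left( t\right)$ and the spreaders $I\left( t\right)$ die out. All parameter values and the initial state below are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions) chosen so that R0 < 1.
L1, L2, lam, alpha, beta = 1.0, 1.0, 0.05, 0.01, 0.05
eta, theta, delta, phi = 0.3, 0.4, 0.1, 0.15
xi1, xi2, mu1, mu2 = 0.2, 0.5, 0.2, 0.2

R0 = beta*L2/(mu2*(theta + delta + mu2))
print("R0 =", R0)          # below 1 for these values

def rhs(t, u):
    X, Y, Z, S, I, D, R = u
    return [
        L1 - lam*X*I - mu1*X,
        lam*X*I - (eta + mu1)*Y,
        eta*Y - mu1*Z,
        L2 - alpha*S*Y - beta*S*I - (xi1 + xi2)*(I + Y)*S - mu2*S,
        alpha*S*Y + beta*S*I - (theta + delta + mu2)*I,
        xi1*S*(I + Y) + delta*I - (phi + mu2)*D,
        xi2*S*(I + Y) + theta*I + phi*D - mu2*R,
    ]

sol = solve_ivp(rhs, (0.0, 300.0), [4.0, 0.3, 0.0, 3.0, 0.5, 0.0, 0.0],
                rtol=1e-9, atol=1e-12)
Y_end, I_end = sol.y[1, -1], sol.y[4, -1]
print("Y(300) =", Y_end, "I(300) =", I_end)
```

The rumor-free equilibrium then attracts the trajectory, consistent with the local stability result.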
+
+Theorem 3 The equilibrium point ${E}^{ * } = \left( {{X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{I}^{ * },{D}^{ * },{R}^{ * }}\right)$ is locally asymptotically stable if ${R}_{0} > 1$ and $\beta {\Lambda }_{2} < {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) \left( {\theta + \delta + {\mu }_{2}}\right)$ ; otherwise, the equilibrium point ${E}^{ * }$ is unstable.
+
+Proof The Jacobian matrix at ${E}^{ * } =$ $\left( {{X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{I}^{ * },{D}^{ * },{R}^{ * }}\right)$ is
+
+$J\left( {E}^{ * }\right) =$
+
+$$
+\left( \begin{matrix} {A}_{1} & 0 & 0 & 0 & - \lambda {X}^{ * } & 0 & 0 \\ \lambda {I}^{ * } & {A}_{2} & 0 & 0 & \lambda {X}^{ * } & 0 & 0 \\ 0 & \eta & - {\mu }_{1} & 0 & 0 & 0 & 0 \\ 0 & {A}_{3} & 0 & {A}_{4} & {A}_{8} & 0 & 0 \\ 0 & \alpha {S}^{ * } & 0 & {A}_{5} & {A}_{9} & 0 & 0 \\ 0 & {\xi }_{2}{S}^{ * } & 0 & {A}_{6} & {\xi }_{2}{S}^{ * } + \theta & - {\mu }_{2} & - {\mu }_{2} \\ 0 & {\xi }_{1}{S}^{ * } & 0 & {A}_{7} & {\xi }_{1}{S}^{ * } + \delta & 0 & {A}_{10} \end{matrix}\right)
+$$
+
Where ${A}_{1} = \lambda {I}^{ * } - {\mu }_{1},{A}_{2} = - \eta - {\mu }_{1},{A}_{3} = - \left( {\alpha + {\xi }_{1} + {\xi }_{2}}\right) {S}^{ * },{A}_{4} = - \alpha {Y}^{ * } - \beta {I}^{ * },{A}_{5} = \alpha {Y}^{ * } + \beta {I}^{ * },{A}_{6} = {\xi }_{2}\left( {{I}^{ * } + {Y}^{ * }}\right) ,{A}_{7} = {\xi }_{1}\left( {{I}^{ * } + {Y}^{ * }}\right) ,{A}_{8} = - \left( {\beta + {\xi }_{1} + {\xi }_{2}}\right) {S}^{ * },{A}_{9} = \beta {S}^{ * } - \left( {\theta + \delta + {\mu }_{2}}\right) ,{A}_{10} = - \left( {\phi + {\mu }_{2}}\right)$ .
+
+The characteristic equation of matrix $J\left( {E}^{ * }\right)$ is
+
+$\left| {J\left( {E}^{ * }\right) - {hE}}\right| =$
+
$$
\left| \begin{matrix} {B}_{1} & 0 & 0 & 0 & - \lambda {X}^{ * } & 0 & 0 \\ \lambda {I}^{ * } & {B}_{2} & 0 & 0 & \lambda {X}^{ * } & 0 & 0 \\ 0 & \eta & - {\mu }_{1} - h & 0 & 0 & 0 & 0 \\ 0 & - \left( {\alpha + {\xi }_{1} + {\xi }_{2}}\right) {S}^{ * } & 0 & {B}_{3} & {B}_{7} & 0 & 0 \\ 0 & \alpha {S}^{ * } & 0 & {B}_{4} & {B}_{8} & 0 & 0 \\ 0 & {\xi }_{2}{S}^{ * } & 0 & {B}_{5} & {\xi }_{2}{S}^{ * } + \theta & {B}_{9} & - {\mu }_{2} \\ 0 & {\xi }_{1}{S}^{ * } & 0 & {B}_{6} & {\xi }_{1}{S}^{ * } + \delta & 0 & {B}_{10} \end{matrix}\right|
$$
+
Where ${B}_{1} = \lambda {I}^{ * } - {\mu }_{1} - h$ , ${B}_{2} = - \eta - {\mu }_{1} - h$ , ${B}_{3} = - \alpha {Y}^{ * } - \beta {I}^{ * } - h$ , ${B}_{4} = \alpha {Y}^{ * } + \beta {I}^{ * }$ , ${B}_{5} = {\xi }_{2}\left( {{I}^{ * } + {Y}^{ * }}\right)$ , ${B}_{6} = {\xi }_{1}\left( {{I}^{ * } + {Y}^{ * }}\right)$ , ${B}_{7} = - \left( {\beta + {\xi }_{1} + {\xi }_{2}}\right) {S}^{ * }$ , ${B}_{8} = \beta {S}^{ * } - \left( {\theta + \delta + {\mu }_{2}}\right) - h$ , ${B}_{9} = - {\mu }_{2} - h$ , ${B}_{10} = - \left( {\phi + {\mu }_{2}}\right) - h$ .
+
Thus, we can obtain

$$
\left| {J\left( {E}^{ * }\right) - {hE}}\right| = \left( {{\mu }_{1} + h}\right) \left( {{\mu }_{2} + h}\right) \left( {\phi + {\mu }_{2} + h}\right) \left( {\eta + {\mu }_{1} + h}\right) \left( {\lambda {I}^{ * } + {\mu }_{1} + h}\right) G,
$$

where $G = - \left\lbrack {\alpha {Y}^{ * } + \beta {I}^{ * } + \left( {{\xi }_{1} + {\xi }_{2}}\right) \left( {{I}^{ * } + {Y}^{ * }}\right) + {\mu }_{2}}\right\rbrack - h$ .
+
Therefore, the characteristic roots of the characteristic equation of $J\left( {E}^{ * }\right)$ are:
+
+$$
+{h}_{01} = - {\mu }_{1} < 0,{h}_{02} = - {\mu }_{2} < 0, \tag{21}
+$$
+
+$$
+{h}_{03} = - \left( {\phi + {\mu }_{2}}\right) < 0,{h}_{04} = - \left( {\eta + {\mu }_{1}}\right) < 0, \tag{22}
+$$
+
+$$
+{h}_{05} = - \left\lbrack {\alpha {Y}^{ * } + \beta {I}^{ * } + \left( {{\xi }_{1} + {\xi }_{2}}\right) \left( {{I}^{ * } + {Y}^{ * }}\right) + {\mu }_{2}}\right\rbrack < 0, \tag{23}
+$$
+
+$$
+{h}_{06} = \beta {S}^{ * } - \left( {\theta + \delta + {\mu }_{2}}\right) . \tag{24}
+$$
+
Substituting ${S}^{ * }$ into ${h}_{06}$ gives

$$
{h}_{06} = \frac{\beta {\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) }{\lambda \left( {\beta + {\xi }_{1}}\right) \left( {\eta + {\mu }_{1}}\right) {I}^{*2} + {C}_{1} + {\mu }_{2}\lambda \left( {\eta + {\mu }_{1}}\right) } - \left( {\theta + \delta + {\mu }_{2}}\right) ,
$$

where ${C}_{1} = \left\lbrack {\lambda {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) + \left( {\eta + {\mu }_{1}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) }\right\rbrack {I}^{ * }$ .
+
+§ IV. NUMERICAL SIMULATION
+
In this section, we assign reasonable values to the parameters of system (1) and verify the results of our theoretical analysis through numerical simulation. The parameter values are chosen partly by analogy with real-world examples and partly from the related literature.
+
Let ${\Lambda }_{1} = 1,{\Lambda }_{2} = 1,\lambda = {0.01},\eta = {0.3},\alpha = {0.01},\beta = {0.01},\theta = {0.2},\delta = {0.2},\phi = {0.15},{\xi }_{1} = {0.1},{\xi }_{2} = {0.1},{\mu }_{1} = {0.2},{\mu }_{2} = {0.2}$ . This gives ${R}_{0} = {0.0833} < 1$ , so the rumor-free equilibrium point ${E}_{0}$ is stable.
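Setting the last factor of the characteristic equation at ${E}_{0}$ , $\beta \frac{{\Lambda }_{2}}{{\mu }_{2}} - \left( {\theta + \delta + {\mu }_{2}}\right)$ , to zero suggests the reading ${R}_{0} = \beta {\Lambda }_{2}/\left\lbrack {{\mu }_{2}\left( {\theta + \delta + {\mu }_{2}}\right) }\right\rbrack$ ; this is our inference only, since the paper defines ${R}_{0}$ earlier and it may include further terms. Under this hypothetical reading, the quoted value can be reproduced as follows:

```python
# Hypothetical reading of the basic reproduction number, inferred from the
# factor beta*Lambda2/mu2 - (theta + delta + mu2) in the characteristic
# equation at E0 (not necessarily the paper's full definition of R0).
def reproduction_number(beta, Lambda2, mu2, theta, delta):
    return beta * Lambda2 / (mu2 * (theta + delta + mu2))

# Parameter set used above for the rumor-free equilibrium E0
r0_free = reproduction_number(beta=0.01, Lambda2=1.0, mu2=0.2, theta=0.2, delta=0.2)
print(round(r0_free, 4))  # 0.0833 < 1, so E0 is stable
```

With this reading the two simulations are consistent: the first parameter set gives ${R}_{0} \approx {0.0833} < 1$ and the second gives ${R}_{0} = 3 > 1$ .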
+
+
+Fig 2. Stability of equilibrium point ${E}_{0}$ .
+
Fig. 2 shows how, when ${R}_{0} = {0.0833} < 1$ , the density of each subclass in the model changes with time. At first, the numbers of unaffected media and unknowns gradually decrease at similar rates and finally stabilize. Because the inflow is limited while the outflow is large, the numbers of affected media and communicators gradually decrease at similar rates and finally reach 0. The numbers of rumor-refuting media and rumor refuters first increase as the numbers of affected media and disseminators grow, then gradually decrease over time and finally reach 0. The number of immunized persons increases with the numbers of communicators and rumor refuters; its growth rate gradually slows and finally stabilizes. That is, the rumor disappears and the system reaches the stable rumor-free equilibrium point.
+
Let ${\Lambda }_{1} = 1,{\Lambda }_{2} = 1,\lambda = {0.2},\eta = {0.3},\alpha = {0.5},\beta = {0.6},\theta = {0.4},\delta = {0.4},\phi = {0.15},{\xi }_{1} = {0.2},{\xi }_{2} = {0.2},{\mu }_{1} = {0.2},{\mu }_{2} = {0.2}$ , which gives ${R}_{0} = 3 > 1$ ; the equilibrium point ${E}^{ * }$ is then stable, as shown in Fig. 3.
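Both stability conditions of Theorem 3 can be checked numerically for this parameter set; the ${R}_{0}$ formula below is our inference from the characteristic equation at ${E}_{0}$ , not necessarily the paper's full definition:

```python
# Check Theorem 3 for this parameter set: R0 > 1 and
# beta*Lambda2 < Lambda1*(alpha + xi2)*(theta + delta + mu2).
Lambda1, Lambda2 = 1.0, 1.0
alpha, beta = 0.5, 0.6
theta, delta = 0.4, 0.4
xi2, mu2 = 0.2, 0.2

R0 = beta * Lambda2 / (mu2 * (theta + delta + mu2))  # inferred reading, approx. 3
condition = beta * Lambda2 < Lambda1 * (alpha + xi2) * (theta + delta + mu2)
print(R0 > 1, condition)  # True True: E* is locally asymptotically stable
```

The second condition holds with room to spare here ( $0.6 < 0.7$ ), consistent with the stable behavior seen in Fig. 3.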
+
+
+Fig 3. Stability of equilibrium point ${E}^{ * }$ .
+
In Fig. 3, consider first the media network layer. Because few new media enter the communication system and some unaffected media are transformed into affected media, the number of unaffected media gradually decreases and stabilizes after a period of time. The number of affected media initially increases as unaffected media are transformed; over time, most affected media turn into rumor-refuting media, so the number of affected media decreases and stabilizes. As affected media change into rumor-refuting media, the number of rumor-refuting media increases and gradually stabilizes.
+
Fig. 3 also illustrates that, within the individual interpersonal network layer, the number of ignorant individuals begins to decline. Initially, the low influx of new individuals and a fixed rate of departures contribute to this decrease. Additionally, some ignorant individuals transition to become communicators, while others become immune or rumor refuters. Consequently, the number of communicators increases as ignorant individuals transform into communicators. Over time, as communicators transition to immune individuals or rumor refuters, the number of communicators gradually decreases and eventually stabilizes. As more communicators and ignorant individuals become rumor refuters, the number of rumor refuters rises and stabilizes. Simultaneously, with some ignorant individuals, communicators, and rumor refuters becoming immune, the number of immune individuals significantly increases and gradually stabilizes. Ultimately, the model reaches a steady state, with each group's number stabilizing over time.
+
Figs. 4 to 7 depict, for ${\Lambda }_{1} = 1,{\Lambda }_{2} = 1,\eta = {0.3},\theta = {0.4},\phi = {0.15},{\xi }_{1} = {0.2},{\xi }_{2} = {0.5},{\mu }_{1} = {0.2},{\mu }_{2} = {0.2}$ , the evolution of the densities of $X\left( t\right) ,Y\left( t\right)$ and $S\left( t\right)$ under different parameters.
+
Fig. 4 and Fig. 5 describe the effect of the parameter $\lambda$ on the density of $X\left( t\right)$ and $Y\left( t\right)$ , respectively. The parameter $\lambda$ represents the probability that unaffected media are transformed into affected media. The figures show that $\lambda$ is negatively correlated with the density of $X\left( t\right)$ and positively correlated with the density of $Y\left( t\right)$ . That is, as $\lambda$ increases, the rate of transformation from unaffected to affected media increases: the number of unaffected media gradually decreases, the number of affected media gradually increases, and the spread of rumors in the media network layer accelerates.
+
+
+Fig 4. Density of $X\left( t\right)$ under the parameter $\lambda$ .
+
+
+Fig 5. Density of $Y\left( t\right)$ under the parameter $\lambda$ .
+
Fig. 6 and Fig. 7 describe the influence of the parameters $\alpha$ and $\beta$ on the density of $S\left( t\right)$ , respectively. The parameter $\alpha$ represents the probability that an unknown becomes a spreader by accessing the affected media, and $\beta$ the probability that an unknown becomes a spreader by contacting spreaders. The figures show that the density of $S\left( t\right)$ decreases as $\alpha$ and $\beta$ increase. That is, as the propagation rates of the individual network layer and of the double-layer interaction increase, the number of unknowns gradually decreases, which accelerates the spread of rumors in the double-layer network.
+
+
+Fig 6. Density of $S\left( t\right)$ under the parameter $\alpha$ .
+
+
+Fig 7. Density of $S\left( t\right)$ under the parameter $\beta$ .
+
Fig. 8 and Fig. 9 show, for ${\Lambda }_{1} = 1,{\Lambda }_{2} = 1,\eta = {0.3},\theta = {0.4},\phi = {0.15},{\xi }_{1} = {0.2},{\xi }_{2} = {0.5},{\mu }_{1} = {0.2},{\mu }_{2} = {0.2}$ , the evolution of the density of $I\left( t\right)$ under different parameters.
+
Fig. 8 and Fig. 9 describe the influence of the parameters $\alpha$ and $\beta$ on the density of $I\left( t\right)$ , respectively. Given the meaning of $\alpha$ and $\beta$ , their values are positively correlated with the density of $I\left( t\right)$ . As the figures show, the density of $I\left( t\right)$ increases as $\alpha$ and $\beta$ increase. That is, the growing number of communicators expands the scale of communication, which is not conducive to the control of rumors.
+
+§ V. CONCLUSION
+
At present, many scholars have separately studied the influence of media refutation or individual refutation on the spread of rumors. We believe that considering these two effects together is better than considering either alone. This paper integrates both media refutation and individual refutation into the analysis, introduces a novel ${XYZ} - {SIDR}$ two-layer rumor propagation model, and demonstrates the existence and stability of its equilibrium points. The results show that this two-layer network model is more effective in controlling the spread of rumors.
+
+
+Fig 8. Density of $I\left( t\right)$ under the parameter $\alpha$ .
+
+
+Fig 9. Density of $I\left( t\right)$ under the parameter $\beta$ .
+
Theoretical analysis indicates that integrating both media and individual rumor refutation exerts a more significant and broader impact on rumor propagation. We suggest strengthening the dissemination of rumor-refuting information through official media rather than relying solely on individuals to control the spread of rumors. These conclusions can help relevant departments formulate effective measures to control rumor propagation. Moreover, the model established in this paper can also be applied, by analogy, to the study of infectious disease models.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/CxWEOEhqo6/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/CxWEOEhqo6/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..466ba33b71181f695005b75923e605bfe983d813
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/CxWEOEhqo6/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,427 @@
+# Asynchronous Thruster Fault Detection for Unmanned Marine Vehicles under DoS Attacks
+
+Fuxing Wang
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu 611731, China
+
+wfx614328@163.com
+
+Yue Long
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu 611731, China
+
+longyue@uestc.edu.cn
+
+Tieshan Li
+
+School of Automation Engineering University of Electronic Science and Technology of China Chengdu 611731, China
+
+tieshanli@126.com
+
+Abstract-This paper investigates a thruster fault detection strategy for unmanned marine vehicles (UMVs) subjected to external disturbances and aperiodic Denial of Service (DoS) attacks. To address the challenge of timely detection of DoS attacks, the UMV and the corresponding filters are modeled within the framework of an asynchronous switched system. Sufficient conditions ensuring the system's exponential stability and prescribed performance are derived using model-dependent average dwell time and piecewise Lyapunov functions. Additionally, the tolerable lower bound of the sleep interval and the upper bound of the attack interval for DoS attacks are established. Solvable conditions for the designed fault detection filters are obtained by leveraging decoupling techniques. Finally, simulations conducted on a UMV validate the effectiveness of the proposed methods.
+
+Index Terms-Unmanned marine vehicles, asynchronous switched system, DoS attacks, fault detection.
+
+## I. INTRODUCTION
+
+In recent years, unmanned marine vehicles (UMVs) have attracted significant attention in marine science and technology due to their wide-ranging applications in marine exploration, environmental monitoring, and resource development [1]. Nevertheless, the operational environment for UMVs is inherently complex, and their reliance on wireless communication networks for communication with shore-based centers makes them vulnerable to external disturbances, equipment malfunctions, cyber-attacks, and other disruptions [2]. The unpredictable nature of potential harm caused by these disturbances or faults, combined with the inherent vulnerabilities of cyberspace, renders UMV systems particularly susceptible to cyber-attacks. These risks can result in system failures and potentially catastrophic accidents [3]. As a result, improving the reliability and security of UMVs has emerged as a crucial area of research and development.
+
+The unpredictable nature of potential harm caused by disturbances or faults to unmanned marine vehicles (UMVs) underscores the critical need for a real-time fault detection (FD) warning mechanism. The core of fault detection methodology involves comparing system performances to identify fault signals. Current research predominantly focuses on model-based fault detection, which has shown significant success in various systems, including continuous-discrete systems [4], T-S fuzzy systems [5], and Markovian jump systems [6]. The primary approach involves generating residual signals through filters or observers and subsequently establishing a fault warning mechanism. For UMVs, several studies have made noteworthy contributions. [7] has explored the design of controllers and FD filters based on observers for networked UMVs, [8] proposed event-triggered fault detection mechanisms for UMVs in networked environments, and [2] utilized T-S fuzzy systems to model UMV systems, particularly addressing fault detection under replay attacks. Despite these advancements, the scope of fault detection research for UMVs remains relatively narrow and lacks comprehensive coverage [9]. Consequently, further investigation into robust and holistic fault detection strategies for UMVs is imperative to enhance their reliability and operational safety [10].
+
+On the other hand, due to the openness of cyberspace, UMV systems are particularly vulnerable to cyber-attacks. Deception attacks and Denial of Service (DoS) attacks are currently common types of attacks [11]. Deception attacks involve sending incorrect or tampered data to the system [12], including replay attacks [13] and false data injection attacks [14]. Compared to deception attacks, DoS attacks cause signal transmission to be unavailable for a period, leaving the system in an open-loop state, which makes it easier to cause severe disruption in system operations. Consequently, numerous studies on DoS attacks have emerged [15], [16].
+
+However, most existing research assumes that Denial of Service (DoS) attacks can be detected promptly, suggesting that the switching of filters corresponding to each subsystem happens simultaneously with the subsystem switching [10], [17]. However, in practical applications, detecting DoS attacks in a timely manner proves challenging, leading to delays. This delay implies that the filter often takes additional time to adjust to the appropriate control mode based on the subsystem mode, resulting in asynchronous filter/subsystem switching [18]. As a result, filters designed for synchronous switching may not provide optimal detection performance in real-world scenarios [19]. Thus, incorporating asynchronous switching into thruster fault detection for unmanned marine vehicles (UMVs) under DoS attacks is of substantial practical significance.
+
+---
+
+This work is supported in part by the National Natural Science Foundation of China under Grants 62273072, 51939001. (Corresponding author: Yue Long)
+
+---
+
+Inspired by the previous discussion, this paper investigates thruster fault detection (FD) for unmanned marine vehicles (UMVs) under Denial of Service (DoS) attacks using an asynchronous switched method to enhance reliability and security. Addressing the challenge of timely DoS attack detection, the paper proposes an asynchronous switched filter specifically designed for thruster fault detection. Furthermore, leveraging model-dependent average dwell time (MDADT) and piecewise Lyapunov functions (PLF), the paper establishes the tolerable lower bound of the sleep interval and the upper bound of the attack interval for DoS attacks. The filter parameters are determined based on linear solvability conditions. The effectiveness of the proposed method is ultimately validated through simulation.
+
+## II. Problem formulation and modeling
+
### A. UMV Model
+
+Consider the UMV and the following body-fixed equations of motion
+
+$$
+M\dot{\delta }\left( t\right) + {N\delta }\left( t\right) + {R\psi }\left( t\right) = {E\varphi }\left( t\right) , \tag{1}
+$$
+
+$$
+\dot{\psi }\left( t\right) = J\left( {\eta \left( t\right) }\right) \delta \left( t\right) ,
+$$
+
where $\delta \left( t\right) = {\left\lbrack {\delta }_{u}\left( t\right) ,{\delta }_{v}\left( t\right) ,{\delta }_{r}\left( t\right) \right\rbrack }^{T}$ with ${\delta }_{u}\left( t\right) ,{\delta }_{v}\left( t\right) ,{\delta }_{r}\left( t\right)$ representing the surge, sway and yaw velocities, respectively; $\psi \left( t\right) = {\left\lbrack {x}_{p}\left( t\right) ,{y}_{p}\left( t\right) ,\eta \left( t\right) \right\rbrack }^{T}$ , where ${x}_{p}\left( t\right)$ and ${y}_{p}\left( t\right)$ are the positions and $\eta \left( t\right)$ is the yaw angle; $\varphi \left( t\right)$ is the control input; $M, N, R$ and $E$ denote the inertia, damping, mooring-force and configuration matrices, where $M$ is symmetric positive definite and invertible, i.e., $M = {M}^{T} > 0$ , and
+
+$J\left( {\eta \left( t\right) }\right) = \left\lbrack \begin{matrix} \cos \left( {\eta \left( t\right) }\right) & - \sin \left( {\eta \left( t\right) }\right) & 0 \\ \sin \left( {\eta \left( t\right) }\right) & \cos \left( {\eta \left( t\right) }\right) & 0 \\ 0 & 0 & 1 \end{matrix}\right\rbrack .$
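Since $J\left( {\eta \left( t\right) }\right)$ is a rotation about the yaw axis, it is orthogonal, so ${J}^{T}J = I$ and its inverse is simply its transpose. A minimal plain-Python check of this property (our illustration, not part of the paper):

```python
import math

def J(eta):
    """Rotation matrix J(eta) from the kinematic equation (1)."""
    c, s = math.cos(eta), math.sin(eta)
    return [[c, -s, 0.0],
            [s, c, 0.0],
            [0.0, 0.0, 1.0]]

def mat_mul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(a):
    return [list(row) for row in zip(*a)]

# Orthogonality check: J(eta)^T J(eta) should equal the identity matrix.
eta = 0.7  # arbitrary yaw angle
prod = mat_mul(transpose(J(eta)), J(eta))
```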
+
+Then, by defining $x\left( t\right) = \delta \left( t\right) - {\delta }_{\text{ref }}, A\left( t\right) =$ $- M{\left( t\right) }^{-1}N\left( t\right) ,{B}_{1}\left( t\right) = M{\left( t\right) }^{-1}R$ and ${B}_{2}\left( t\right) = M{\left( t\right) }^{-1}E$ , and taking into account the unavoidable disturbance $\widetilde{d}\left( t\right)$ caused by wind, wave and current, the system (1) can be expressed as
+
+$$
+\left\{ \begin{array}{l} \dot{x}\left( t\right) = {Ax}\left( t\right) + {B}_{1}d\left( t\right) + {B}_{2}\varphi \left( t\right) , \\ y\left( t\right) = {Cx}\left( t\right) , \end{array}\right. \tag{2}
+$$
+
+where $d\left( t\right) = {B}_{1}{\left( t\right) }^{-1}{d}^{ * }\left( t\right) - \psi \left( t\right) + {B}_{1}{\left( t\right) }^{-1}A{\delta }_{\text{ref }}$ and $C =$ $\left\lbrack \begin{array}{lll} 0 & 0 & 1 \end{array}\right\rbrack$ denotes the output matrix.
+
Considering the thruster fault ${\varphi }^{F}\left( t\right) = {\rho \varphi }\left( t\right) + {\sigma f}\left( t\right)$ and assuming the control input $\varphi \left( t\right) = {Kx}\left( t\right)$ has been designed, (2) can be rewritten as
+
+$$
+\left\{ \begin{array}{l} \dot{x}\left( t\right) = \widehat{A}x\left( t\right) + {B}_{1}d\left( t\right) + {B}_{2}\widehat{f}\left( t\right) , \\ y\left( t\right) = {Cx}\left( t\right) , \end{array}\right. \tag{3}
+$$
+
+where $\widehat{A} = A + {B}_{2}K$ and $\widehat{f}\left( t\right) = - \bar{\rho }\varphi \left( t\right) + {\sigma f}\left( t\right)$ .
+
### B. DoS Attacks Model
+
Consider the aperiodic DoS attacks as follows:
+
+$$
+{A}_{\text{Dos }} = \left\{ \begin{matrix} 0, & t \in \left\lbrack {{t}_{2l},{t}_{{2l} + 1}}\right) \triangleq {\kappa }_{0,{2l}} \\ 1, & t \in \left\lbrack {{t}_{{2l} + 1},{t}_{2\left( {l + 1}\right) }}\right) \triangleq {\kappa }_{1,{2l}} \end{matrix}\right. \tag{4}
+$$
+
+where $t \in \left\lbrack {{t}_{2l},{t}_{{2l} + 1}}\right) \triangleq {\kappa }_{0,{2l}}\;\left( {l \in \mathrm{N},{t}_{2l} \geq 0}\right)$ indicates the ${l}^{th}$ sleep interval with the length ${s}_{l} = {t}_{{2l} + 1} - {t}_{2l}$ , and $t \in \left\lbrack {{t}_{{2l} + 1},{t}_{2\left( {l + 1}\right) }}\right) \triangleq {\kappa }_{1,{2l}}$ indicates the ${l}^{th}$ DoS attacks interval with the length ${d}_{l} = {t}_{2\left( {l + 1}\right) } - {t}_{{2l} + 1}$ .
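A minimal sketch (ours, not part of the paper) of generating the aperiodic indicator ${A}_{\text{DoS}}$ in (4) from given sleep lengths ${s}_{l}$ and attack lengths ${d}_{l}$ :

```python
# Build the piecewise-constant DoS indicator A_DoS(t) of (4) from interval
# lengths: sleep_lengths are s_l, attack_lengths are d_l (assumed given).
def dos_indicator(sleep_lengths, attack_lengths):
    """Return switching instants t_0 < t_1 < ... and values (0 = sleep, 1 = attack)."""
    times, values, t = [0.0], [], 0.0
    for s, d in zip(sleep_lengths, attack_lengths):
        values.append(0); t += s; times.append(t)   # sleep interval [t_{2l}, t_{2l+1})
        values.append(1); t += d; times.append(t)   # attack interval [t_{2l+1}, t_{2(l+1)})
    return times, values

times, values = dos_indicator([2.0, 1.5], [0.5, 1.0])
print(times)   # [0.0, 2.0, 2.5, 4.0, 5.0]
print(values)  # [0, 1, 0, 1]
```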
+
Due to the communication disruption caused by DoS attacks, the UMV system (3) can be augmented into the following switched system after discretization, where the sleep interval is expressed as $k \in \left\lbrack {{k}_{2l},{k}_{{2l} + 1}}\right)$ and the DoS attack interval as $k \in \left\lbrack {{k}_{{2l} + 1},{k}_{2\left( {l + 1}\right) }}\right)$ :
+
+$$
+\left\{ \begin{array}{l} x\left( {k + 1}\right) = {A}_{id}x\left( k\right) + {B}_{1id}d\left( k\right) + {B}_{2id}\widehat{f}\left( k\right) \\ y\left( k\right) = {C}_{d}x\left( k\right) \end{array}\right. \tag{5}
+$$
+
+## C. Asynchronous Switching Filter
+
+In the case of the DoS attacks and thruster faults, the residual signal produced by the switched filter is as follows:
+
+$$
+\left\{ {\begin{array}{l} {x}_{f}\left( {k + 1}\right) = {A}_{fi}{x}_{f}\left( k\right) + {B}_{fi}y\left( k\right) \\ r\left( k\right) = {C}_{fi}{x}_{f}\left( k\right) + {D}_{fi}y\left( k\right) \end{array}\left( {i = 0,1}\right) }\right. \tag{6}
+$$
+
where ${x}_{f}\left( k\right)$ is the state of the filters and $r\left( k\right)$ is the residual signal of the switched system (5). Defining $\widetilde{x}\left( k\right) = {\left\lbrack \begin{array}{ll} {x}^{T}\left( k\right) & {x}_{f}^{T}\left( k\right) \end{array}\right\rbrack }^{T},\varpi \left( k\right) = {\left\lbrack \begin{array}{ll} {d}^{T}\left( k\right) & {f}^{T}\left( k\right) \end{array}\right\rbrack }^{T}$ and the residual evaluation signal $e\left( k\right) = r\left( k\right) - \widehat{f}\left( k\right)$ , (6) is rewritten as (7)
+
+$$
+{\Phi }_{0} : \left\{ {\begin{array}{l} \widetilde{x}\left( {k + 1}\right) = {\widetilde{A}}_{i}\widetilde{x}\left( k\right) + {\widetilde{B}}_{i}\varpi \left( k\right) \\ e\left( k\right) = {\widetilde{C}}_{i}\widetilde{x}\left( k\right) + {\widetilde{D}}_{i}\varpi \left( k\right) \end{array}, k \in \left\lbrack {{k}_{l} + {\varepsilon }_{l},{k}_{l + 1}}\right) }\right.
+$$
+
+$$
+{\Phi }_{1} : \left\{ {\begin{array}{l} \widetilde{x}\left( {k + 1}\right) = {\widetilde{A}}_{ij}\widetilde{x}\left( k\right) + {\widetilde{B}}_{ij}\varpi \left( k\right) \\ e\left( k\right) = {\widetilde{C}}_{ij}\widetilde{x}\left( k\right) + {\widetilde{D}}_{ij}\varpi \left( k\right) \end{array}, k \in \left\lbrack {{k}_{l},{k}_{l} + {\varepsilon }_{l}}\right) }\right.
+$$
+
+where $i \neq j, i \in \{ 0,1\} , j \in \{ 0,1\} ,{\widetilde{A}}_{ij} = \left\lbrack \begin{matrix} {A}_{id} & 0 \\ {B}_{fj}{C}_{d} & {A}_{fj} \end{matrix}\right\rbrack$ , ${\widetilde{B}}_{ij} = \left\lbrack \begin{matrix} {B}_{1i} & {B}_{2i} \\ 0 & 0 \end{matrix}\right\rbrack ,{\widetilde{C}}_{ij} = \left\lbrack \begin{array}{ll} {D}_{fj}{C}_{d} & {C}_{fj} \end{array}\right\rbrack$ and ${\widetilde{D}}_{ij} =$ $\left\lbrack \begin{array}{ll} 0 & - \bar{I} \end{array}\right\rbrack$ .
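For intuition, one step of the filter recursion in (6) can be sketched in scalar form; the gains below are illustrative placeholders rather than the designed parameters ${A}_{fi},{B}_{fi},{C}_{fi},{D}_{fi}$ :

```python
# One step of the mode-i fault detection filter (6):
#   x_f(k+1) = A_f x_f(k) + B_f y(k),   r(k) = C_f x_f(k) + D_f y(k).
# Scalar gains here are made up for illustration only.
def filter_step(xf, y, Af, Bf, Cf, Df):
    r = Cf * xf + Df * y         # residual used for fault evaluation
    xf_next = Af * xf + Bf * y   # filter state update
    return xf_next, r

xf, residuals = 0.0, []
for y in [0.0, 0.1, 0.4, 0.2]:   # made-up measured output sequence
    xf, r = filter_step(xf, y, Af=0.5, Bf=1.0, Cf=1.0, Df=-1.0)
    residuals.append(r)
```

A fault alarm is then typically raised when an evaluation function of the residual exceeds a prescribed threshold.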
+
+To better set the stage for the next section, the following definitions are presented.
+
Definition 1: For any switching signal $\tau \left( k\right)$ and $0 < {k}_{0} \leq k$ , let ${\mathcal{M}}_{\tau , l}\left( {{k}_{0}, k}\right)$ denote the number of times the $l$ th subsystem is activated over $\left\lbrack {{k}_{0}, k}\right)$ . If

$$
{\mathcal{M}}_{\tau , l}\left( {{k}_{0}, k}\right) \leq {N}_{0, l} + \frac{{N}_{l}\left( {{k}_{0}, k}\right) }{{\lambda }_{l}}
$$

holds for a scalar ${\lambda }_{l} > 0$ and an integer ${N}_{0, l} \geq 0$ , then ${\lambda }_{l}$ is called the model-dependent average dwell time, where ${N}_{l}\left( {{k}_{0}, k}\right)$ is the total running time of the $l$ th subsystem over $\left\lbrack {{k}_{0}, k}\right)$ .
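The MDADT bound in Definition 1 can be checked directly on a recorded mode sequence; the sequence and constants below are illustrative only:

```python
# Check M_{tau,l}(k0,k) <= N_{0,l} + N_l(k0,k)/lambda_l for a mode sequence
# sampled once per step, where the count M is the number of activations of
# mode l and N_l is its total running time (in steps).
def mdadt_holds(modes, l, N0_l, lambda_l):
    activations = sum(1 for prev, cur in zip([None] + modes[:-1], modes)
                      if cur == l and prev != l)
    running_time = modes.count(l)
    return activations <= N0_l + running_time / lambda_l

modes = [0, 0, 1, 1, 0, 1, 0, 0]  # made-up switching record
print(mdadt_holds(modes, l=0, N0_l=1, lambda_l=2.0))  # True: 3 <= 1 + 5/2
```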
+
Definition 2: Consider the asynchronous switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ , and let the scalars $\alpha ,\beta$ and $\gamma$ satisfy $0 < \alpha < 1$ , $\beta > 0$ and $\gamma > 0$ . Under the zero initial condition, if the asynchronous switched system is exponentially stable and satisfies $\mathop{\sum }\limits_{{s = {k}_{0}}}^{\infty }{\left( 1 - \alpha \right) }^{s}{e}^{\mathrm{T}}\left( s\right) e\left( s\right) \leq {\gamma }^{2}\mathop{\sum }\limits_{{s = {k}_{0}}}^{\infty }{\varpi }^{\mathrm{T}}\left( s\right) \varpi \left( s\right)$ , the system is said to be exponentially stable with exponential ${H}_{\infty }$ index $\gamma$ .
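For finite sequences, the weighted inequality in Definition 2 can be checked numerically; the residual-error and disturbance sequences below are made up for illustration:

```python
# Check sum_s (1-alpha)^s e(s)^2 <= gamma^2 * sum_s w(s)^2 for scalar signals.
def h_inf_holds(e, w, alpha, gamma):
    lhs = sum((1 - alpha) ** s * es ** 2 for s, es in enumerate(e))
    rhs = gamma ** 2 * sum(ws ** 2 for ws in w)
    return lhs <= rhs

e = [0.3, 0.2, 0.1, 0.05]   # residual evaluation signal e(k), made up
w = [1.0, 0.5, 0.25, 0.1]   # disturbance/fault input varpi(k), made up
print(h_inf_holds(e, w, alpha=0.1, gamma=0.8))  # True
```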
+
+## III. Main Results
+
+In this section, the stability and ${H}_{\infty }$ performance of asynchronous switched systems (7) will be analyzed, and the sufficient and linearly solvable conditions for the designed switched FD filters are given.
+
Theorem 1: Consider the switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ under DoS attacks, and let the scalars ${\alpha }_{i},{\beta }_{i},\gamma ,{\mu }_{0}$ and ${\mu }_{1}$ satisfy $0 < {\alpha }_{i} < 1,{\beta }_{i} > 0,\gamma > 0,{\mu }_{0} > 1$ and $0 < {\mu }_{1} < 1$ . If there exist symmetric positive-definite matrices ${\mathcal{P}}_{i}$ satisfying the following conditions
+
+$$
+{\widetilde{A}}_{i}^{T}{\mathcal{P}}_{i}{\widetilde{A}}_{i} - {\mathcal{P}}_{i} + {\alpha }_{i}{\mathcal{P}}_{i} < 0, \tag{8}
+$$
+
+$$
+{\widetilde{A}}_{ij}^{T}{\mathcal{P}}_{i}{\widetilde{A}}_{ij} - {\mathcal{P}}_{i} - {\beta }_{i}{\mathcal{P}}_{i} < 0, \tag{9}
+$$
+
+$$
+{\mathcal{P}}_{i} \leq {\mu }_{i}{\mathcal{P}}_{j} \tag{10}
+$$
+
+$$
+{\tau }_{D} < \frac{{\varepsilon }_{M}\ln {\phi }_{1} + \ln {\mu }_{1}}{\ln {\widetilde{\alpha }}_{1}},{\tau }_{F} > - \frac{{\varepsilon }_{M}\ln {\phi }_{0} + \ln {\mu }_{0}}{\ln {\widetilde{\alpha }}_{0}}, \tag{11}
+$$
+
+the switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ are exponentially asymptotically stable with the exponential ${H}_{\infty }$ performance, where $i \neq j,{\widetilde{\alpha }}_{i} = 1 - {\alpha }_{i},{\widetilde{\beta }}_{i} = 1 + {\beta }_{i},{\phi }_{i} = \frac{{\widetilde{\beta }}_{i}}{{\widetilde{\alpha }}_{i}}$ and ${\varepsilon }_{M}$ denotes the maximum time that the filter lags the subsystem.
+
+Proof: The piecewise Lyapunov function for the closed-loop switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ are given as follows
+
+$$
+{\mathcal{V}}_{i}\left( {\widetilde{x}\left( k\right) }\right) = {\widetilde{x}}^{T}\left( k\right) {\mathcal{P}}_{i}\widetilde{x}\left( k\right) . \tag{12}
+$$
+
When $\varpi \left( k\right) = 0$ and $k \in \left\lbrack {{k}_{2l},{k}_{{2l} + 1}}\right)$ , we can obtain
+
+$$
+\mathcal{V}\left( {\widetilde{x}\left( k\right) }\right) \leq \left\{ \begin{array}{l} {\widetilde{\alpha }}_{i}^{k - {k}_{2l} - {\varepsilon }_{2l}}{\mathcal{V}}_{i}\left( {\widetilde{x}\left( {{k}_{2l} + {\varepsilon }_{2l}}\right) }\right) , k \in {\Gamma }^{ + } \\ {\widetilde{\beta }}_{i}^{k - {k}_{2l}}{\mathcal{V}}_{i}\left( {\widetilde{x}\left( {k}_{2l}\right) }\right) , k \in {\Gamma }^{ - } \end{array}\right. \tag{13}
+$$
+
where ${\widetilde{\alpha }}_{i} = 1 - {\alpha }_{i}$ and ${\widetilde{\beta }}_{i} = 1 + {\beta }_{i}$ . When $k \in {\mathcal{T}}^{ + }\left( {{k}_{2l},{k}_{{2l} + 1}}\right)$ , it follows from (8) and (11) that
+
$$
\begin{aligned}
\mathcal{V}\left( {\widetilde{x}\left( k\right) }\right) & \leq {\widetilde{\alpha }}_{0}^{k - {k}_{2l} - {\varepsilon }_{2l}}{\mathcal{V}}_{0}\left( {\widetilde{x}\left( {{k}_{2l} + {\varepsilon }_{2l}}\right) }\right) \\
& \leq {\widetilde{\alpha }}_{0}^{k - {k}_{2l} - {\varepsilon }_{2l}} \cdot {\widetilde{\beta }}_{0}^{{\varepsilon }_{2l}} \cdot {\mathcal{V}}_{0}\left( {\widetilde{x}\left( {k}_{2l}\right) }\right) \\
& \leq \cdots \\
& \leq \theta \exp \left\{ {\max \left( {\frac{{\varepsilon }_{M}\ln {\phi }_{0} + \ln {\mu }_{0}}{{\tau }_{F}} + {v}_{0}, - \frac{{\varepsilon }_{M}\ln {\phi }_{1} + \ln {\mu }_{1}}{{\tau }_{D}} + {v}_{1}}\right) }\right. \\
& \quad \left. {\left( {{\Xi }_{F}\left( {{k}_{0}, k}\right) + {\Xi }_{D}\left( {{k}_{0}, k}\right) }\right) }\right\} \mathcal{V}\left( {\widetilde{x}\left( {k}_{0}\right) }\right)
\end{aligned} \tag{14}
$$
+
where $\theta = \exp \left\lbrack {\left( {{\varepsilon }_{M}\ln {\phi }_{0} + \ln {\mu }_{0}}\right) {\xi }_{F} - \left( {{\varepsilon }_{M}\ln {\phi }_{1} + \ln {\mu }_{1}}\right) {\xi }_{D}}\right\rbrack$ , $\omega = \max \left\{ {-\frac{{\varepsilon }_{M}\ln {\phi }_{0} + \ln {\mu }_{0}}{{\tau }_{F}} - \ln {\widetilde{\alpha }}_{0},\frac{{\varepsilon }_{M}\ln {\phi }_{1} + \ln {\mu }_{1}}{{\tau }_{D}} - \ln {\widetilde{\alpha }}_{1}}\right\}$ , ${\chi }_{0} = {\theta }_{0}^{{\varepsilon }_{M}}{\mu }_{0}$ , ${\chi }_{1} = {\theta }_{1}^{{\varepsilon }_{M}}{\mu }_{1}$ , and ${v}_{i} = \ln {\widetilde{\alpha }}_{i}$ .
+
From (11), we have $\omega > 0$ . Then, it is clear that $\mathcal{V}\left( {\widetilde{x}\left( k\right) }\right)$ converges to zero as $k \rightarrow \infty$ . Therefore, the closed-loop switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ are exponentially asymptotically stable when (8) and (11) hold.
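The dwell-time bounds in (11) are directly computable from the design scalars; the numbers below are illustrative only, not values from the paper:

```python
import math

# Evaluate the bounds in (11):
#   tau_D < (eps_M*ln(phi1) + ln(mu1)) / ln(alpha1_t)
#   tau_F > -(eps_M*ln(phi0) + ln(mu0)) / ln(alpha0_t)
# with alpha_i_t = 1 - alpha_i and phi_i = (1 + beta_i)/(1 - alpha_i).
def dwell_time_bounds(alpha0, beta0, mu0, alpha1, beta1, mu1, eps_M):
    a0, a1 = 1.0 - alpha0, 1.0 - alpha1
    phi0, phi1 = (1.0 + beta0) / a0, (1.0 + beta1) / a1
    tau_D_max = (eps_M * math.log(phi1) + math.log(mu1)) / math.log(a1)
    tau_F_min = -(eps_M * math.log(phi0) + math.log(mu0)) / math.log(a0)
    return tau_D_max, tau_F_min

# Illustrative scalars satisfying mu0 > 1, 0 < mu1 < 1, 0 < alpha_i < 1, beta_i > 0.
tau_D_max, tau_F_min = dwell_time_bounds(0.1, 0.2, 1.5, 0.1, 0.2, 0.5, eps_M=2)
```

With these numbers the attack intervals must be short (roughly ${\tau }_{D} < {1.12}$ ) while the sleep-interval dwell time must be long (roughly ${\tau }_{F} > {9.31}$ ), matching the intuition that DoS attacks must be sparse for the detection scheme to remain stable.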
+
Next, if $\varpi \left( k\right) \neq 0$ for $k \in \left\lbrack {{k}_{2l},{k}_{{2l} + 1}}\right)$ , then under zero initial conditions the following can be derived
+
+$$
+\Delta {\mathcal{V}}_{i}\left( {\widetilde{x}\left( k\right) }\right) < \left\{ \begin{array}{l} - {\alpha }_{i}{\mathcal{V}}_{i}\left( {\widetilde{x}\left( k\right) }\right) - \Upsilon \left( k\right) , k \in {\Gamma }^{ + } \\ {\beta }_{i}{\mathcal{V}}_{i}\left( {\widetilde{x}\left( k\right) }\right) - \Upsilon \left( k\right) , k \in {\Gamma }^{ - } \end{array}\right. \tag{15}
+$$
+
where $i = 0,1$ and $\Upsilon \left( k\right) = {e}^{T}\left( k\right) e\left( k\right) - {\gamma }^{2}{\varpi }^{T}\left( k\right) \varpi \left( k\right)$ . When $k \in {\mathcal{T}}^{ + }\left( {{k}_{2l},{k}_{{2l} + 1}}\right)$ , a similar derivation from (10) and (15) yields

$$
\begin{aligned}
\mathcal{V}\left( {\widetilde{x}\left( k\right) }\right) \leq \; & {\widetilde{\alpha }}_{0}^{k - {k}_{2l}}{\widetilde{\alpha }}_{0}^{{k}_{{2l} - 1} - {k}_{{2l} - 2}}\cdots {\widetilde{\alpha }}_{0}^{{k}_{1} - {k}_{0}}{\phi }_{0}^{{\varepsilon }_{2l}}{\phi }_{0}^{{\varepsilon }_{{2l} - 2}}\cdots {\phi }_{0}^{{\varepsilon }_{0}}{\mu }_{0}^{{\mathrm{M}}_{F}\left( {{k}_{0}, k}\right) } \\
& \cdot {\widetilde{\alpha }}_{1}^{{k}_{2l} - {k}_{{2l} - 1}}\cdots {\widetilde{\alpha }}_{1}^{{k}_{2} - {k}_{1}}{\phi }_{1}^{{\varepsilon }_{{2l} - 1}}\cdots {\phi }_{1}^{{\varepsilon }_{1}}{\mu }_{1}^{{\mathrm{M}}_{D}\left( {{k}_{0}, k}\right) }\mathcal{V}\left( {\widetilde{x}\left( {k}_{0}\right) }\right) \\
& - {\widetilde{\alpha }}_{0}^{k - {k}_{2l}}{\widetilde{\alpha }}_{0}^{{k}_{{2l} - 1} - {k}_{{2l} - 2}}\cdots {\widetilde{\alpha }}_{0}^{{k}_{1} - {k}_{0}}{\phi }_{0}^{{\varepsilon }_{2l}}{\phi }_{0}^{{\varepsilon }_{{2l} - 2}}\cdots {\phi }_{0}^{{\varepsilon }_{0}}{\mu }_{0}^{{\mathrm{M}}_{F}\left( {{k}_{0}, k}\right) }{\widetilde{\alpha }}_{1}^{{k}_{2l} - {k}_{{2l} - 1}}\cdots \\
& \quad {\widetilde{\alpha }}_{1}^{{k}_{2} - {k}_{1}}{\phi }_{1}^{{\varepsilon }_{{2l} - 1}}\cdots {\phi }_{1}^{{\varepsilon }_{1}}{\mu }_{1}^{{\mathrm{M}}_{D}\left( {{k}_{0}, k}\right) }\mathop{\sum }\limits_{{s = {k}_{0} + {\Delta }_{0}}}^{{{k}_{1} - 1}}{\widetilde{\alpha }}_{0}^{{k}_{1} - s - 1}\Upsilon \left( s\right) \\
& - {\widetilde{\alpha }}_{0}^{k - {k}_{2l}}{\widetilde{\alpha }}_{0}^{{k}_{{2l} - 1} - {k}_{{2l} - 2}}\cdots {\widetilde{\alpha }}_{0}^{{k}_{1} - {k}_{0}}{\phi }_{0}^{{\varepsilon }_{2l}}{\phi }_{0}^{{\varepsilon }_{{2l} - 2}}\cdots {\phi }_{0}^{{\varepsilon }_{0}}{\mu }_{0}^{{\mathrm{M}}_{F}\left( {{k}_{0}, k}\right) }{\widetilde{\alpha }}_{1}^{{k}_{2l} - {k}_{{2l} - 1}}\cdots {\widetilde{\alpha }}_{1}^{{k}_{2} - {k}_{1}}{\phi }_{1}^{{\varepsilon }_{{2l} - 1}}\cdots {\phi }_{1}^{{\varepsilon }_{1}}{\mu }_{1}^{{\mathrm{M}}_{D}\left( {{k}_{0}, k}\right) } \\
& \quad \cdot \mathop{\sum }\limits_{{s = {k}_{0}}}^{{{\hslash }_{0} - 1}}\left( {{\widetilde{\alpha }}^{{k}_{1} - {\hslash }_{0}}{\phi }_{0}^{{\hslash }_{0} - s - 1}\Upsilon \left( s\right) }\right) - \mathop{\sum }\limits_{{s = {\hslash }_{2l}}}^{{k - 1}}{\widetilde{\alpha }}_{0}^{k - s - 1}\Upsilon \left( s\right) \\
& - \mathop{\sum }\limits_{{s = {k}_{2l}}}^{{{\hslash }_{2l} - 1}}{\widetilde{\alpha }}_{0}^{k - s - 1}{\phi }_{0}^{{\hslash }_{2l} - s - 1}\Upsilon \left( s\right)
\end{aligned} \tag{16}
$$
+
Since ${\varepsilon }_{M} = \max \left\{ {\varepsilon }_{i}\right\}$ and $1 < {\phi }_{0}^{{k}_{2l} + {\varepsilon }_{2l} - s - 1} < {\phi }_{0}^{{\varepsilon }_{M} - 1}$ , under zero initial conditions, i.e., $\mathcal{V}\left( {\widetilde{x}\left( {k}_{0}\right) }\right) = 0$ and $\mathcal{V}\left( {\widetilde{x}\left( k\right) }\right) \geq 0$ , and according to Definition 1, one obtains
+
+$$
+\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\widetilde{\alpha }}_{0}^{k - s - 1}{\widetilde{\alpha }}_{0}^{{\Xi }_{F}\left( {{k}_{0}, s}\right) }{\widetilde{\alpha }}_{1}^{{\Xi }_{D}\left( {{k}_{0}, s}\right) }{e}^{T}\left( s\right) e\left( s\right) \leq \tag{17}
+$$
+
+$$
+{\chi }_{0}^{{\xi }_{F}}{\chi }_{1}^{{\xi }_{D}}{\gamma }^{2}\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\widetilde{\alpha }}_{0}^{k - s - 1}{\theta }_{0}^{{\varepsilon }_{M} - 1}{\varpi }^{T}\left( s\right) \varpi \left( s\right) .
+$$
+
Summing (17) over $k \in \lbrack {k}_{0},\infty )$ gives
+
+$$
+\mathop{\sum }\limits_{{k = {k}_{0}}}^{\infty }\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\widetilde{\alpha }}_{0}^{k - s - 1}{\widetilde{\alpha }}^{s - {k}_{0}}{e}^{T}\left( s\right) e\left( s\right) \leq {\chi }_{0}^{{\xi }_{F}}{\chi }_{1}^{{\xi }_{D}} \tag{18}
+$$
+
+$$
+{\gamma }^{2}\mathop{\sum }\limits_{{k = {k}_{0}}}^{\infty }\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\widetilde{\alpha }}_{0}^{k - s - 1}{\theta }_{0}^{{\varepsilon }_{M} - 1}{\varpi }^{T}\left( s\right) \varpi \left( s\right)
+$$
+
+which is equivalent to
+
+$$
+\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\widetilde{\alpha }}^{s - {k}_{0}}{e}^{T}\left( s\right) e\left( s\right) \leq {\chi }_{0}^{{\xi }_{F}}{\chi }_{1}^{{\xi }_{D}} \tag{19}
+$$
+
+$$
+{\theta }_{0}^{{\varepsilon }_{M} - 1}{\gamma }^{2}\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\varpi }^{T}\left( s\right) \varpi \left( s\right) .
+$$
+
Thus, the closed-loop switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ are exponentially asymptotically stable and satisfy the exponential ${H}_{\infty }$ performance index ${\gamma }_{s} = \max \left\{ {\sqrt{{\left( {\theta }_{0}^{{\varepsilon }_{M}}{\mu }_{0}\right) }^{{\xi }_{F}}{\left( {\theta }_{1}^{{\varepsilon }_{M}}{\mu }_{1}\right) }^{{\xi }_{D}}{\theta }_{0}^{{\varepsilon }_{M} - 1}} \cdot \gamma }\right\}$ , which completes the proof.
+
Due to the presence of numerous unknown matrix couplings, the filter gains are typically difficult to obtain directly from Theorem 1. Linear solvability conditions for the designed filters are therefore proposed in Theorem 2.
+
Theorem 2: Consider the switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ under DoS attacks with ${\tau }_{F}$ and ${\tau }_{D}$ , and scalars ${\alpha }_{i},{\beta }_{i},\gamma ,{\mu }_{0}$ and ${\mu }_{1}$ satisfying $0 < {\alpha }_{i} < 1,{\beta }_{i} > 0,\gamma > 0,{\mu }_{0} > 1$ and $0 < {\mu }_{1} < 1$ . If there exist symmetric positive-definite matrices ${\mathcal{P}}_{i1},{\mathcal{P}}_{i3}$ and matrices ${\mathcal{P}}_{i2},{\mathcal{G}}_{i},{\mathcal{Q}}_{i},{\mathcal{R}}_{i},{\mathcal{A}}_{Fi},{\mathcal{B}}_{Fi},{\mathcal{C}}_{Fi},{\mathcal{D}}_{Fi}$ , with $i, j \in \{ 0,1\}$ and $i \neq j$ , satisfying the following conditions
+
+$$
+\left\lbrack \begin{matrix} {\Pi }_{i}^{11} & {\Pi }_{i}^{12} & 0 & {\Pi }_{i}^{14} & {\mathcal{A}}_{Fi} & {\Pi }_{i}^{16} & {\Pi }_{i}^{17} \\ * & {\Pi }_{i}^{22} & 0 & {\Pi }_{i}^{24} & {\mathcal{A}}_{Fi} & {\Pi }_{i}^{26} & {\Pi }_{i}^{27} \\ * & * & - I & {\Pi }_{i}^{34} & {\mathcal{C}}_{Fi} & 0 & - I \\ * & * & * & - {\widetilde{\alpha }}_{i}{\mathcal{P}}_{i1} & - {\widetilde{\alpha }}_{i}{\mathcal{P}}_{i2} & 0 & 0 \\ * & * & * & * & - {\widetilde{\alpha }}_{i}{\mathcal{P}}_{i3} & 0 & 0 \\ * & * & * & * & * & - {\gamma }^{2}I & 0 \\ * & * & * & * & * & * & - {\gamma }^{2}I \end{matrix}\right\rbrack < 0,
+$$
+
+(20)
+
+$$
+\left\lbrack \begin{matrix} {\Pi }_{ij}^{11} & {\Pi }_{ij}^{12} & 0 & {\Pi }_{ij}^{14} & {\mathcal{A}}_{Fj} & {\Pi }_{ij}^{16} & {\Pi }_{ij}^{17} \\ * & {\Pi }_{i}^{22} & 0 & {\Pi }_{ij}^{24} & {\mathcal{A}}_{Fj} & {\Pi }_{ij}^{26} & {\Pi }_{ij}^{27} \\ * & * & - I & {\Pi }_{ij}^{34} & {\mathcal{C}}_{Fj} & 0 & - I \\ * & * & * & - {\widetilde{\beta }}_{i}{\mathcal{P}}_{i1} & - {\widetilde{\beta }}_{i}{\mathcal{P}}_{i2} & 0 & 0 \\ * & * & * & * & - {\widetilde{\beta }}_{i}{\mathcal{P}}_{i3} & 0 & 0 \\ * & * & * & * & * & - {\gamma }^{2}I & 0 \\ * & * & * & * & * & * & - {\gamma }^{2}I \end{matrix}\right\rbrack < 0
+$$
+
+(21)
+
+$$
+\left\lbrack \begin{matrix} {\Omega }^{11} & {\Omega }^{12} & {\mathcal{G}}_{i}^{T} & {\mathcal{R}}_{i} \\ & {\Omega }^{22} & {\mathcal{Q}}_{i}^{T} & {\mathcal{R}}_{i} \\ & * & - {\mu }_{i}{\mathcal{P}}_{j1} & - {\mu }_{i}{\mathcal{P}}_{j2} \\ & * & * & - {\mu }_{i}{\mathcal{P}}_{j3} \end{matrix}\right\rbrack \leq 0 \tag{22}
+$$
+
+$$
+{\tau }_{D} < \frac{{\varepsilon }_{M}\ln {\phi }_{1} + \ln {\mu }_{1}}{\ln {\widetilde{\alpha }}_{1}},{\tau }_{F} > - \frac{{\varepsilon }_{M}\ln {\phi }_{0} + \ln {\mu }_{0}}{\ln {\widetilde{\alpha }}_{0}}, \tag{23}
+$$
+
the closed-loop switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ are exponentially asymptotically stable and satisfy the exponential ${H}_{\infty }$ performance index ${\gamma }_{s} = \max \left\{ {\sqrt{{\left( {\theta }_{0}^{{\varepsilon }_{M}}{\mu }_{0}\right) }^{{\xi }_{F}}{\left( {\theta }_{1}^{{\varepsilon }_{M}}{\mu }_{1}\right) }^{{\xi }_{D}}{\theta }_{0}^{{\varepsilon }_{M} - 1}} \cdot \gamma }\right\}$ , where $\widetilde{\alpha } = 1 - \alpha ,\widetilde{\beta } = 1 + \beta ,{\Pi }_{i}^{11} = {\mathcal{P}}_{i1} - {\mathcal{G}}_{i} - {\mathcal{G}}_{i}^{T},$ ${\Pi }_{i}^{12} = {\mathcal{P}}_{i2} - {\mathcal{Q}}_{i} - {\mathcal{R}}_{i},{\Pi }_{i}^{14} = {\mathcal{G}}_{i}{}^{T}{A}_{id} + {\mathcal{B}}_{Fi}{C}_{d},$ ${\Pi }_{i}^{16} = {\mathcal{G}}_{i}{}^{T}{B}_{1i},{\Pi }_{i}^{17} = {\mathcal{G}}_{i}{}^{T}{B}_{2i},{\Pi }_{i}^{22} = {\mathcal{P}}_{i3} - {\mathcal{R}}_{i} - {\mathcal{R}}_{i}{}^{T},$ ${\Pi }_{i}^{24} = {\mathcal{Q}}_{i}{}^{T}{A}_{id} + {\mathcal{B}}_{Fi}{C}_{d},{\Pi }_{i}^{26} = {\mathcal{Q}}_{i}{}^{T}{B}_{1i},{\Pi }_{i}^{27} = {\mathcal{Q}}_{i}{}^{T}{B}_{2i},$ ${\Pi }_{i}^{34} = {\mathcal{D}}_{Fi}{C}_{d},{\Pi }_{ij}^{11} = {\mathcal{P}}_{i1} - {\mathcal{G}}_{j} - {\mathcal{G}}_{j}{}^{T},{\Pi }_{ij}^{12} = {\mathcal{P}}_{i2} - {\mathcal{Q}}_{j} - {\mathcal{R}}_{j},$ ${\Pi }_{ij}^{14} = {\mathcal{G}}_{j}{}^{T}{A}_{id} + {\mathcal{B}}_{Fj}{C}_{d},{\Pi }_{ij}^{16} = {\mathcal{G}}_{j}{}^{T}{B}_{1i},{\Pi }_{ij}^{17} = {\mathcal{G}}_{j}{}^{T}{B}_{2i},$ ${\Pi }_{ij}^{22} = {\mathcal{P}}_{i3} - {\mathcal{R}}_{j} - {\mathcal{R}}_{j}{}^{T},{\Pi }_{ij}^{24} = {\mathcal{Q}}_{j}{}^{T}{A}_{id} + {\mathcal{B}}_{Fj}{C}_{d},$ ${\Pi }_{ij}^{26} = {\mathcal{Q}}_{j}{}^{T}{B}_{1i},{\Pi }_{ij}^{27} = {\mathcal{Q}}_{j}{}^{T}{B}_{2i},{\Pi }_{ij}^{34} = {\mathcal{D}}_{Fj}{C}_{d},$ ${\Omega }^{11} = {\mathcal{P}}_{i1} - {\mu }_{i}\left( {{\mathcal{G}}_{i} + {\mathcal{G}}_{i}{}^{T}}\right) ,{\Omega }^{12} = {\mathcal{P}}_{i2} - {\mu }_{i}{\mathcal{Q}}_{i} - {\mu }_{i}{\mathcal{R}}_{i}{}^{T},{\Omega }^{22} = {\mathcal{P}}_{i3} - {\mu }_{i}\left( {{\mathcal{R}}_{i} + {\mathcal{R}}_{i}{}^{T}}\right) .$
+
In addition, if (20)-(23) admit a solution, the filter gains can be obtained as
+
+$$
+\left\lbrack \begin{matrix} {\mathcal{A}}_{fi} & {\mathcal{B}}_{fi} \\ {\mathcal{C}}_{fi} & {\mathcal{D}}_{fi} \end{matrix}\right\rbrack = \left\lbrack \begin{matrix} {\mathcal{R}}_{i}{}^{-1} & 0 \\ 0 & I \end{matrix}\right\rbrack \left\lbrack \begin{matrix} {\mathcal{A}}_{Fi} & {\mathcal{B}}_{Fi} \\ {\mathcal{C}}_{Fi} & {\mathcal{D}}_{Fi} \end{matrix}\right\rbrack . \tag{24}
+$$
+
Proof: Based on the Projection Lemma and the Schur complement Lemma, pre- and post-multiplying (8) shows that (8) and (20) are equivalent. Similarly, pre- and post-multiplying (9) shows that (9) and (21) are equivalent. Theorem 2 is thus proved.
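The gain reconstruction in (24) is a plain block-matrix product. The following NumPy sketch illustrates it with randomly generated placeholder matrices (the dimensions and values are assumptions for illustration, not quantities from the paper):

```python
import numpy as np

# Recover the implementable filter gains from the LMI variables as in (24):
# [[A_f, B_f], [C_f, D_f]] = blkdiag(R_i^{-1}, I) @ [[A_F, B_F], [C_F, D_F]].
# All matrices below are random placeholders.
rng = np.random.default_rng(0)
n, p = 3, 1                                     # assumed state/output dims
R_i = rng.normal(size=(n, n)) + 4 * np.eye(n)   # slack variable, invertible
A_F = rng.normal(size=(n, n))
B_F = rng.normal(size=(n, p))
C_F = rng.normal(size=(p, n))
D_F = rng.normal(size=(p, p))

top = np.block([[A_F, B_F], [C_F, D_F]])
T = np.block([[np.linalg.inv(R_i), np.zeros((n, p))],
              [np.zeros((p, n)), np.eye(p)]])
gains = T @ top                                 # [[A_f, B_f], [C_f, D_f]]
A_f, B_f = gains[:n, :n], gains[:n, n:]
C_f, D_f = gains[n:, :n], gains[n:, n:]
```

Note that only the `(A_F, B_F)` block is rescaled by ${\mathcal{R}}_{i}^{-1}$ ; the output gains `(C_F, D_F)` are passed through unchanged.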
+
For the purpose of fault detection, the residual is obtained from the difference between the measured output and its estimate. The following residual evaluation function (REF) is designed
+
+$$
+{\mathcal{J}}_{r}\left( k\right) = \sqrt{\frac{1}{k}\mathop{\sum }\limits_{{s = 1}}^{k}{r}^{T}\left( s\right) r\left( s\right) }. \tag{25}
+$$
+
The threshold of (25) is selected as
+
+$$
+{\mathcal{J}}_{th} = \mathop{\sup }\limits_{\substack{{d\left( k\right) \in {l}_{2}} \\ {f\left( k\right) = 0} }}{\mathcal{J}}_{r}\left( k\right) . \tag{26}
+$$
+
Therefore, the fault detection decision logic is
+
+$$
+\left\{ \begin{matrix} \begin{Vmatrix}{{\mathcal{J}}_{r}\left( k\right) }\end{Vmatrix} > {\mathcal{J}}_{th} & \text{ Alarm } \\ \begin{Vmatrix}{{\mathcal{J}}_{r}\left( k\right) }\end{Vmatrix} \leq {\mathcal{J}}_{th} & \text{ No-alarm. } \end{matrix}\right. \tag{27}
+$$
+
+## IV. Simulation
+
This section demonstrates the effectiveness of the asynchronous FD strategy for a networked UMV under DoS attacks. The matrices $M, N$ and $R$ in system (1) are chosen as in [20]. Let ${\alpha }_{0} = {0.09},{\beta }_{0} = {0.05},{\alpha }_{1} = {0.11},{\beta }_{1} = {0.03}$ , ${\mu }_{0} = {1.4},{\mu }_{1} = {0.45},{\varepsilon }_{M} = 2,\sigma = 1$ and $\gamma = {44}$ . Then, from (11), the MDADT satisfies ${\tau }_{D} < {4.34}$ and ${\tau }_{F} > {6.60}$ . The UMV fault detection filter gains under DoS attacks can then be calculated by Theorem 2.
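The quoted MDADT bounds can be checked numerically from condition (23). The identification ${\phi }_{i} = \left( {1 + {\beta }_{i}}\right) /\left( {1 - {\alpha }_{i}}\right)$ used below is an assumption made here for illustration (the paper does not display ${\phi }_{i}$ numerically); with it, the stated values ${\tau }_{D} < {4.34}$ and ${\tau }_{F} > {6.60}$ are reproduced:

```python
import math

# Numeric check of the MDADT bounds in (23) with the simulation parameters.
alpha0, beta0, alpha1, beta1 = 0.09, 0.05, 0.11, 0.03
mu0, mu1, eps_M = 1.4, 0.45, 2

a0t, a1t = 1 - alpha0, 1 - alpha1   # alpha~_0 = 0.91, alpha~_1 = 0.89
phi0 = (1 + beta0) / a0t            # assumed form of phi_0
phi1 = (1 + beta1) / a1t            # assumed form of phi_1

tau_D_max = (eps_M * math.log(phi1) + math.log(mu1)) / math.log(a1t)
tau_F_min = -(eps_M * math.log(phi0) + math.log(mu0)) / math.log(a0t)
```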
+
To demonstrate the practicability of the FD filters designed for networked UMVs under DoS attacks, the following simulations are performed. Firstly, the UMV is subjected to thruster faults, external disturbances and DoS attacks. One possible sequence of DoS attacks is depicted in Fig. 1, where 1 denotes that an attack has occurred and 0 denotes the sleep state with no attack. Since DoS attacks lead to asynchronous switching between the filter and the primary system, the resulting switching sequence between the filter and the subsystem is shown in Fig. 2.
+
+
+
+Fig. 1. DoS attacks sequences.
+
+
+
+Fig. 2. Switching sequences.
+
The external disturbance $d\left( k\right)$ takes the following form
+
+$$
+d\left( k\right) = \left\{ {\begin{array}{l} {d}_{1}\left( k\right) = {12}\sin \left( k\right) \exp \left( {-{0.15k}}\right) \\ {d}_{2}\left( k\right) = {15}\sin \left( {0.73k}\right) , k \in \left\lbrack {5,{37}}\right\rbrack \\ {d}_{3}\left( k\right) = 9\sin \left( {0.2k}\right) , k \in \left\lbrack {{11},{45}}\right\rbrack \end{array}.}\right.
+$$
+
Case 1: DoS attack sequence 1 is used, and the fault signal ${f}^{1}\left( k\right)$ takes the following form
+
+$$
+{f}^{1}\left( k\right) = \left\{ {\begin{array}{l} {f}_{1}\left( k\right) = 2\sin \left( {0.2k}\right) \\ {f}_{2}\left( k\right) = \cos \left( {0.1k}\right) \\ {f}_{3}\left( k\right) = {0.8}\sin \left( {0.15k}\right) \end{array}, k \in \left\lbrack {{25},{35}}\right\rbrack .}\right.
+$$
+
Under the DoS attack sequence and the faults ${f}^{1}\left( k\right)$ , the curves of the residual signal $\parallel r\left( k\right) {\parallel }_{2}$ and the REF signal are depicted in Fig. 3 and Fig. 4, respectively. In the absence of faults, the threshold is chosen based on the maximum value of the REF signal: ${\mathcal{J}}_{th} = {0.215}$ . When $t = {25.11}\mathrm{\;s}$ , the fault signal is detected in time.
+
+
+
+Fig. 3. The residual signal $\parallel r\left( k\right) {\parallel }_{2}$ in Case 1.
+
+
+
+Fig. 4. The REF signal in Case 1.
+
Case 2: To further verify the sensitivity of the FD filter to faults, a fault with a smaller amplitude than in Case 1 but the same frequency is selected, and the same DoS attack sequence is used. The fault ${f}^{2}\left( k\right)$ takes the following form
+
+$$
+{f}^{2}\left( k\right) = \left\{ {\begin{array}{l} {f}_{1}\left( k\right) = {0.4}\sin \left( {0.2k}\right) \\ {f}_{2}\left( k\right) = {0.2}\cos \left( {0.1k}\right) \\ {f}_{3}\left( k\right) = {0.16}\sin \left( {0.15k}\right) \end{array}, k \in \left\lbrack {{25},{35}}\right\rbrack .}\right.
+$$
+
Under the DoS attack sequence and the faults ${f}^{2}\left( k\right)$ , the curves of the residual signal $\parallel r\left( k\right) {\parallel }_{2}$ and the REF signal are depicted in Fig. 5 and Fig. 6, respectively. Fig. 6 indicates that the fault detection threshold becomes smaller than in Case 1: ${\mathcal{J}}_{th} = {0.067}$ . When $t = {25.27}\mathrm{\;s}$ , the fault signal is detected in time. In contrast to Case 1, the residual amplitude and the REF signal are significantly reduced, which shows that the fault amplitude has a non-negligible effect on the system.
+
+
+
+Fig. 5. The residual signal $\parallel r\left( k\right) {\parallel }_{2}$ in Case 2.
+
+
+
+Fig. 6. The REF signal in Case 2.
+
+## V. CONCLUSION
+
To solve the problem that DoS attacks cannot be detected in time, this paper designs exponentially convergent ${H}_{\infty }$ filters based on an asynchronous switched method for UMVs under DoS attacks, which addresses the issue that the filters' switching frequently lags behind that of the subsystems in practical applications. On the basis of the MDADT and the PLF, a criterion on the tolerable MDADT is derived to maintain exponential ${H}_{\infty }$ performance. Sufficient conditions for the existence of the designed FD filter are described by LMIs, and the filter gains and the related MDADT parameters can be derived by solving these LMIs. Finally, the effectiveness of the designed filter is verified by numerical simulation.
+
+## REFERENCES
+
[1] L. Ma, Y.-L. Wang, and Q.-L. Han, "Event-triggered dynamic positioning for mass-switched unmanned marine vehicles in network environments," IEEE Transactions on Cybernetics, no. 5, pp. 3159-3171, MAY 2022.
+
[2] Q. Liu, Y. Long, T. Li, J. H. Park, and C. P. Chen, "Fault detection for unmanned marine vehicles under replay attack," IEEE Transactions on Fuzzy Systems, vol. 31, no. 5, pp. 1716-1728, MAY 2023.
+
+[3] B. S. Park and S. J. Yoo, "Fault detection and accommodation of saturated actuators for underactuated surface vessels in the presence of nonlinear uncertainties," Nonlinear Dynamics, vol. 85, no. 2, pp. 1067- 1077, JUL 2016.
+
+[4] Z. Duan, F. Ding, J. Liang, and Z. Xiang, "Observer-based fault detection for continuous-discrete systems in T-S fuzzy model," Nonlinear Analysis-Hybrid Systems, vol. 50, p. 101379, NOV 2023.
+
+[5] X.-L. Wang, G.-H. Yang, and D. Zhang, "Event-triggered fault detection observer design for T-S fuzzy systems," IEEE Transactions on Fuzzy Systems, vol. 29, no. 9, pp. 2532-2542, SEP 2021.
+
+[6] X. Yao, L. Wu, and W. X. Zheng, "Fault detection filter design for Markovian jump singular systems with intermittent measurements," IEEE Transactions on Signal Processing, vol. 59, no. 7, pp. 3099-3109, JUL 2011.
+
+[7] Y.-L. Wang and Q.-L. Han, "Network-based fault detection filter and controller coordinated design for unmanned surface vehicles in network environments," IEEE Transactions on Industrial Informatics, vol. 12, no. 5, pp. 1753-1765, OCT 2016.
+
+[8] X. Wang, Z. Fei, H. Gao, and J. Yu, "Integral-based event-triggered fault detection filter design for unmanned surface vehicles," IEEE Transactions on Industrial Informatics, vol. 15, no. 10, pp. 5626-5636, OCT 2019.
+
+[9] X.-N. Yu, L.-Y. Hao, and X.-L. Wang, "Fault tolerant control for an unmanned surface vessel based on integral sliding mode state feedback control," International Journal of Control Automation and Systems, vol. 20, no. 8, pp. 2514-2522, AUG 2022.
+
+[10] N. Wang, H. He, Y. Hou, and B. Han, "Model-free visual servo swarming of manned-unmanned surface vehicles with visibility maintenance and collision avoidance," IEEE Transactions on Intelligent Transportation Systems, SEP 2023.
+
+[11] S. Chen, Y. Chen, C. Pan, I. Ali, J. Pan, and W. He, "Distributed adaptive platoon secure control on unmanned vehicles system for lane change under compound attacks," IEEE Transactions on Intelligent Transportation Systems, vol. 24, no. 11, pp. 12637-12647, NOV 2023.
+
+[12] D. Ding, Z. Wang, Q.-L. Han, and G. Wei, "Security control for discrete-time stochastic nonlinear systems subject to deception attacks," IEEE Transactions on Systems Man and Cybernetics-Systems, vol. 48, no. 5, pp. 779-789, MAY 2018.
+
+[13] L. Zhao and G.-H. Yang, "Cooperative adaptive fault-tolerant control for multi-agent systems with deception attacks," Journal of the Franklin Institute-Engineering and Applied Mathematics, vol. 357, no. 6, pp. 3419-3433, APR 2020.
+
+[14] Y. Zhao, Z. Chen, C. Zhou, Y.-C. Tian, and Y. Qin, "Passivity-based robust control against quantified false data injection attacks in cyber-physical systems," IEEE-CAA Journal of Automatica Sinica, vol. 8, no. 8, pp. 1440-1450, AUG 2021.
+
+[15] Z. Ye, D. Zhang, and Z.-G. Wu, "Adaptive event-based tracking control of unmanned marine vehicle systems with DoS attack," Journal of the Franklin Institute-Engineering and Applied Mathematics, vol. 358, no. 3, pp. 1915-1939, FEB 2021.
+
+[16] X. Sun, G. Wang, Y. Fan, D. Mu, and B. Qiu, "A formation autonomous navigation system for unmanned surface vehicles with distributed control strategy," IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 5, pp. 2834-2845, MAY 2021.
+
[17] D. Zhang, Z. Ye, P. Chen, and Q.-G. Wang, "Intelligent event-based output feedback control with Q-learning for unmanned marine vehicle systems," Control Engineering Practice, vol. 105, p. 104616, DEC 2020.
+
[18] M. Liu, J. Yu, and Y. Liu, "Dynamic event-triggered asynchronous fault detection for Markov jump systems with partially accessible hidden information and subject to aperiodic DoS attacks," Applied Mathematics and Computation, vol. 431, p. 127317, OCT 2022.
+
+[19] D. Du, B. Jiang, P. Shi, and H. R. Karimi, "Fault detection for continuous-time switched systems under asynchronous switching," International Journal of Robust and Nonlinear Control, vol. 24, no. 11, pp. 1694-1706, JUL 2014.
+
[20] N. E. Kahveci and P. A. Ioannou, "Adaptive steering control for uncertain ship dynamics and stability analysis," Automatica, vol. 49, no. 3, pp. 685-697, MAR 2013.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/CxWEOEhqo6/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/CxWEOEhqo6/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..bd1030de100ef5d76f8563546c45ecd34cf82375
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/CxWEOEhqo6/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,381 @@
+§ ASYNCHRONOUS THRUSTER FAULT DETECTION FOR UNMANNED MARINE VEHICLES UNDER DOS ATTACKS
+
+Fuxing Wang
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu 611731, China
+
+wfx614328@163.com
+
+Yue Long
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu 611731, China
+
+longyue@uestc.edu.cn
+
+Tieshan Li
+
+School of Automation Engineering University of Electronic Science and Technology of China Chengdu 611731, China
+
+tieshanli@126.com
+
+Abstract-This paper investigates a thruster fault detection strategy for unmanned marine vehicles (UMVs) subjected to external disturbances and aperiodic Denial of Service (DoS) attacks. To address the challenge of timely detection of DoS attacks, the UMV and the corresponding filters are modeled within the framework of an asynchronous switched system. Sufficient conditions ensuring the system's exponential stability and prescribed performance are derived using model-dependent average dwell time and piecewise Lyapunov functions. Additionally, the tolerable lower bound of the sleep interval and the upper bound of the attack interval for DoS attacks are established. Solvable conditions for the designed fault detection filters are obtained by leveraging decoupling techniques. Finally, simulations conducted on a UMV validate the effectiveness of the proposed methods.
+
+Index Terms-Unmanned marine vehicles, asynchronous switched system, DoS attacks, fault detection.
+
+§ I. INTRODUCTION
+
+In recent years, unmanned marine vehicles (UMVs) have attracted significant attention in marine science and technology due to their wide-ranging applications in marine exploration, environmental monitoring, and resource development [1]. Nevertheless, the operational environment for UMVs is inherently complex, and their reliance on wireless communication networks for communication with shore-based centers makes them vulnerable to external disturbances, equipment malfunctions, cyber-attacks, and other disruptions [2]. The unpredictable nature of potential harm caused by these disturbances or faults, combined with the inherent vulnerabilities of cyberspace, renders UMV systems particularly susceptible to cyber-attacks. These risks can result in system failures and potentially catastrophic accidents [3]. As a result, improving the reliability and security of UMVs has emerged as a crucial area of research and development.
+
+The unpredictable nature of potential harm caused by disturbances or faults to unmanned marine vehicles (UMVs) underscores the critical need for a real-time fault detection (FD) warning mechanism. The core of fault detection methodology involves comparing system performances to identify fault signals. Current research predominantly focuses on model-based fault detection, which has shown significant success in various systems, including continuous-discrete systems [4], T-S fuzzy systems [5], and Markovian jump systems [6]. The primary approach involves generating residual signals through filters or observers and subsequently establishing a fault warning mechanism. For UMVs, several studies have made noteworthy contributions. [7] has explored the design of controllers and FD filters based on observers for networked UMVs, [8] proposed event-triggered fault detection mechanisms for UMVs in networked environments, and [2] utilized T-S fuzzy systems to model UMV systems, particularly addressing fault detection under replay attacks. Despite these advancements, the scope of fault detection research for UMVs remains relatively narrow and lacks comprehensive coverage [9]. Consequently, further investigation into robust and holistic fault detection strategies for UMVs is imperative to enhance their reliability and operational safety [10].
+
+On the other hand, due to the openness of cyberspace, UMV systems are particularly vulnerable to cyber-attacks. Deception attacks and Denial of Service (DoS) attacks are currently common types of attacks [11]. Deception attacks involve sending incorrect or tampered data to the system [12], including replay attacks [13] and false data injection attacks [14]. Compared to deception attacks, DoS attacks cause signal transmission to be unavailable for a period, leaving the system in an open-loop state, which makes it easier to cause severe disruption in system operations. Consequently, numerous studies on DoS attacks have emerged [15], [16].
+
However, most existing research assumes that Denial of Service (DoS) attacks can be detected promptly, suggesting that the switching of filters corresponding to each subsystem happens simultaneously with the subsystem switching [10], [17]. In practical applications, however, detecting DoS attacks in a timely manner proves challenging, leading to delays. This delay implies that the filter often takes additional time to adjust to the appropriate control mode based on the subsystem mode, resulting in asynchronous filter/subsystem switching [18]. As a result, filters designed for synchronous switching may not provide optimal detection performance in real-world scenarios [19]. Thus, incorporating asynchronous switching into thruster fault detection for unmanned marine vehicles (UMVs) under DoS attacks is of substantial practical significance.
+
+This work is supported in part by the National Natural Science Foundation of China under Grants 62273072, 51939001. (Corresponding author: Yue Long)
+
+Inspired by the previous discussion, this paper investigates thruster fault detection (FD) for unmanned marine vehicles (UMVs) under Denial of Service (DoS) attacks using an asynchronous switched method to enhance reliability and security. Addressing the challenge of timely DoS attack detection, the paper proposes an asynchronous switched filter specifically designed for thruster fault detection. Furthermore, leveraging model-dependent average dwell time (MDADT) and piecewise Lyapunov functions (PLF), the paper establishes the tolerable lower bound of the sleep interval and the upper bound of the attack interval for DoS attacks. The filter parameters are determined based on linear solvability conditions. The effectiveness of the proposed method is ultimately validated through simulation.
+
+§ II. PROBLEM FORMULATION AND MODELING
+
§ A. UMV MODEL
+
Consider a UMV with the following body-fixed equations of motion
+
+$$
+M\dot{\delta }\left( t\right) + {N\delta }\left( t\right) + {R\psi }\left( t\right) = {E\varphi }\left( t\right) , \tag{1}
+$$
+
+$$
+\dot{\psi }\left( t\right) = J\left( {\eta \left( t\right) }\right) \delta \left( t\right) ,
+$$
+
where $\delta \left( t\right) = {\left\lbrack {\delta }_{u}\left( t\right) ,{\delta }_{v}\left( t\right) ,{\delta }_{r}\left( t\right) \right\rbrack }^{T}$ with ${\delta }_{u}\left( t\right) ,{\delta }_{v}\left( t\right) ,{\delta }_{r}\left( t\right)$ representing the surge, sway and yaw velocities, respectively. $\psi \left( t\right) = {\left\lbrack {x}_{p}\left( t\right) ,{y}_{p}\left( t\right) ,\eta \left( t\right) \right\rbrack }^{T}$ , where ${x}_{p}\left( t\right)$ and ${y}_{p}\left( t\right)$ are positions and $\eta \left( t\right)$ is the yaw angle. $\varphi \left( t\right)$ is the control input. $M,N,R$ and $E$ denote the inertia, damping, mooring-force and configuration matrices, respectively, where $M$ is symmetric positive definite and invertible, i.e., $M = {M}^{T} > 0$ ,
+
+$J\left( {\eta \left( t\right) }\right) = \left\lbrack \begin{matrix} \cos \left( {\eta \left( t\right) }\right) & - \sin \left( {\eta \left( t\right) }\right) & 0 \\ \sin \left( {\eta \left( t\right) }\right) & \cos \left( {\eta \left( t\right) }\right) & 0 \\ 0 & 0 & 1 \end{matrix}\right\rbrack .$
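The kinematic matrix $J\left( {\eta \left( t\right) }\right)$ is a planar rotation about the yaw axis and is therefore orthogonal with unit determinant. A small NumPy check (the yaw angle below is an arbitrary illustrative value):

```python
import numpy as np

# Kinematic matrix J(eta) from (1): rotation by the yaw angle eta about
# the vertical axis, so J(eta)^T J(eta) = I and det J(eta) = 1.
def J(eta):
    c, s = np.cos(eta), np.sin(eta)
    return np.array([[c,  -s,  0.0],
                     [s,   c,  0.0],
                     [0.0, 0.0, 1.0]])

eta = 0.7   # arbitrary yaw angle in radians (illustrative)
```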
+
Then, by defining $x\left( t\right) = \delta \left( t\right) - {\delta }_{\text{ ref }},A\left( t\right) =$ $- M{\left( t\right) }^{-1}N\left( t\right) ,{B}_{1}\left( t\right) = M{\left( t\right) }^{-1}R$ and ${B}_{2}\left( t\right) = M{\left( t\right) }^{-1}E$ , and taking into account the unavoidable disturbance $\widetilde{d}\left( t\right)$ caused by wind, waves and currents, the system (1) can be expressed as
+
+$$
+\left\{ \begin{array}{l} \dot{x}\left( t\right) = {Ax}\left( t\right) + {B}_{1}d\left( t\right) + {B}_{2}\varphi \left( t\right) , \\ y\left( t\right) = {Cx}\left( t\right) , \end{array}\right. \tag{2}
+$$
+
+where $d\left( t\right) = {B}_{1}{\left( t\right) }^{-1}{d}^{ * }\left( t\right) - \psi \left( t\right) + {B}_{1}{\left( t\right) }^{-1}A{\delta }_{\text{ ref }}$ and $C =$ $\left\lbrack \begin{array}{lll} 0 & 0 & 1 \end{array}\right\rbrack$ denotes the output matrix.
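The transformation from (1) to the state-space form (2) amounts to left-multiplying by ${M}^{-1}$ . A NumPy sketch with constant random placeholder matrices (the paper takes $M, N, R$ from [20], which are not reproduced here):

```python
import numpy as np

# Form A = -M^{-1} N, B1 = M^{-1} R, B2 = M^{-1} E as in the derivation
# of (2). M, N, R, E are random placeholders; M is made symmetric
# positive definite as the model requires.
rng = np.random.default_rng(2)
X = rng.normal(size=(3, 3))
M = X @ X.T + 3 * np.eye(3)        # symmetric positive definite
N = rng.normal(size=(3, 3))
R = rng.normal(size=(3, 3))
E = rng.normal(size=(3, 3))

Minv = np.linalg.inv(M)
A, B1, B2 = -Minv @ N, Minv @ R, Minv @ E
C = np.array([[0.0, 0.0, 1.0]])    # output matrix picks the yaw state
```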
+
Consider the thruster fault ${\varphi }^{F}\left( t\right) = {\rho \varphi }\left( t\right) + {\sigma f}\left( t\right)$ and assume the control input $\varphi \left( t\right) = {Kx}\left( t\right)$ is designed; then (2) is represented as
+
+$$
+\left\{ \begin{array}{l} \dot{x}\left( t\right) = \widehat{A}x\left( t\right) + {B}_{1}d\left( t\right) + {B}_{2}\widehat{f}\left( t\right) , \\ y\left( t\right) = {Cx}\left( t\right) , \end{array}\right. \tag{3}
+$$
+
+where $\widehat{A} = A + {B}_{2}K$ and $\widehat{f}\left( t\right) = - \bar{\rho }\varphi \left( t\right) + {\sigma f}\left( t\right)$ .
+
§ B. DOS ATTACKS MODEL
+
Consider aperiodic DoS attacks of the following form:
+
+$$
+{A}_{\text{ Dos }} = \left\{ \begin{matrix} 0, & t \in \left\lbrack {{t}_{2l},{t}_{{2l} + 1}}\right) \triangleq {\kappa }_{0,{2l}} \\ 1, & t \in \left\lbrack {{t}_{{2l} + 1},{t}_{2\left( {l + 1}\right) }}\right) \triangleq {\kappa }_{1,{2l}} \end{matrix}\right. \tag{4}
+$$
+
+where $t \in \left\lbrack {{t}_{2l},{t}_{{2l} + 1}}\right) \triangleq {\kappa }_{0,{2l}}\;\left( {l \in \mathrm{N},{t}_{2l} \geq 0}\right)$ indicates the ${l}^{th}$ sleep interval with the length ${s}_{l} = {t}_{{2l} + 1} - {t}_{2l}$ , and $t \in \left\lbrack {{t}_{{2l} + 1},{t}_{2\left( {l + 1}\right) }}\right) \triangleq {\kappa }_{1,{2l}}$ indicates the ${l}^{th}$ DoS attacks interval with the length ${d}_{l} = {t}_{2\left( {l + 1}\right) } - {t}_{{2l} + 1}$ .
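The alternating structure of (4) is easy to generate for simulation purposes. In the sketch below, the interval lengths are illustrative placeholders, not the attack sequence shown in the paper's Fig. 1:

```python
import numpy as np

# Aperiodic DoS signal as in (4): alternating sleep intervals (A_DoS = 0)
# of length s_l and attack intervals (A_DoS = 1) of length d_l.
def dos_sequence(sleep_lens, attack_lens):
    sig = []
    for s_len, d_len in zip(sleep_lens, attack_lens):
        sig.extend([0] * s_len)   # sleep interval [t_{2l}, t_{2l+1})
        sig.extend([1] * d_len)   # attack interval [t_{2l+1}, t_{2(l+1)})
    return np.array(sig)

sig = dos_sequence(sleep_lens=[8, 10, 7], attack_lens=[3, 4, 2])
```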
+
+Due to the communication disruption caused by DoS attacks, the UMV system (3) can be recast as the following discretized switched system, where the sleep interval is expressed as $k \in \left\lbrack {{k}_{2l},{k}_{{2l} + 1}}\right)$ and the DoS attack interval as $k \in \left\lbrack {{k}_{{2l} + 1},{k}_{2\left( {l + 1}\right) }}\right)$ :
+
+$$
+\left\{ \begin{array}{l} x\left( {k + 1}\right) = {A}_{id}x\left( k\right) + {B}_{1id}d\left( k\right) + {B}_{2id}\widehat{f}\left( k\right) \\ y\left( k\right) = {C}_{d}x\left( k\right) \end{array}\right. \tag{5}
+$$
+
+§ C. ASYNCHRONOUS SWITCHING FILTER
+
+In the case of the DoS attacks and thruster faults, the residual signal produced by the switched filter is as follows:
+
+$$
+\left\{ {\begin{array}{l} {x}_{f}\left( {k + 1}\right) = {A}_{fi}{x}_{f}\left( k\right) + {B}_{fi}y\left( k\right) \\ r\left( k\right) = {C}_{fi}{x}_{f}\left( k\right) + {D}_{fi}y\left( k\right) \end{array}\left( {i = 0,1}\right) }\right. \tag{6}
+$$
+
+where ${x}_{f}\left( k\right)$ is the state of the filters and $r\left( k\right)$ is the residual signal of the switched system (5). Define $\widetilde{x}\left( k\right) = {\left\lbrack \begin{array}{ll} {x}^{T}\left( k\right) & {x}_{f}^{T}\left( k\right) \end{array}\right\rbrack }^{T}$ , $\varpi \left( k\right) = {\left\lbrack \begin{array}{ll} {d}^{T}\left( k\right) & {f}^{T}\left( k\right) \end{array}\right\rbrack }^{T}$ and the residual evaluation signal $e\left( k\right) = r\left( k\right) - \widehat{f}\left( k\right)$ . Then (6) is rewritten as (7):
+
+$$
+{\Phi }_{0} : \left\{ {\begin{array}{l} \widetilde{x}\left( {k + 1}\right) = {\widetilde{A}}_{i}\widetilde{x}\left( k\right) + {\widetilde{B}}_{i}\varpi \left( k\right) \\ e\left( k\right) = {\widetilde{C}}_{i}\widetilde{x}\left( k\right) + {\widetilde{D}}_{i}\varpi \left( k\right) \end{array},k \in \left\lbrack {{k}_{l} + {\varepsilon }_{l},{k}_{l + 1}}\right) }\right.
+$$
+
+$$
+{\Phi }_{1} : \left\{ {\begin{array}{l} \widetilde{x}\left( {k + 1}\right) = {\widetilde{A}}_{ij}\widetilde{x}\left( k\right) + {\widetilde{B}}_{ij}\varpi \left( k\right) \\ e\left( k\right) = {\widetilde{C}}_{ij}\widetilde{x}\left( k\right) + {\widetilde{D}}_{ij}\varpi \left( k\right) \end{array},k \in \left\lbrack {{k}_{l},{k}_{l} + {\varepsilon }_{l}}\right) }\right.
+$$
+
+where $i \neq j,i \in \{ 0,1\} ,j \in \{ 0,1\} ,{\widetilde{A}}_{ij} = \left\lbrack \begin{matrix} {A}_{id} & 0 \\ {B}_{fj}{C}_{d} & {A}_{fj} \end{matrix}\right\rbrack$ , ${\widetilde{B}}_{ij} = \left\lbrack \begin{matrix} {B}_{1i} & {B}_{2i} \\ 0 & 0 \end{matrix}\right\rbrack ,{\widetilde{C}}_{ij} = \left\lbrack \begin{array}{ll} {D}_{fj}{C}_{d} & {C}_{fj} \end{array}\right\rbrack$ and ${\widetilde{D}}_{ij} =$ $\left\lbrack \begin{array}{ll} 0 & - \bar{I} \end{array}\right\rbrack$ .
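Numerically, one step of the residual filter (6) amounts to two matrix-vector products; during asynchrony the filter simply runs with the gains of the other mode ( $j \neq i$ ). A minimal sketch with hypothetical scalar filter matrices:

```python
import numpy as np

def filter_step(xf, y, Af, Bf, Cf, Df):
    """One step of the switched residual filter (6):
    r(k) = C_fi x_f(k) + D_fi y(k),  x_f(k+1) = A_fi x_f(k) + B_fi y(k)."""
    r = Cf @ xf + Df @ y
    xf_next = Af @ xf + Bf @ y
    return xf_next, r

# Hypothetical 1-D filter matrices, for illustration only
Af, Bf = np.array([[0.5]]), np.array([[1.0]])
Cf, Df = np.array([[1.0]]), np.array([[0.2]])
xf, y = np.array([1.0]), np.array([2.0])
xf_next, r = filter_step(xf, y, Af, Bf, Cf, Df)
print(r, xf_next)   # -> [1.4] [2.5]
```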
+
+To better set the stage for the next section, the following definitions are presented.
+
+Definition 1: For any switching signal $\tau \left( k\right)$ and $0 < {k}_{0} \leq k$ , let ${\mathcal{M}}_{\tau ,l}\left( {{k}_{0},k}\right)$ denote the number of times that the ${l}^{th}$ subsystem is activated over $\left\lbrack {{k}_{0},k}\right)$ . If
+
+$$
+{\mathcal{M}}_{\tau ,l}\left( {{k}_{0},k}\right) \leq {N}_{{\mathcal{M}}_{0,l}} + \frac{{N}_{l}\left( {{k}_{0},k}\right) }{{\lambda }_{l}}
+$$
+
+holds for a scalar ${\lambda }_{l} > 0$ and an integer ${N}_{{\mathcal{M}}_{0,l}} \geq 0$ , then ${\lambda }_{l}$ is called the mode-dependent average dwell time (MDADT), and ${N}_{l}\left( {{k}_{0},k}\right)$ is the total running time of the ${l}^{th}$ subsystem over $\left\lbrack {{k}_{0},k}\right)$ .
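The MDADT condition of Definition 1 is easy to check on a concrete switching signal: count the activations of a mode and its total running time. A small Python sketch (the switching signal and the constants ${N}_{{\mathcal{M}}_{0,l}}$ , ${\lambda }_{l}$ below are hypothetical):

```python
import numpy as np

def mdadt_holds(modes, l, N0, lam):
    """Check Definition 1 for subsystem l: M_{tau,l} <= N0 + N_l / lam,
    where M_{tau,l} counts activations of mode l and N_l is its running time."""
    modes = np.asarray(modes)
    running_time = int((modes == l).sum())
    # count switches *into* mode l; the initial sample counts as an activation
    activations = int(((modes[1:] == l) & (modes[:-1] != l)).sum())
    if len(modes) and modes[0] == l:
        activations += 1
    return activations <= N0 + running_time / lam

# Hypothetical switching signal: mode 0 active in three bursts (15 steps total)
sig = [0]*5 + [1]*3 + [0]*6 + [1]*2 + [0]*4
print(mdadt_holds(sig, l=0, N0=1, lam=6))   # 3 activations vs 1 + 15/6 -> True
```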
+
+Definition 2: Consider the asynchronous switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ with given scalars $\alpha ,\beta$ and $\gamma$ satisfying $0 < \alpha < 1$ , $\beta > 0$ and $\gamma > 0$ . Under zero initial conditions, if the asynchronous switched system is exponentially stable and satisfies $\mathop{\sum }\limits_{{s = {k}_{0}}}^{\infty }{\left( 1 - \alpha \right) }^{s}{e}^{\mathrm{T}}\left( s\right) e\left( s\right) \leq {\gamma }^{2}\mathop{\sum }\limits_{{s = {k}_{0}}}^{\infty }{\varpi }^{\mathrm{T}}\left( s\right) \varpi \left( s\right)$ , then the system is said to be exponentially stable with exponential ${H}_{\infty }$ performance index $\gamma$ .
+
+§ III. MAIN RESULTS
+
+In this section, the stability and ${H}_{\infty }$ performance of asynchronous switched systems (7) will be analyzed, and the sufficient and linearly solvable conditions for the designed switched FD filters are given.
+
+Theorem 1: Consider the switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ under DoS attacks, with scalars ${\alpha }_{i},{\beta }_{i},\gamma ,{\mu }_{0}$ and ${\mu }_{1}$ satisfying $0 < {\alpha }_{i} < 1,{\beta }_{i} > 0,\gamma > 0,{\mu }_{0} > 1$ and $0 < {\mu }_{1} < 1$ . If there exist symmetric positive-definite matrices ${\mathcal{P}}_{i}$ satisfying the following conditions
+
+$$
+{\widetilde{A}}_{i}^{T}{\mathcal{P}}_{i}{\widetilde{A}}_{i} - {\mathcal{P}}_{i} + {\alpha }_{i}{\mathcal{P}}_{i} < 0, \tag{8}
+$$
+
+$$
+{\widetilde{A}}_{ij}^{T}{\mathcal{P}}_{i}{\widetilde{A}}_{ij} - {\mathcal{P}}_{i} - {\beta }_{i}{\mathcal{P}}_{i} < 0, \tag{9}
+$$
+
+$$
+{\mathcal{P}}_{i} \leq {\mu }_{i}{\mathcal{P}}_{j} \tag{10}
+$$
+
+$$
+{\tau }_{D} < \frac{{\varepsilon }_{M}\ln {\phi }_{1} + \ln {\mu }_{1}}{\ln {\widetilde{\alpha }}_{1}},{\tau }_{F} > - \frac{{\varepsilon }_{M}\ln {\phi }_{0} + \ln {\mu }_{0}}{\ln {\widetilde{\alpha }}_{0}}, \tag{11}
+$$
+
+then the switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ are exponentially asymptotically stable with exponential ${H}_{\infty }$ performance, where $i \neq j,{\widetilde{\alpha }}_{i} = 1 - {\alpha }_{i},{\widetilde{\beta }}_{i} = 1 + {\beta }_{i},{\phi }_{i} = \frac{{\widetilde{\beta }}_{i}}{{\widetilde{\alpha }}_{i}}$ and ${\varepsilon }_{M}$ denotes the maximum time by which the filter lags the subsystem.
+
+Proof: The piecewise Lyapunov function for the closed-loop switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ is given as follows
+
+$$
+{\mathcal{V}}_{i}\left( {\widetilde{x}\left( k\right) }\right) = {\widetilde{x}}^{T}\left( k\right) {\mathcal{P}}_{i}\widetilde{x}\left( k\right) . \tag{12}
+$$
+
+When $\varpi \left( k\right) = 0$ and $k \in \left\lbrack {{k}_{2l},{k}_{{2l} + 1}}\right)$ , one obtains
+
+$$
+\mathcal{V}\left( {\widetilde{x}\left( k\right) }\right) \leq \left\{ \begin{array}{l} {\widetilde{\alpha }}_{i}^{k - {k}_{2l} - {\varepsilon }_{2l}}{\mathcal{V}}_{i}\left( {\widetilde{x}\left( {{k}_{2l} + {\varepsilon }_{2l}}\right) }\right) ,k \in {\Gamma }^{ + } \\ {\widetilde{\beta }}_{i}^{k - {k}_{2l}}{\mathcal{V}}_{i}\left( {\widetilde{x}\left( {k}_{2l}\right) }\right) ,k \in {\Gamma }^{ - } \end{array}\right. \tag{13}
+$$
+
+where ${\widetilde{\alpha }}_{i} = 1 - {\alpha }_{i}$ and ${\widetilde{\beta }}_{i} = 1 + {\beta }_{i}$ . When $k \in {\mathcal{T}}^{ + }\left( {{k}_{2l},{k}_{{2l} + 1}}\right)$ , it follows from (8) and (11) that
+
+$$
+\mathcal{V}\left( {\widetilde{x}\left( k\right) }\right) \leq {\widetilde{\alpha }}_{0}^{k - {k}_{2l} - {\varepsilon }_{2l}}{\mathcal{V}}_{0}\left( {\widetilde{x}\left( {{k}_{2l} + {\varepsilon }_{2l}}\right) }\right)
+$$
+
+$$
+\leq {\widetilde{\alpha }}_{0}^{k - {k}_{2l} - {\varepsilon }_{2l}} \cdot {\widetilde{\beta }}_{0}^{{\varepsilon }_{2l}} \cdot {\mathcal{V}}_{0}\left( {\widetilde{x}\left( {k}_{2l}\right) }\right)
+$$
+
+$$
+< \cdots
+$$
+
+$$
+\leq \theta \exp \left\{ {\max \left( {\frac{{\varepsilon }_{M}\ln {\phi }_{0} + \ln {\mu }_{0}}{{\tau }_{F}} + {v}_{0}, - \frac{{\varepsilon }_{M}\ln {\phi }_{1} + \ln {\mu }_{1}}{{\tau }_{D}} + {v}_{1}}\right) }\right.
+$$
+
+$$
+\left. \left( {{\Xi }_{F}\left( {{k}_{0},k}\right) + {\Xi }_{D}\left( {{k}_{0},k}\right) }\right) \right\} \mathcal{V}\left( {\widetilde{x}\left( {k}_{0}\right) }\right)
+$$
+
+(14)
+
+where $\theta = \exp \left\lbrack {\left( {{\varepsilon }_{M}\ln {\phi }_{0} + \ln {\mu }_{0}}\right) {\xi }_{F} - \left( {{\varepsilon }_{M}\ln {\phi }_{1} + \ln {\mu }_{1}}\right) {\xi }_{D}}\right\rbrack$ , $\omega = \max \left\{ {-\frac{{\varepsilon }_{M}\ln {\phi }_{0} + \ln {\mu }_{0}}{{\tau }_{F}} - \ln {\widetilde{\alpha }}_{0},\frac{{\varepsilon }_{M}\ln {\phi }_{1} + \ln {\mu }_{1}}{{\tau }_{D}} - \ln {\widetilde{\alpha }}_{1}}\right\} ,$ ${\chi }_{0} = {\theta }_{0}^{{\varepsilon }_{M}}{\mu }_{0},{\chi }_{1} = {\theta }_{1}^{{\varepsilon }_{M}}{\mu }_{1},{v}_{i} = \ln {\widetilde{\alpha }}_{i}.$
+
+From (11), $\omega > 0$ holds. Then, it is clear that $\mathcal{V}\left( {\widetilde{x}\left( k\right) }\right)$ converges to zero as $k \rightarrow \infty$ . Therefore, the closed-loop switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ are exponentially asymptotically stable when (8) and (11) hold.
+
+Next, if $\varpi \left( k\right) \neq 0$ for $k \in \left\lbrack {{k}_{2l},{k}_{{2l} + 1}}\right)$ , then under zero initial conditions the following inequality is derived:
+
+$$
+\Delta {\mathcal{V}}_{i}\left( {\widetilde{x}\left( k\right) }\right) < \left\{ \begin{array}{l} - {\alpha }_{i}{\mathcal{V}}_{i}\left( {\widetilde{x}\left( k\right) }\right) - \Upsilon \left( k\right) ,k \in {\Gamma }^{ + } \\ {\beta }_{i}{\mathcal{V}}_{i}\left( {\widetilde{x}\left( k\right) }\right) - \Upsilon \left( k\right) ,k \in {\Gamma }^{ - } \end{array}\right. \tag{15}
+$$
+
+where $i = 0,1,\Upsilon \left( k\right) = {e}^{T}\left( k\right) e\left( k\right) - {\gamma }^{2}{\varpi }^{T}\left( k\right) \varpi \left( k\right)$ . When $k \in {\mathcal{T}}^{ + }\left( {{k}_{2l},{k}_{{2l} + 1}}\right)$ , a similar argument based on (10) and (15) yields the following inequality
+
+$$
+\mathcal{V}\left( {\widetilde{x}\left( k\right) }\right) \leq {\widetilde{\alpha }}_{0}^{k - {k}_{2l}}{\widetilde{\alpha }}_{0}^{{k}_{{2l} - 1} - {k}_{{2l} - 2}}\cdots {\widetilde{\alpha }}_{0}^{{k}_{1} - {k}_{0}}{\phi }_{0}^{{\varepsilon }_{2l}}{\phi }_{0}^{{\varepsilon }_{{2l} - 2}}\cdots {\phi }_{0}^{{\varepsilon }_{0}}.
+$$
+
+$$
+{\mu }_{0}^{{\mathrm{M}}_{F}\left( {{k}_{0},k}\right) }{\widetilde{\alpha }}_{1}^{{k}_{2l} - {k}_{{2l} - 1}}\cdots {\widetilde{\alpha }}_{1}^{{k}_{2} - {k}_{1}}{\phi }_{1}^{{\varepsilon }_{{2l} - 1}}\cdots {\phi }_{1}^{{\varepsilon }_{1}}.
+$$
+
+$$
+{\mu }_{1}^{{\mathrm{M}}_{D}\left( {{k}_{0},k}\right) }\mathcal{V}\left( {\widetilde{x}\left( {k}_{0}\right) }\right) - {\widetilde{\alpha }}_{0}^{k - {k}_{2l}}{\widetilde{\alpha }}_{0}^{{k}_{{2l} - 1} - {k}_{{2l} - 2}}\ldots
+$$
+
+$$
+{\widetilde{\alpha }}_{0}^{{k}_{1} - {k}_{0}}{\phi }_{0}^{{\varepsilon }_{2l}}{\phi }_{0}^{{\varepsilon }_{{2l} - 2}}\cdots {\phi }_{0}^{{\varepsilon }_{0}}{\mu }_{0}^{{\mathrm{M}}_{F}\left( {{k}_{0},k}\right) }{\widetilde{\alpha }}_{1}^{{k}_{2l} - {k}_{{2l} - 1}}\cdots
+$$
+
+$$
+{\widetilde{\alpha }}_{1}^{{k}_{2} - {k}_{1}}{\phi }_{1}^{{\varepsilon }_{{2l} - 1}}\cdots {\phi }_{1}^{{\varepsilon }_{1}}{\mu }_{1}^{{\mathrm{M}}_{D}\left( {{k}_{0},k}\right) }\mathop{\sum }\limits_{{s = {k}_{0} + {\Delta }_{0}}}^{{{k}_{1} - 1}}{\widetilde{\alpha }}_{0}^{{k}_{1} - s - 1}\Upsilon \left( s\right)
+$$
+
+$$
+- {\widetilde{\alpha }}_{0}^{k - {k}_{2l}}{\widetilde{\alpha }}_{0}^{{k}_{{2l} - 1} - {k}_{{2l} - 2}}\cdots {\widetilde{\alpha }}_{0}^{{k}_{1} - {k}_{0}}{\phi }_{0}^{{\varepsilon }_{2l}}{\phi }_{0}^{{\varepsilon }_{{2l} - 2}}\cdots {\phi }_{0}^{{\varepsilon }_{0}}.
+$$
+
+$$
+{\mu }_{0}^{{\mathrm{M}}_{F}\left( {{k}_{0},k}\right) }{\widetilde{\alpha }}_{1}^{{k}_{2l} - {k}_{{2l} - 1}}\cdots {\widetilde{\alpha }}_{1}^{{k}_{2} - {k}_{1}}{\phi }_{1}^{{\varepsilon }_{{2l} - 1}}\cdots {\phi }_{1}^{{\varepsilon }_{1}}{\mu }_{1}^{{\mathrm{M}}_{D}\left( {{k}_{0},k}\right) }
+$$
+
+$$
+\mathop{\sum }\limits_{{s = {k}_{0}}}^{{{\hslash }_{0} - 1}}\left( {{\widetilde{\alpha }}^{{k}_{1} - {\hslash }_{0}}{\phi }_{0}^{{\hslash }_{0} - s - 1}\Upsilon \left( s\right) }\right) - \mathop{\sum }\limits_{{s = {\hslash }_{2l}}}^{{k - 1}}{\widetilde{\alpha }}_{0}^{k - s - 1}\Upsilon \left( s\right)
+$$
+
+$$
+- \mathop{\sum }\limits_{{s = {k}_{2l}}}^{{{\hslash }_{2l} - 1}}{\widetilde{\alpha }}_{0}^{k - s - 1}{\phi }_{0}^{{\hslash }_{2l} - s - 1}\Upsilon \left( s\right)
+$$
+
+(16)
+
+Since ${\varepsilon }_{M} = \max \left\{ {\varepsilon }_{i}\right\}$ and $1 < {\phi }_{0}^{{k}_{2l} + {\varepsilon }_{2l} - s - 1} < {\phi }_{0}^{{\varepsilon }_{M} - 1}$ , under zero initial conditions ( $\mathcal{V}\left( {\widetilde{x}\left( {k}_{0}\right) }\right) = 0$ and $\mathcal{V}\left( {\widetilde{x}\left( k\right) }\right) \geq 0$ ) and according to Definition 1, one obtains
+
+$$
+\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\widetilde{\alpha }}_{0}^{k - s - 1}{\widetilde{\alpha }}_{0}^{{\Xi }_{F}\left( {{k}_{0},s}\right) }{\widetilde{\alpha }}_{1}^{{\Xi }_{D}\left( {{k}_{0},s}\right) }{e}^{T}\left( s\right) e\left( s\right) \leq \tag{17}
+$$
+
+$$
+{\chi }_{0}^{{\xi }_{F}}{\chi }_{1}^{{\xi }_{D}}{\gamma }^{2}\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\widetilde{\alpha }}_{0}^{k - s - 1}{\theta }_{0}^{{\varepsilon }_{M} - 1}{\varpi }^{T}\left( s\right) \varpi \left( s\right) .
+$$
+
+Summing (17) over $k \in \lbrack {k}_{0},\infty )$ gives
+
+$$
+\mathop{\sum }\limits_{{k = {k}_{0}}}^{\infty }\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\widetilde{\alpha }}_{0}^{k - s - 1}{\widetilde{\alpha }}^{s - {k}_{0}}{e}^{T}\left( s\right) e\left( s\right) \leq {\chi }_{0}^{{\xi }_{F}}{\chi }_{1}^{{\xi }_{D}} \tag{18}
+$$
+
+$$
+{\gamma }^{2}\mathop{\sum }\limits_{{k = {k}_{0}}}^{\infty }\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\widetilde{\alpha }}_{0}^{k - s - 1}{\theta }_{0}^{{\varepsilon }_{M} - 1}{\varpi }^{T}\left( s\right) \varpi \left( s\right)
+$$
+
+which is equivalent to
+
+$$
+\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\widetilde{\alpha }}^{s - {k}_{0}}{e}^{T}\left( s\right) e\left( s\right) \leq {\chi }_{0}^{{\xi }_{F}}{\chi }_{1}^{{\xi }_{D}} \tag{19}
+$$
+
+$$
+{\theta }_{0}^{{\varepsilon }_{M} - 1}{\gamma }^{2}\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\varpi }^{T}\left( s\right) \varpi \left( s\right) .
+$$
+
+Thus, the closed-loop switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ are exponentially asymptotically stable and satisfy the exponential ${H}_{\infty }$ performance index ${\gamma }_{s} = \max \left\{ {\sqrt{{\left( {\theta }_{0}^{{\varepsilon }_{M}}{\mu }_{0}\right) }^{{\xi }_{F}}{\left( {\theta }_{1}^{{\varepsilon }_{M}}{\mu }_{1}\right) }^{{\xi }_{D}}{\theta }_{0}^{{\varepsilon }_{M} - 1}} \cdot \gamma }\right\}$ , which completes the proof.
+
+Due to the presence of numerous unknown matrix couplings, it is typically difficult to obtain filter gains from Theorem 1. Therefore, linearly solvable conditions for the designed filters are proposed in Theorem 2.
+
+Theorem 2: Consider the switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ under DoS attacks with ${\tau }_{F}$ and ${\tau }_{D}$ , and scalars ${\alpha }_{i},{\beta }_{i},\gamma ,{\mu }_{0}$ and ${\mu }_{1}$ satisfying $0 < {\alpha }_{i} < 1,{\beta }_{i} > 0,\gamma > 0,{\mu }_{0} > 1$ and $0 < {\mu }_{1} < 1$ . If, for $i \neq j$ , there exist symmetric positive-definite matrices ${\mathcal{P}}_{i1},{\mathcal{P}}_{i3}$ and matrices ${\mathcal{P}}_{i2},{\mathcal{G}}_{i},{\mathcal{Q}}_{i},{\mathcal{R}}_{i},{\mathcal{A}}_{Fi},{\mathcal{B}}_{Fi},{\mathcal{C}}_{Fi},{\mathcal{D}}_{Fi}$ satisfying the following conditions
+
+$$
+\left\lbrack \begin{matrix} {\Pi }_{i}^{11} & {\Pi }_{i}^{12} & 0 & {\Pi }_{i}^{14} & {\mathcal{A}}_{Fi} & {\Pi }_{i}^{16} & {\Pi }_{i}^{17} \\ * & {\Pi }_{i}^{22} & 0 & {\Pi }_{i}^{24} & {\mathcal{A}}_{Fi} & {\Pi }_{i}^{26} & {\Pi }_{i}^{27} \\ * & * & - I & {\Pi }_{i}^{34} & {\mathcal{C}}_{Fi} & 0 & - I \\ * & * & * & - {\widetilde{\alpha }}_{i}{\mathcal{P}}_{i1} & - {\widetilde{\alpha }}_{i}{\mathcal{P}}_{i2} & 0 & 0 \\ * & * & * & * & - {\widetilde{\alpha }}_{i}{\mathcal{P}}_{i3} & 0 & 0 \\ * & * & * & * & * & - {\gamma }^{2}I & 0 \\ * & * & * & * & * & * & - {\gamma }^{2}I \end{matrix}\right\rbrack < 0,
+$$
+
+(20)
+
+$$
+\left\lbrack \begin{matrix} {\Pi }_{ij}^{11} & {\Pi }_{ij}^{12} & 0 & {\Pi }_{ij}^{14} & {\mathcal{A}}_{Fj} & {\Pi }_{ij}^{16} & {\Pi }_{ij}^{17} \\ * & {\Pi }_{i}^{22} & 0 & {\Pi }_{ij}^{24} & {\mathcal{A}}_{Fj} & {\Pi }_{ij}^{26} & {\Pi }_{ij}^{27} \\ * & * & - I & {\Pi }_{ij}^{34} & {\mathcal{C}}_{Fj} & 0 & - I \\ * & * & * & - {\widetilde{\beta }}_{i}{\mathcal{P}}_{i1} & - {\widetilde{\beta }}_{i}{\mathcal{P}}_{i2} & 0 & 0 \\ * & * & * & * & - {\widetilde{\beta }}_{i}{\mathcal{P}}_{i3} & 0 & 0 \\ * & * & * & * & * & - {\gamma }^{2}I & 0 \\ * & * & * & * & * & * & - {\gamma }^{2}I \end{matrix}\right\rbrack < 0
+$$
+
+(21)
+
+$$
+\left\lbrack \begin{matrix} {\Omega }^{11} & {\Omega }^{12} & {\mathcal{G}}_{i}^{T} & {\mathcal{R}}_{i} \\ & {\Omega }^{22} & {\mathcal{Q}}_{i}^{T} & {\mathcal{R}}_{i} \\ & * & - {\mu }_{i}{\mathcal{P}}_{j1} & - {\mu }_{i}{\mathcal{P}}_{j2} \\ & * & * & - {\mu }_{i}{\mathcal{P}}_{j3} \end{matrix}\right\rbrack \leq 0 \tag{22}
+$$
+
+$$
+{\tau }_{D} < \frac{{\varepsilon }_{M}\ln {\phi }_{1} + \ln {\mu }_{1}}{\ln {\widetilde{\alpha }}_{1}},{\tau }_{F} > - \frac{{\varepsilon }_{M}\ln {\phi }_{0} + \ln {\mu }_{0}}{\ln {\widetilde{\alpha }}_{0}}, \tag{23}
+$$
+
+then the closed-loop switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ are exponentially asymptotically stable and satisfy the exponential ${H}_{\infty }$ performance index ${\gamma }_{s} = \max \left\{ {\sqrt{{\left( {\theta }_{0}^{{\varepsilon }_{M}}{\mu }_{0}\right) }^{{\xi }_{F}}{\left( {\theta }_{1}^{{\varepsilon }_{M}}{\mu }_{1}\right) }^{{\xi }_{D}}{\theta }_{0}^{{\varepsilon }_{M} - 1}} \cdot \gamma }\right\}$ , where $\widetilde{\alpha } = 1 - \alpha ,\widetilde{\beta } = 1 + \beta ,{\Pi }_{i}^{11} = {\mathcal{P}}_{i1} - {\mathcal{G}}_{i} - {\mathcal{G}}_{i}^{T},$ ${\Pi }_{i}^{12} = {\mathcal{P}}_{i2} - {\mathcal{Q}}_{i} - {\mathcal{R}}_{i},{\Pi }_{i}^{14} = {\mathcal{G}}_{i}{}^{T}{A}_{id} + {\mathcal{B}}_{Fi}{C}_{d},$ ${\Pi }_{i}^{16} = {\mathcal{G}}_{i}{}^{T}{B}_{1i},{\Pi }_{i}^{17} = {\mathcal{G}}_{i}{}^{T}{B}_{2i},{\Pi }_{i}^{22} = {\mathcal{P}}_{i3} - {\mathcal{R}}_{i} - {\mathcal{R}}_{i}{}^{T},$ ${\Pi }_{i}^{24} = {\mathcal{Q}}_{i}{}^{T}{A}_{id} + {\mathcal{B}}_{Fi}{C}_{d},{\Pi }_{i}^{26} = {\mathcal{Q}}_{i}{}^{T}{B}_{1i},{\Pi }_{i}^{27} = {\mathcal{Q}}_{i}{}^{T}{B}_{2i},$ ${\Pi }_{i}^{34} = {\mathcal{D}}_{Fi}{C}_{d},{\Pi }_{ij}^{11} = {\mathcal{P}}_{i1} - {\mathcal{G}}_{j} - {\mathcal{G}}_{j}{}^{T},{\Pi }_{ij}^{12} = {\mathcal{P}}_{i2} - {\mathcal{Q}}_{j} - {\mathcal{R}}_{j},$ ${\Pi }_{ij}^{14} = {\mathcal{G}}_{j}{}^{T}{A}_{id} + {\mathcal{B}}_{Fj}{C}_{d},{\Pi }_{ij}^{16} = {\mathcal{G}}_{j}{}^{T}{B}_{1i},{\Pi }_{ij}^{17} = {\mathcal{G}}_{j}{}^{T}{B}_{2i},$ ${\Pi }_{ij}^{22} = {\mathcal{P}}_{i3} - {\mathcal{R}}_{j} - {\mathcal{R}}_{j}{}^{T},{\Pi }_{ij}^{24} = {\mathcal{Q}}_{j}{}^{T}{A}_{id} + {\mathcal{B}}_{Fj}{C}_{d},$ ${\Pi }_{ij}^{26} = {\mathcal{Q}}_{j}{}^{T}{B}_{1i},{\Pi }_{ij}^{27} = {\mathcal{Q}}_{j}{}^{T}{B}_{2i},{\Pi }_{ij}^{34} = {\mathcal{D}}_{Fj}{C}_{d},$ ${\Omega }^{11} = {\mathcal{P}}_{i1} - {\mu }_{i}\left( {{\mathcal{G}}_{i} + {\mathcal{G}}_{i}{}^{T}}\right) ,{\Omega }^{12} = {\mathcal{P}}_{i2} - {\mu }_{i}{\mathcal{Q}}_{i} - {\mu }_{i}{\mathcal{R}}_{i}{}^{T},{\Omega }^{22} = {\mathcal{P}}_{i3} - {\mu }_{i}\left( {{\mathcal{R}}_{i} + {\mathcal{R}}_{i}{}^{T}}\right) .$
+
+In addition, if there is a solution to (20)-(23), then the filter gain can be obtained
+
+$$
+\left\lbrack \begin{matrix} {\mathcal{A}}_{fi} & {\mathcal{B}}_{fi} \\ {\mathcal{C}}_{fi} & {\mathcal{D}}_{fi} \end{matrix}\right\rbrack = \left\lbrack \begin{matrix} {\mathcal{R}}_{i}{}^{-1} & 0 \\ 0 & I \end{matrix}\right\rbrack \left\lbrack \begin{matrix} {\mathcal{A}}_{Fi} & {\mathcal{B}}_{Fi} \\ {\mathcal{C}}_{Fi} & {\mathcal{D}}_{Fi} \end{matrix}\right\rbrack . \tag{24}
+$$
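Concretely, (24) is a block-diagonal left multiplication by $\operatorname{blockdiag}\left( {{\mathcal{R}}_{i}^{-1},I}\right)$ . A numerical sketch in Python, with hypothetical LMI decision variables standing in for a real solver's output:

```python
import numpy as np

# Hypothetical decision variables from the LMIs (20)-(23), for illustration only
Ri  = np.array([[2.0, 0.0], [0.0, 4.0]])
AFi = np.array([[1.0, 0.0], [0.0, 2.0]])
BFi = np.array([[2.0], [4.0]])
CFi = np.array([[1.0, 1.0]])
DFi = np.array([[0.5]])

# Recover the filter gains via (24)
Ri_inv = np.linalg.inv(Ri)
Afi = Ri_inv @ AFi      # A_fi = R_i^{-1} A_Fi
Bfi = Ri_inv @ BFi      # B_fi = R_i^{-1} B_Fi
Cfi, Dfi = CFi, DFi     # identity block leaves C_Fi, D_Fi unchanged
```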
+
+Proof: Based on the Projection Lemma and the Schur complement lemma, pre- and post-multiplying (8) shows that (8) and (20) are equivalent. Similarly, pre- and post-multiplying (9) shows that (9) and (21) are equivalent. Theorem 2 is thus proved.
+
+For the purpose of fault detection, the residual is obtained from the difference between the measured value and its estimated value. The following residual evaluation function (REF) is designed:
+
+$$
+{\mathcal{J}}_{r}\left( k\right) = \sqrt{\frac{1}{k}\mathop{\sum }\limits_{{s = 1}}^{k}{r}^{T}\left( s\right) r\left( s\right) }. \tag{25}
+$$
+
+The threshold value of (25) is selected as
+
+$$
+{\mathcal{J}}_{th} = \mathop{\sup }\limits_{\substack{{d\left( k\right) \in {l}_{2}} \\ {f\left( k\right) = 0} }}{\mathcal{J}}_{r}\left( k\right) . \tag{26}
+$$
+
+Therefore, the fault detection logical relationship is
+
+$$
+\left\{ \begin{matrix} \begin{Vmatrix}{{\mathcal{J}}_{r}\left( k\right) }\end{Vmatrix} > {\mathcal{J}}_{th} & \text{ Alarm } \\ \begin{Vmatrix}{{\mathcal{J}}_{r}\left( k\right) }\end{Vmatrix} \leq {\mathcal{J}}_{th} & \text{ No-alarm. } \end{matrix}\right. \tag{27}
+$$
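The evaluation-and-decision pipeline of (25)-(27) is straightforward to implement: a running RMS of the residual compared against a fixed threshold. A minimal Python sketch (the residual sequence and threshold below are hypothetical):

```python
import numpy as np

def residual_evaluation(r):
    """REF signal (25): J_r(k) = sqrt( (1/k) * sum_{s=1..k} r(s)^T r(s) )."""
    r = np.asarray(r, dtype=float).reshape(len(r), -1)   # shape (k, dim)
    cum = np.cumsum(np.sum(r * r, axis=1))
    k = np.arange(1, len(cum) + 1)
    return np.sqrt(cum / k)

def alarm(Jr, Jth):
    """Fault decision logic (27): alarm when J_r(k) exceeds the threshold."""
    return Jr > Jth

# Hypothetical residual sequence: small values, then a fault-like jump
r = np.array([[0.1], [0.1], [0.9]])
Jr = residual_evaluation(r)
print(alarm(Jr, Jth=0.3))   # -> [False False  True]
```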
+
+§ IV. SIMULATION
+
+This section demonstrates the effectiveness of the asynchronous FD strategy for a networked UMV under DoS attacks. The matrices $M,N$ and $R$ in system (1) are chosen as in [20]. Let ${\alpha }_{0} = {0.09},{\beta }_{0} = {0.05},{\alpha }_{1} = {0.11},{\beta }_{1} = {0.03}$ , ${\mu }_{0} = {1.4},{\mu }_{1} = {0.45},{\varepsilon }_{M} = 2,\sigma = 1$ and $\gamma = {44}$ . Then, from (11), the MDADT satisfies ${\tau }_{D} < {4.34}$ and ${\tau }_{F} > {6.60}$ . The UMV fault detection filter gain under DoS attacks can be calculated by Theorem 2.
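The quoted MDADT bounds follow directly from (11) with the scalar parameters listed in this section; a quick Python check reproduces them:

```python
import math

# Scalar parameters from the simulation section
alpha0, beta0 = 0.09, 0.05
alpha1, beta1 = 0.11, 0.03
mu0, mu1, epsM = 1.4, 0.45, 2

a0t, a1t = 1 - alpha0, 1 - alpha1       # alpha_i-tilde = 1 - alpha_i
phi0 = (1 + beta0) / a0t                # phi_i = beta_i-tilde / alpha_i-tilde
phi1 = (1 + beta1) / a1t

# MDADT bounds of (11)/(23)
tauD = (epsM * math.log(phi1) + math.log(mu1)) / math.log(a1t)
tauF = -(epsM * math.log(phi0) + math.log(mu0)) / math.log(a0t)
print(round(tauD, 2), round(tauF, 2))   # -> 4.34 6.6
```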
+
+To demonstrate the practicability of the FD filters designed for a networked UMV under DoS attacks, the following simulations are performed. First, the UMV is subjected to thruster faults, external disturbances and DoS attacks. One possible DoS attack sequence is depicted in Fig. 1, where 1 denotes that an attack is occurring and 0 denotes the sleep state with no attack. The DoS attacks lead to asynchronous switching between the filter and the primary system; the resulting switching sequence between the filter and the subsystems is shown in Fig. 2.
+
+
+Fig. 1. DoS attacks sequences.
+
+
+Fig. 2. Switching sequences.
+
+The external disturbance $d\left( k\right)$ is given in the following form
+
+$$
+d\left( k\right) = \left\{ {\begin{array}{l} {d}_{1}\left( k\right) = {12}\sin \left( k\right) \exp \left( {-{0.15k}}\right) \\ {d}_{2}\left( k\right) = {15}\sin \left( {0.73k}\right) ,k \in \left\lbrack {5,{37}}\right\rbrack \\ {d}_{3}\left( k\right) = 9\sin \left( {0.2k}\right) ,k \in \left\lbrack {{11},{45}}\right\rbrack \end{array}.}\right.
+$$
+
+Case 1: DoS attack sequence 1 is used, and the fault signal ${f}^{1}\left( k\right)$ takes the following form
+
+$$
+{f}^{1}\left( k\right) = \left\{ {\begin{array}{l} {f}_{1}\left( k\right) = 2\sin \left( {0.2k}\right) \\ {f}_{2}\left( k\right) = \cos \left( {0.1k}\right) \\ {f}_{3}\left( k\right) = {0.8}\sin \left( {0.15k}\right) \end{array},k \in \left\lbrack {{25},{35}}\right\rbrack .}\right.
+$$
+
+Under the DoS attack sequence and the faults ${f}^{1}\left( k\right)$ , the curves of the residual signal $\parallel r\left( k\right) {\parallel }_{2}$ and the REF signal are depicted in Fig. 3 and Fig. 4, respectively. In the absence of faults, the threshold is chosen according to the maximum value of the REF signal: ${\mathcal{J}}_{th} = {0.215}$ . The fault signal is then detected promptly at $t = {25.11}\mathrm{\;s}$ .
+
+
+Fig. 3. The residual signal $\parallel r\left( k\right) {\parallel }_{2}$ in Case 1.
+
+
+Fig. 4. The REF signal in Case 1.
+
+Case 2: To further verify the sensitivity of the FD filter to faults, a fault with a smaller amplitude than in Case 1 but the same frequencies is selected, and the same DoS attack sequence is used. The fault ${f}^{2}\left( k\right)$ takes the following form
+
+$$
+{f}^{2}\left( k\right) = \left\{ {\begin{array}{l} {f}_{1}\left( k\right) = {0.4}\sin \left( {0.2k}\right) \\ {f}_{2}\left( k\right) = {0.2}\cos \left( {0.1k}\right) \\ {f}_{3}\left( k\right) = {0.16}\sin \left( {0.15k}\right) \end{array},k \in \left\lbrack {{25},{35}}\right\rbrack .}\right.
+$$
+
+Under the DoS attack sequence and the faults ${f}^{2}\left( k\right)$ , the curves of the residual signal $\parallel r\left( k\right) {\parallel }_{2}$ and the REF signal are depicted in Fig. 5 and Fig. 6, respectively. Fig. 6 indicates that the fault detection threshold becomes smaller than in Case 1: ${\mathcal{J}}_{th} = {0.067}$ , and the fault signal is detected promptly at $t = {25.27}\mathrm{\;s}$ . In contrast to Case 1, the residual amplitude and the REF signal are significantly reduced, which shows that the fault amplitude has a non-negligible effect on the system.
+
+
+Fig. 5. The residual signal $\parallel r\left( k\right) {\parallel }_{2}$ in Case 2.
+
+
+Fig. 6. The REF signal in Case 2.
+
+§ V. CONCLUSION
+
+To address the problem that DoS attacks cannot be detected in time, this paper designs exponentially convergent ${H}_{\infty }$ filters based on an asynchronous switched method for UMVs under DoS attacks, which resolves the issue that, in practice, the filters' switching frequently lags behind that of the subsystems. On the basis of the MDADT and the PLF, a criterion on the tolerable MDADT is derived to maintain exponential ${H}_{\infty }$ performance. Sufficient conditions for the existence of the designed FD filter are described by LMIs, and the filter gain and the related MDADT parameters can be derived by solving these LMIs. Finally, the effectiveness of the designed filter is verified by numerical simulation.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/DuY2U9TNuJ/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/DuY2U9TNuJ/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..5793e9b3cd59e845e57a66a55434ff9e6a49fa6b
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/DuY2U9TNuJ/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,97 @@
+# Research on battery SOC estimation method by combining optimization algorithm and multi-model Kalman filtering
+
+Zhi Ming Chen
+
+College of Science
+
+Liaoning University of Technology
+
+Jinzhou, China
+
+chenzhiminglab@163.com
+
+Chang Qi Zhu
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+zhuchangqi_work@163.com
+
+Lei Liu
+
+College of Science
+
+Liaoning University of Technology
+
+Jinzhou, China
+
+liuleill@live.cn
+
+*Abstract*—With the rapid growth of electric vehicles and energy storage systems, accurate state of charge (SOC) estimation has become a critical component of battery management systems (BMS), essential for preventing overcharging and over-discharging, enhancing operational safety, and extending battery life. This paper proposes a novel SOC estimation method based on an enhanced self-correcting (ESC) model incorporating a second-order RC circuit, enabling a more accurate simulation of battery response time and dynamic behavior. To improve model reliability, a genetic algorithm-particle swarm optimization (GA-PSO) approach is employed for parameter identification. Additionally, a multi-model adaptive extended Kalman filter (AEKF) algorithm is introduced to achieve precise SOC estimation. MATLAB simulations using constant current discharge and automotive driving cycle data demonstrate that the proposed method outperforms traditional AEKF algorithms, with faster convergence and higher estimation accuracy, particularly in scenarios with varying initial estimation accuracies. The results highlight the potential of this approach to significantly enhance SOC estimation in BMS, contributing to safer operation and prolonged battery life in electric vehicles and energy storage systems.
+
+Keywords—SOC estimation, Enhanced Self-Correcting model, parameter identification, GA-PSO, multi-model AEKF.
+
+## I. INTRODUCTION
+
+As electric vehicles and energy storage systems continue to develop rapidly, the application of batteries as key energy storage devices has become increasingly widespread, highlighting the growing importance of battery management and control [1]. Within battery management systems (BMS), accurately estimating the state of charge (SOC) is a critical task [2]. Precise SOC estimation not only enables more reliable predictions of vehicle range but also improves battery utilization and helps prevent significant reductions in battery lifespan caused by overcharging or deep discharging [3]. However, the nonlinear characteristics, time-varying behavior, and electrochemical reactions of batteries make it impossible to measure their SOC directly with sensors [4]. Instead, SOC must be estimated using indirect measurements such as voltage, current, and temperature. Common SOC estimation methods include approaches based on open-circuit voltage, coulomb counting, data-driven techniques, and model-based estimation methods [5]. Each of these approaches presents distinct advantages and disadvantages [6].
+
+Among these methods, model-based estimation achieves a reasonable balance between accuracy, real-time performance, and computational cost by integrating the battery equivalent circuit model (ECM) with state estimation algorithms. The ECM is a key component in this approach. Previous studies have advanced SOC estimation using various models and algorithms. Li et al. [7] utilized a second-order RC model with a stochastic gradient algorithm for parameter identification and developed a multi-innovation extended Kalman filter, validated experimentally. Shi et al. [8] employed Bayesian belief networks and adaptive extended Kalman particle filtering, demonstrating enhanced convergence and accuracy.
+
+However, these studies largely overlook the hysteresis effect in battery charging and discharging. Gregory L. Plett [9] addressed this by introducing an Enhanced Self-Correcting (ESC) model that incorporates hysteresis into the ECM. Sk Bittu et al. [10] simulated a first-order RC ESC model with an EKF algorithm for SOC estimation but found that the model struggles with complex polarization dynamics, and the EKF's performance deteriorates with significant measurement errors.
+
+Accurate SOC estimation requires precise circuit modeling and effective algorithms. This study incorporates the hysteresis phenomenon using an ESC model with second-order RC characteristics. The GA-PSO algorithm is applied for precise identification of battery model parameters via an optimized fitness function. Additionally, a multi-model AEKF is developed, integrating an adaptive factor into the EKF to refine the gain matrix, thereby improving the capture of the model's dynamic properties. This multi-model approach reduces estimation errors and enhances the robustness, accuracy, and stability of SOC estimation. The main contributions of this paper are as follows:
+
+1) Battery parameter estimation: An ESC model with second-order RC characteristics is used for accurate characterization, with GA-PSO employed for parameter identification, validated through model testing.
+
+2) Multi-model AEKF algorithm for SOC estimation: A multi-model AEKF algorithm is designed, combining adaptive parameters for process noise with a multi-model approach to improve SOC estimation accuracy.
+
+3) Simulation comparative analysis: SOC estimation is analyzed using constant current discharge and automotive driving cycle scenarios, comparing the multi-model AEKF with the traditional AEKF and demonstrating enhanced convergence and accuracy.
+
+## II. LITHIUM-ION BATTERY SOC ESTIMATION METHOD
+
+This paper presents an ESC model based on a second-order RC equivalent circuit, incorporating the hysteresis phenomenon observed during battery charging and discharging. The model captures the battery's dynamic behavior, static characteristics, and hysteresis effects, as shown in Fig. 1.
+
+
+
+Fig. 1. ESC model with second-order RC network
+
+To identify the unknown parameters in the ESC model, the GA-PSO algorithm, an integration of Genetic Algorithm and Particle Swarm Optimization, is utilized. The process initiates with GA generating an initial population of parameter sets, which are subsequently evaluated by comparing the model's predictions with experimental battery data. GA operations, including selection, crossover, and mutation, are employed to refine these parameters, while PSO dynamically adjusts their search direction. After several iterations, the algorithm converges on the optimal parameter set, facilitating precise SOC estimation.
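+
+As a minimal, hypothetical sketch of this GA-PSO loop (the population size, operator rates, and PSO coefficients below are illustrative choices, and `fitness` stands in for the model-versus-data error being minimized):
+
+```python
+import random
+
+def ga_pso_identify(fitness, bounds, pop_size=20, iters=50, seed=0):
+    """Hybrid GA-PSO search: GA evolves a population of parameter
+    sets while PSO nudges every candidate toward the current best."""
+    rng = random.Random(seed)
+    dim = len(bounds)
+    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
+    vel = [[0.0] * dim for _ in range(pop_size)]
+    best = list(min(pop, key=fitness))
+
+    for _ in range(iters):
+        # GA step: keep the fitter half, breed the rest by crossover + mutation
+        parents = sorted(pop, key=fitness)[: pop_size // 2]
+        children = []
+        for _ in range(pop_size - len(parents)):
+            a, b = rng.sample(parents, 2)
+            w = rng.random()
+            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
+            if rng.random() < 0.1:  # occasional random-reset mutation
+                j = rng.randrange(dim)
+                child[j] = rng.uniform(*bounds[j])
+            children.append(child)
+        pop = parents + children
+        # PSO step: adjust each candidate's search direction toward the best
+        for p, v in zip(pop, vel):
+            for j, (lo, hi) in enumerate(bounds):
+                v[j] = 0.7 * v[j] + 1.5 * rng.random() * (best[j] - p[j])
+                p[j] = min(max(p[j] + v[j], lo), hi)
+        cand = min(pop, key=fitness)
+        if fitness(cand) < fitness(best):
+            best = list(cand)
+    return best
+```
+
+For the battery model, `bounds` would cover quantities such as the RC resistances and capacitances, and `fitness` would compare the simulated terminal voltage against measured data.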
+
+Building on the ESC model and parameter identification, a multi-model AEKF framework is developed to enhance SOC estimation. This framework employs multi-model fusion, integrating the estimates from several models to improve filter performance and robustness. The battery SOC is quantized into discrete sets, with $\mathrm{n}$ AEKF models constructed. The conditional probability of each SOC is calculated using Bayesian rules, and the SOC with the highest probability is selected for each time step. By using conditional probability as the switching rule, the multi-model AEKF adapts to varying operating conditions and improves SOC estimation accuracy and stability. The Bayesian rule used to compute these conditional probabilities is given by the following formula:
+
+$$
+p\left( {{s}_{i} \mid {Y}_{k}}\right) = \frac{p\left( {{y}_{k} \mid {Y}_{k - 1},{s}_{i}}\right) p\left( {{Y}_{k - 1} \mid {s}_{i}}\right) p\left( {s}_{i}\right) }{\mathop{\sum }\limits_{{i = 1}}^{N}p\left( {{y}_{k} \mid {Y}_{k - 1},{s}_{i}}\right) p\left( {{Y}_{k - 1} \mid {s}_{i}}\right) p\left( {s}_{i}\right) } \tag{1}
+$$
+
+where $p\left( {s}_{i}\right)$ denotes the prior probability, reflecting the initial estimate of the state ${s}_{i}$ in the absence of any measurement information. The overall expression gives the posterior probability of each candidate state ${s}_{i}$ given all previous measurements ${Y}_{k - 1}$ and the current measurement ${y}_{k}$ .
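+
+A minimal sketch of the update in Eq. (1): each model's prior is multiplied by its measurement likelihood and the products are renormalized (in the full filter, the likelihood $p\left( {{y}_{k} \mid {Y}_{k - 1},{s}_{i}}\right)$ would come from each AEKF's innovation; here it is simply an input):
+
+```python
+def update_model_probabilities(priors, likelihoods):
+    """Bayes update of Eq. (1): posterior over the n candidate SOC
+    models, given per-model priors and measurement likelihoods."""
+    joint = [l * p for l, p in zip(likelihoods, priors)]
+    total = sum(joint)  # the denominator of Eq. (1)
+    return [j / total for j in joint]
+```
+
+The model with the largest posterior is then selected as the switching rule for the current time step.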
+
+MATLAB simulations were conducted to model constant current discharge and automotive driving cycle discharge scenarios. A comparative experiment was set up between the traditional AEKF and the multi-model AEKF, focusing on evaluating their convergence performance and accuracy under conditions of unstable initial parameters and complex variations in discharge current.
+
+## III. CONCLUSION
+
+This paper focuses on the estimation performance of SOC in lithium-ion batteries. A second-order RC ESC model is considered, and the battery parameters are identified using the GA-PSO algorithm. Additionally, accurate estimation of battery SOC is achieved through the implementation of a multi-model Adaptive Kalman Filter. To validate the effectiveness of the proposed method, a series of simulation comparisons are conducted. The simulation results demonstrate that the proposed multi-model AEKF algorithm exhibits fast convergence and high estimation accuracy in predicting battery SOC, showcasing its superior performance in SOC estimation.
+
+## REFERENCES
+
+[1] J. S. Goud, K. R and B. Singh, "An online method of estimating state of health of a Li-ion battery," IEEE Transactions on Energy Conversion, vol. 36, no. 1, pp. 111-119, Mar. 2021.
+
+[2] X. Fan, W. Zhang, C. Zhang, A. Chen, and F. An, "SOC estimation of Li-ion battery using convolutional neural network with U-Net architecture," Energy, vol. 256, p. 124612, Oct. 2022.
+
+[3] A. Tang, Y. Huang, S. Liu, Q. Yu, W. Shen, and R. Xiong, "A novel lithium-ion battery state of charge estimation method based on the fusion of neural network and equivalent circuit models," Applied Energy, vol. 348, p. 121578, Oct. 2023.
+
+[4] F. Li, W. Zuo, K. Zhou, Q. Li, Y. Huang, and G. Zhang, "State-of-charge estimation of lithium-ion battery based on second order resistor-capacitance circuit-PSO-TCN model," Energy, vol. 289, p.130025, Feb. 2024.
+
+[5] H. Yu, L. Zhang, W. Wang, S. Li, S. Chen, S. Yang, J. Li, and X. Liu, "State of charge estimation method by using a simplified electrochemical model in deep learning framework for lithium-ion batteries," Energy, vol. 278, p. 127846, Sep. 2023.
+
+[6] H. Zhang, J. Xiong, S. Li, L. Sun, and Y. Zhang, "A review on estimation strategies of lithium-ion battery state of charge and health for electric vehicle applications," Journal of Power Sources, vol. 356, pp. 11-26, 2017.
+
+[7] W. Li, Y. Yang, D. Wang, and S. Yin, "The multi-innovation extended Kalman filter algorithm for battery SOC estimation," Ionics, vol. 26, pp. 6145-6156, Dec. 2020.
+
+[8] Q. Shi, Z. Jiang, Z. Wang, and L. He, "State of charge estimation by joint approach with model-based and data-driven algorithm for lithium-ion battery," IEEE Transactions on Instrumentation and Measurement, vol. 71, pp. 1-10, Aug. 2022.
+
+[9] G.L. Plett, Battery management systems, Volume I: Battery modeling, Artech House, ch.2, sec.8, p. 44, 2015.
+
+[10] S. Bittu, S. Halder, S. Kumar, N. Das, S. Bhattacharjee, and M. Ghosh, "Battery SOC Estimation Using Enhanced Self-Correcting Model-Based Extended Kalman Filter," in 2023 7th International Conference on Computer Applications in Electrical Engineering-Recent Advances (CERA), IEEE, 2023, pp. 1-6.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/DuY2U9TNuJ/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/DuY2U9TNuJ/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..475cfb16ce64d95e9e41be1e3d65f1f5505b3e7c
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/DuY2U9TNuJ/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,75 @@
+§ RESEARCH ON BATTERY SOC ESTIMATION METHOD BY COMBINING OPTIMIZATION ALGORITHM AND MULTI-MODEL KALMAN FILTERING
+
+Zhi Ming Chen
+
+College of Science
+
+Liaoning University of Technology
+
+Jinzhou, China
+
+chenzhiminglab@163.com
+
+Chang Qi Zhu
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+zhuchangqi_work@163.com
+
+Lei Liu
+
+College of Science
+
+Liaoning University of Technology
+
+Jinzhou, China
+
+liuleill@live.cn
+
+Abstract—With the rapid growth of electric vehicles and energy storage systems, accurate state of charge (SOC) estimation has become a critical component of battery management systems (BMS), essential for preventing overcharging and over-discharging, enhancing operational safety, and extending battery life. This paper proposes a novel SOC estimation method based on an enhanced self-correcting (ESC) model incorporating a second-order RC circuit, enabling a more accurate simulation of battery response time and dynamic behavior. To improve model reliability, a genetic algorithm-particle swarm optimization (GA-PSO) approach is employed for parameter identification. Additionally, a multi-model adaptive extended Kalman filter (AEKF) algorithm is introduced to achieve precise SOC estimation. MATLAB simulations using constant current discharge and automotive driving cycle data demonstrate that the proposed method outperforms traditional AEKF algorithms, with faster convergence and higher estimation accuracy, particularly in scenarios with varying initial estimation accuracies. The results highlight the potential of this approach to significantly enhance SOC estimation in BMS, contributing to safer operation and prolonged battery life in electric vehicles and energy storage systems.
+
+Keywords—SOC estimation, Enhanced Self-Correcting model, parameter identification, GA-PSO, multi-model AEKF.
+
+§ I. INTRODUCTION
+
+As electric vehicles and energy storage systems continue to develop rapidly, the application of batteries as key energy storage devices has become increasingly widespread, highlighting the growing importance of battery management and control [1]. Within battery management systems (BMS), accurately estimating the state of charge (SOC) is a critical task [2]. Precise SOC estimation not only enables more reliable predictions of vehicle range but also improves battery utilization and helps prevent significant reductions in battery lifespan caused by overcharging or deep discharging [3]. However, the nonlinear characteristics, time-varying behavior, and electrochemical reactions of batteries make it impossible to measure their SOC directly with sensors [4]. Instead, SOC must be estimated using indirect measurements such as voltage, current, and temperature. Common SOC estimation methods include approaches based on open-circuit voltage, coulomb counting, data-driven techniques, and model-based estimation methods [5]. Each of these approaches presents distinct advantages and disadvantages [6].
+
+Among these methods, model-based estimation achieves a reasonable balance between accuracy, real-time performance, and computational cost by integrating the battery equivalent circuit model (ECM) with state estimation algorithms. The ECM is a key component in this approach. Previous studies have advanced SOC estimation using various models and algorithms. Li et al. [7] utilized a second-order RC model with a stochastic gradient algorithm for parameter identification and developed a multi-innovation extended Kalman filter, validated experimentally. Shi et al. [8] employed Bayesian belief networks and adaptive extended Kalman particle filtering, demonstrating enhanced convergence and accuracy.
+
+However, these studies largely overlook the hysteresis effect in battery charging and discharging. Gregory L. Plett [9] addressed this by introducing an Enhanced Self-Correcting (ESC) model that incorporates hysteresis into the ECM. Sk Bittu et al. [10] simulated a first-order RC ESC model with an EKF algorithm for SOC estimation but found that the model struggles with complex polarization dynamics, and the EKF's performance deteriorates with significant measurement errors.
+
+Accurate SOC estimation requires precise circuit modeling and effective algorithms. This study incorporates the hysteresis phenomenon using an ESC model with second-order RC characteristics. The GA-PSO algorithm is applied for precise identification of battery model parameters via an optimized fitness function. Additionally, a multi-model AEKF is developed, integrating an adaptive factor into the EKF to refine the gain matrix, thereby improving the capture of the model's dynamic properties. This multi-model approach reduces estimation errors and enhances the robustness, accuracy, and stability of SOC estimation. The main contributions of this paper are as follows:
+
+1) Battery parameter estimation: An ESC model with second-order RC characteristics is used for accurate characterization, with GA-PSO employed for parameter identification, validated through model testing.
+
+2) Multi-model AEKF algorithm for SOC estimation: A multi-model AEKF algorithm is designed, combining adaptive parameters for process noise with a multi-model approach to improve SOC estimation accuracy.
+
+3) Simulation comparative analysis: SOC estimation is analyzed under constant current discharge and automotive driving cycle scenarios, comparing the multi-model AEKF with the traditional AEKF and demonstrating enhanced convergence and accuracy.
+
+§ II. LITHIUM-ION BATTERY SOC ESTIMATION METHOD
+
+This paper presents an ESC model based on a second-order RC equivalent circuit, incorporating the hysteresis phenomenon observed during battery charging and discharging. The model captures the battery's dynamic behavior, static characteristics, and hysteresis effects, as shown in Fig. 1.
+
+
+Fig. 1. ESC model with second-order RC network
+
+To identify the unknown parameters in the ESC model, the GA-PSO algorithm, an integration of Genetic Algorithm and Particle Swarm Optimization, is utilized. The process initiates with GA generating an initial population of parameter sets, which are subsequently evaluated by comparing the model's predictions with experimental battery data. GA operations, including selection, crossover, and mutation, are employed to refine these parameters, while PSO dynamically adjusts their search direction. After several iterations, the algorithm converges on the optimal parameter set, facilitating precise SOC estimation.
+
+Building on the ESC model and parameter identification, a multi-model AEKF framework is developed to enhance SOC estimation. This framework employs multi-model fusion, integrating the estimates from several models to improve filter performance and robustness. The battery SOC is quantized into discrete sets, with $\mathrm{n}$ AEKF models constructed. The conditional probability of each SOC is calculated using Bayesian rules, and the SOC with the highest probability is selected for each time step. By using conditional probability as the switching rule, the multi-model AEKF adapts to varying operating conditions and improves SOC estimation accuracy and stability. The Bayesian rule used to compute these conditional probabilities is given by the following formula:
+
+$$
+p\left( {{s}_{i} \mid {Y}_{k}}\right) = \frac{p\left( {{y}_{k} \mid {Y}_{k - 1},{s}_{i}}\right) p\left( {{Y}_{k - 1} \mid {s}_{i}}\right) p\left( {s}_{i}\right) }{\mathop{\sum }\limits_{{i = 1}}^{N}p\left( {{y}_{k} \mid {Y}_{k - 1},{s}_{i}}\right) p\left( {{Y}_{k - 1} \mid {s}_{i}}\right) p\left( {s}_{i}\right) } \tag{1}
+$$
+
+where $p\left( {s}_{i}\right)$ denotes the prior probability, reflecting the initial estimate of the state ${s}_{i}$ in the absence of any measurement information. The overall expression gives the posterior probability of each candidate state ${s}_{i}$ given all previous measurements ${Y}_{k - 1}$ and the current measurement ${y}_{k}$ .
+
+MATLAB simulations were conducted to model constant current discharge and automotive driving cycle discharge scenarios. A comparative experiment was set up between the traditional AEKF and the multi-model AEKF, focusing on evaluating their convergence performance and accuracy under conditions of unstable initial parameters and complex variations in discharge current.
+
+§ III. CONCLUSION
+
+This paper focuses on the estimation performance of SOC in lithium-ion batteries. A second-order RC ESC model is considered, and the battery parameters are identified using the GA-PSO algorithm. Additionally, accurate estimation of battery SOC is achieved through the implementation of a multi-model Adaptive Kalman Filter. To validate the effectiveness of the proposed method, a series of simulation comparisons are conducted. The simulation results demonstrate that the proposed multi-model AEKF algorithm exhibits fast convergence and high estimation accuracy in predicting battery SOC, showcasing its superior performance in SOC estimation.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/ED7EDryw3i/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/ED7EDryw3i/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..5a1d09381e7c7e2fdf48a748a82dfa2602a51d3b
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/ED7EDryw3i/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,297 @@
+# Dynamic Target Pursuit by Multi-UAV Under Communication Coverage: ACO-MATD3 Approach
+
+${1}^{\text{st }}$ Zhuang Cao
+
+School of Information and Communication Engineering Hainan University
+
+Haikou, Hainan
+
+hnucz@hainanu.edu.cn
+
+${2}^{\text{nd }}$ Di Wu*
+
+School of Information and Communication Engineering
+
+Hainan University
+
+Haikou, Hainan
+
+hainuicaplab@hainanu.edu.cn
+
+Abstract—This study proposes a new approach for the cooperative pursuit of dynamic targets by multiple unmanned aerial vehicles (UAVs) under communication coverage. This approach combines the ant colony optimization algorithm with the multi-agent twin delayed deep deterministic policy gradient, called ACO-MATD3. The ACO-MATD3 algorithm dynamically adjusts hyperparameters based on varying stages and requirements, greatly enhancing the stability and performance of cooperative multi-UAV pursuit tasks, especially under strong communication coverage. Experimental results demonstrate that the ACO-MATD3 algorithm significantly outperforms other algorithms in terms of mean reward and communication return.
+
+Index Terms—Multi-UAV, Pursuit, Communication coverage, Ant colony optimization algorithm, Multi-agent reinforcement learning
+
+## I. INTRODUCTION
+
+In recent years, multi-unmanned aerial vehicles (UAVs) have found extensive applications in fields like agriculture [1], environmental monitoring [2], and communication [3], [4], due to their flexibility and ease of deployment. As technology progresses, UAVs are tasked with more complex challenges such as pursuing dynamic targets, where UAVs need to consistently pursue and approach a moving target in complex environments through strategic adjustments. This pursuit involves a strategic interaction between the UAVs and the targets, where effective decision-making is vital for success and showcases the UAV's intelligence. Therefore, developing effective pursuit strategies is crucial.
+
+Significant research has been conducted on UAV pursuit using traditional methods. For instance, the study in [5] developed a cooperative pursuit-evasion strategy for UAVs in a complex 3D environment, utilizing a heterogeneous system to enhance spatial perception and decision-making. However, this approach encounters challenges related to scalability, computational demands, and robustness in dynamic environments. In [6], the problem of minimizing the time for a UAV to pursue a moving ground target was addressed by optimizing the pursuit strategy using sensor data. Additionally, a hierarchical game structure was proposed in [7] to enhance the cooperative pursuit-evasion capabilities of UAVs in dynamic environments. Despite these advancements, the high computational complexity of these methods and the necessity to predefine the UAVs' flight paths limit their applicability in unknown environments.
+
+Fortunately, advancements in deep reinforcement learning (DRL) have introduced new methods for addressing UAV pursuit problems. Techniques such as the deep deterministic policy gradient (DDPG) [8] and twin delayed deep deterministic policy gradient (TD3) [9] enable simultaneous learning of value and policy functions, thereby enhancing algorithm efficiency and stability. However, in multi-agent environments, interactions between agents can lead to policy non-convergence when DRL algorithms are applied directly. To address this issue, multi-agent reinforcement learning (MARL) algorithms, including the multi-agent deep deterministic policy gradient (MADDPG) [10] and multi-agent twin delayed deep deterministic policy gradient (MATD3) [11], have been developed; MATD3 builds on MADDPG. These algorithms improve stability and collaboration among agents by employing a centralized training and decentralized execution (CTDE) mechanism [10].
+
+Based on the DRL methods mentioned above, several studies have attempted to solve UAV pursuit tasks. One approach to UAV pursuit-evasion games employs hierarchical maneuvering decision-making with the soft actor-critic algorithm [12] to enhance autonomy and strategic flexibility in complex environments; however, this method must cope with high-dimensional state spaces. Another study [13] proposed a UAV pursuit policy combining DDPG with imitation learning to improve sample exploration efficiency, achieving better performance and faster convergence than the traditional DDPG method. A multi-UAV pursuit-evasion game was also explored in [14], utilizing online motion planning and DRL to enhance UAV interactions in complex environments. However, these studies still do not address the challenge of maintaining communication among UAVs while performing their tasks.
+
+Building on the related research above, we propose an algorithm, called ACO-MATD3, that combines MATD3 with the ant colony optimization (ACO) algorithm to address the multi-UAV cooperative pursuit problem under communication coverage. The algorithm adaptively selects the optimal hyperparameters at different stages of the training process. As a result, the multi-UAV system learns a policy that allows it to pursue dynamic targets in the airspace without prior knowledge, while maintaining strong communication coverage from base stations (BSs). The main contributions of this paper are as follows:
+
+---
+
+This work is partly supported by the "South China Sea Rising Star" Education Platform Foundation of Hainan Province (JYNHXX2023-17G), the Natural Science Foundation of Hainan Province (624MS036), and the Postgraduate Innovation Projects in Hainan Province (Qhys2023-290).
+
+Corresponding author: Di Wu.
+
+---
+
+(1) In contrast to non-learning-based approaches [5], [6], [7], the multi-UAV cooperative pursuit problem under communication coverage is formulated as a Markov game. Each UAV operates as an independent agent while cooperating with others to maximize cumulative rewards and optimize their policies.
+
+(2) Unlike other DRL-based approaches [12], [13], [14], this study investigates the communication connectivity between the UAVs and BSs during pursuit tasks and considers the effect of environmental noise on communication.
+
+(3) Compared with the MATD3 [11] algorithm, the ACO-MATD3 algorithm proposed in this study can dynamically optimize the hyperparameters according to the training stage, reduce the impact of hyperparameters on performance, and improve training efficiency and effectiveness.
+
+The paper is organized as follows: Section II provides the problem description and system modeling. Section III presents the proposed ACO-MATD3 algorithm. Section IV analyzes the experimental results. Section V concludes the paper.
+
+## II. Problem Description and System Modeling
+
+In this section, we describe the multi-UAV pursuit problem under communication coverage. Then the BS antenna model and the path loss model are introduced. Finally, we illustrate the communication coverage model used in this experiment.
+
+## A. Problem Description
+
+
+
+Fig. 1: Communication coverage strength map.
+
+This experiment investigates the multi-UAV pursuit problem under communication coverage, involving multiple UAVs, obstacles, and dynamic targets, as shown in Fig. 1. Their initial positions are randomly generated. The BSs support UAV communication, with the blue shading in Fig. 1 indicating the strength of the communication coverage. During the pursuit of dynamic targets, each UAV must avoid collisions with obstacles and maintain strong communication coverage.
+
+## B. Antenna Model and Path Loss Model
+
+This experiment formulates the antenna model of the BSs following the 3GPP specification [15]. Each BS has the same height ${h}_{BS}$ and is divided into three sectors, each equipped with a vertically placed uniform linear array of 8 elements.
+
+The radiation pattern of each element is determined by combining its horizontal and vertical radiation patterns, defined as follows:
+
+$$
+{AH} = - \min \left\lbrack {{12}{\left( \frac{\phi }{{\phi }_{3dB}}\right) }^{2},{A}_{m}}\right\rbrack \tag{1}
+$$
+
+$$
+{AV} = - \min \left\lbrack {{12}{\left( \frac{\theta - {90}}{{\theta }_{3dB}}\right) }^{2},{A}_{m}}\right\rbrack \tag{2}
+$$
+
+where $\phi$ is the azimuth angle indicating the angle of the antenna in the horizontal plane, and $\theta$ is the elevation angle indicating the angle of the antenna in the vertical plane. Both are in degrees, ${\phi }_{3dB}$ and ${\theta }_{3dB}$ are the half-power beamwidths, ${A}_{m}$ is the element gain threshold.
+
+The total gain of the antenna elements is expressed in dB as:
+
+$$
+{G}_{{ele}_{dB}} = {G}_{\max } + {A}_{ele} = {G}_{\max } + \left\{ {-\min \left\lbrack {-\left( {{AH} + {AV}}\right) ,{A}_{m}}\right\rbrack }\right\}  \tag{3}
+$$
+
+where ${A}_{ele}$ represents the power gain of the antenna element and ${G}_{\max }$ is the maximum directional gain of the antenna element. For ease of computation, we convert ${G}_{{el}{e}_{dB}}$ to the linear scale of ${G}_{ele}$ .
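+
+A small sketch of Eqs. (1)-(3); the 65° half-power beamwidths, the 30 dB floor ${A}_{m}$ , and the 8 dBi peak gain ${G}_{\max }$ are assumed typical 3GPP-style values, not figures taken from this paper:
+
+```python
+def element_gain_db(phi, theta, phi_3db=65.0, theta_3db=65.0,
+                    a_m=30.0, g_max=8.0):
+    """Single-element antenna gain in dB per Eqs. (1)-(3); angles in degrees."""
+    ah = -min(12.0 * (phi / phi_3db) ** 2, a_m)               # Eq. (1)
+    av = -min(12.0 * ((theta - 90.0) / theta_3db) ** 2, a_m)  # Eq. (2)
+    a_ele = -min(-(ah + av), a_m)                             # combined pattern
+    return g_max + a_ele                                      # Eq. (3)
+```
+
+The linear-scale gain ${G}_{ele}$ used later is then `10 ** (element_gain_db(...) / 10)`.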
+
+The combined gain of the antenna array is expressed in dB as:
+
+$$
+G = {10} \times {\log }_{10}{\left| {F}_{\text{ele }} \times AF\right| }^{2} \tag{4}
+$$
+
+where ${F}_{ele}$ is the arithmetic square root of ${G}_{ele}$ , and ${AF}$ is the antenna array factor.
+
+This experiment determines whether the communication link between the UAV and the BS sector is a line-of-sight (LoS) link or a non-line-of-sight (NLoS) link by assessing whether buildings in the environment obscure the link. The path loss of the LoS link from the UAV to sector $m$ is expressed in $\mathrm{{dB}}$ as:
+
+$$
+{h}_{m}^{\mathrm{{LoS}}}\left( t\right) = {28} + {22}{\log }_{10}{d}_{m}\left( t\right) + {20}{\log }_{10}{f}_{c} \tag{5}
+$$
+
+where ${d}_{m}\left( t\right)$ represents the distance between the UAV and sector $m$ , and ${f}_{c}$ denotes the carrier frequency.
+
+The path loss of the NLoS link between the UAV and sector $m$ is given in $\mathrm{{dB}}$ as:
+
+$$
+{h}_{m}^{\mathrm{{NLoS}}}\left( t\right) = - {17.5} + \left( {{46} - 7{\log }_{10}h\left( t\right) }\right) {\log }_{10}{d}_{m}\left( t\right) + {20}{\log }_{10}\left( {{40\pi }{f}_{c}/3}\right) \tag{6}
+$$
+
+where $h\left( t\right)$ is the height of UAV at time $t$ .
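+
+Eqs. (5) and (6) can be evaluated directly; the units assumed here (distance in meters, carrier frequency in GHz) follow common 3GPP conventions and are an assumption, not stated in the text:
+
+```python
+import math
+
+def path_loss_db(d_m, f_c_ghz, h_uav, los=True):
+    """UAV-to-sector path loss in dB: Eq. (5) for LoS, Eq. (6) for NLoS."""
+    if los:
+        return 28.0 + 22.0 * math.log10(d_m) + 20.0 * math.log10(f_c_ghz)
+    return (-17.5 + (46.0 - 7.0 * math.log10(h_uav)) * math.log10(d_m)
+            + 20.0 * math.log10(40.0 * math.pi * f_c_ghz / 3.0))
+```
+
+At equal distance, the NLoS branch yields a markedly higher loss, which is what makes building blockage costly for coverage.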
+
+In addition, the channel small-scale fading is Rician fading in the case of LoS and Rayleigh fading in the case of NLoS.
+
+## C. Communication Model
+
+The baseband equivalent channel between the UAV and the communication BS sector $m$ at time $t$ is denoted by ${H}_{m}\left( t\right)$ , where $1 \leq m \leq M$ , and $M$ represents the total number of communication BS sectors linked with the UAV throughout its flight. The baseband equivalent channel ${H}_{m}\left( t\right)$ is influenced by the BS antenna array gain $G$ , the path loss $\beta$ , and the small-scale fading $h$ . The magnitudes of ${H}_{m}\left( t\right)$ and $\beta$ are related to the position $q\left( t\right)$ of the UAV at time $t$ , while $h$ is a random variable. The signal power received by the UAV from the communication BS sector $m$ at time $t$ can be expressed as:
+
+$$
+{P}_{m}\left( t\right) = \bar{P}{\left| {H}_{m}\left( t\right) \right| }^{2} = \bar{P}{G}_{m}\left( {q\left( t\right) }\right) \beta \left( {q\left( t\right) }\right) h\left( t\right) \tag{7}
+$$
+
+where $\bar{P}$ represents the transmit power of the BS sector $m$ , which is assumed to remain constant. The path loss is calculated using the following equation:
+
+$$
+\beta \left( {q\left( t\right) }\right) = \left\{ \begin{array}{l} P{L}_{LoS},\text{ if LoS link } \\ P{L}_{NLoS},\text{ if NLoS link } \end{array}\right. \tag{8}
+$$
+
+where $P{L}_{LoS}$ and $P{L}_{NLoS}$ are the linear scales of ${h}_{m}^{\mathrm{{LoS}}}\left( t\right)$ and ${h}_{m}^{\mathrm{{NLoS}}}\left( t\right)$ , respectively.
+
+In this experiment, the signal-to-interference-plus-noise ratio (SINR) is used as a crucial criterion for evaluating the communication coverage performance of UAVs. This criterion can be expressed as:
+
+$$
+{SIN}{R}_{t} = \frac{{P}_{m}\left( t\right) }{\mathop{\sum }\limits_{{n \neq m}}{P}_{n}\left( t\right) + {\sigma }^{2}} \tag{9}
+$$
+
+where $n$ represents the BSs not associated with the UAV at time $t$ . In this case, the communication of the UAV is affected not only by interference from all non-associated BS sectors but also by the environmental noise, which impacts the quality of its communication.
+
+To ensure communication coverage while the UAV is airborne, the SINR of the UAV should not drop below a minimum threshold $\alpha$ . That is, the UAV is not under the communication coverage of the BS when $\operatorname{SINR}\left( t\right) < \alpha$ . Each UAV has an independent SINR at time $t$ .
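+
+The coverage test of Eq. (9) reduces to a few lines; all powers are in linear scale, and `alpha` is the linear SINR threshold (hypothetical names for illustration):
+
+```python
+def sinr(p_serving, p_interferers, noise_power):
+    """Eq. (9): serving-sector power over interference plus noise."""
+    return p_serving / (sum(p_interferers) + noise_power)
+
+def under_coverage(p_serving, p_interferers, noise_power, alpha):
+    """The UAV counts as covered iff its SINR meets the threshold alpha."""
+    return sinr(p_serving, p_interferers, noise_power) >= alpha
+```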
+
+## III. MULTI-UAV COOPERATIVE PURSUIT USING ACO-MATD3
+
+In this section, we characterize the UAV's state space, action space, and reward function within a Markov game framework and detail our proposed ACO-MATD3 algorithm.
+
+## A. Markov Game with Multi-UAV
+
+This subsection explores the framework of the Markov game as applied to multi-UAV systems. It details the state and action spaces for UAVs and defines the reward function guiding their interactions in a complex environment.
+
+The state space for each UAV $i$ at time $t$ is defined as ${s}_{it} = \left( {{s}_{ut},{s}_{ot},{SIN}{R}_{t}}\right)$ , where ${s}_{ut} = \left( {{x}_{t},{y}_{t},{v}_{xt},{v}_{yt}}\right)$ is a combination of the position and the speed. Additionally, ${s}_{ot} =$ $\left( {{l}_{uu},{l}_{uo},{l}_{ut}}\right)$ represents the distance from the UAV to other UAVs, obstacles and dynamic targets, respectively. ${SIN}{R}_{t}$ denotes the SINR of the UAV at that moment.
+
+The action space for each UAV is discrete. The action of UAV $i$ is defined as ${V}_{u} = \left( {{V}_{x},{V}_{y}}\right)$ , denoting the velocity components along the $\mathrm{x}$ -axis and $\mathrm{y}$ -axis, respectively. The UAV also changes its speed upon collision.
+
+The reward function for the UAVs in this experiment combines four terms. A distance term ${R}_{\text{dist }}$ encourages the UAV to close in on the dynamic target quickly, and a reward ${R}_{\text{goal }}$ is granted upon successful pursuit. A collision penalty ${R}_{\text{coll }}$ ensures safe flight, while ${R}_{{\text{SINR }}_{t}}$ rewards higher SINR to promote flying in areas with better communication coverage. The reward function can be expressed as:
+
+$$
+r\left( {{s}_{t},{a}_{t}}\right) = {R}_{\text{dist }} + {R}_{\text{goal }} + {R}_{\text{coll }} + {R}_{{\text{SINR }}_{t}} \tag{10}
+$$
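+
+One hypothetical shape for Eq. (10) is sketched below; the component forms and the weights `w_dist`, `w_sinr`, `r_goal`, and `r_coll` are illustrative assumptions rather than this paper's values:
+
+```python
+def pursuit_reward(dist_to_target, caught, collided, sinr_db,
+                   r_goal=10.0, r_coll=-10.0, w_dist=0.1, w_sinr=0.05):
+    """Composite reward of Eq. (10): distance shaping + goal bonus
+    + collision penalty + SINR bonus."""
+    r = -w_dist * dist_to_target       # R_dist: closing in is rewarded
+    if caught:
+        r += r_goal                    # R_goal on successful pursuit
+    if collided:
+        r += r_coll                    # R_coll keeps flight safe
+    return r + w_sinr * sinr_db        # R_SINR favours strong coverage
+```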
+
+## B. Fundamental of the ACO-MATD3 Approach
+
+
+
+Fig. 2: Framework of ACO-MATD3 algorithm.
+
+The ACO algorithm is an optimization algorithm that simulates the foraging behavior of ants. It directs the ant colony towards the optimal path in complex search spaces through pheromone accumulation and evaporation, combined with a probabilistic selection mechanism. This experiment combines the ACO algorithm with the MATD3 algorithm, aiming to dynamically choose the most appropriate learning rate $\alpha$ , discount factor $\gamma$ , and batch size $\mathcal{B}$ based on the current situation at different stages. This integration enhances the adaptability and robustness of the ACO-MATD3 algorithm. The framework of the algorithm is illustrated in Fig. 2.
+
+We define a search space containing three hyperparameters: $\alpha ,\gamma$ , and $\mathcal{B}$ . Each hyperparameter has multiple candidate values; their value ranges are given in detail in the next section. Additionally, we initialize a pheromone matrix.
+
+In the initialization phase, we establish an initial colony of 100 ants. Each ant's hyperparameter configuration is derived by calculating selection probabilities based on the current values in the pheromone matrix. These probabilities then guide the random selection of hyperparameters from the corresponding spaces. The selection probability for each hyperparameter value is calculated as follows:
+
+$$
+p\left( {v}_{i}\right) = \frac{\tau \left( {v}_{i}\right) }{\mathop{\sum }\limits_{{k = 1}}^{n}\tau \left( {v}_{k}\right) } \tag{11}
+$$
+
+where $p\left( {v}_{i}\right)$ represents the probability of selecting the $i$ -th value, $\tau \left( {v}_{i}\right)$ denotes the pheromone level associated with the $i$ -th value, and $n$ is the total number of possible values for the hyperparameter. This approach ensures that the search space is thoroughly explored, enabling the algorithm to evaluate a wide array of potential solutions right from the start.
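+
+Eq. (11) amounts to roulette-wheel selection over the pheromone levels; a brief sketch:
+
+```python
+import random
+
+def select_value(values, pheromone, rng):
+    """Pick one candidate value with probability tau(v_i) / sum tau, per Eq. (11)."""
+    total = sum(pheromone)
+    r, acc = rng.random(), 0.0
+    for v, tau in zip(values, pheromone):
+        acc += tau / total
+        if r <= acc:
+            return v
+    return values[-1]  # guard against floating-point shortfall
+```
+
+Drawing one value per hyperparameter (learning rate, discount factor, batch size) yields one ant's configuration.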
+
+In the multi-UAV system, each UAV uses hyperparameters derived from the current ant's configuration to execute a pursuit task, and the resulting reward values are recorded. If the reward from a particular set of hyperparameters exceeds the highest reward recorded in previous iterations, that configuration is designated as the optimal set for the current phase.
+
+After each iteration, the pheromone levels are adjusted according to the optimal hyperparameter configuration determined during evaluation. The pheromone level of the chosen optimal configuration is increased to reinforce its selection in future iterations, while the pheromone levels of all other candidate values are reduced according to the evaporation rate, preserving diversity in the search and preventing premature convergence. This pheromone update can be described as follows:
+
+$$
+\tau \left( {v}_{i}\right) \leftarrow \tau \left( {v}_{i}\right) + {\Delta \tau } \tag{12}
+$$
+
+$$
+\tau \left( {v}_{i}\right) \leftarrow \tau \left( {v}_{i}\right) \times \left( {1 - \rho }\right) \tag{13}
+$$
+
+where ${\Delta \tau }$ is the increment added to the pheromone level after a successful iteration, and $\rho$ is the evaporation rate, which moderates the decay of pheromone levels to sustain a balance between exploration and exploitation. This dynamic adjustment both intensifies the search around proven parameter settings and keeps new regions of the search space reachable.
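A minimal sketch of the update in Eqs. (12)-(13), following the description above (a deposit on the best configuration, evaporation on all other candidates). The function name and the default values of `delta_tau` and `rho` are illustrative:

```python
def update_pheromones(tau, best_index, delta_tau=1.0, rho=0.1):
    """Eqs. (12)-(13): reinforce the best candidate, evaporate the rest."""
    updated = []
    for i, t in enumerate(tau):
        if i == best_index:
            updated.append(t + delta_tau)    # Eq. (12): pheromone deposit
        else:
            updated.append(t * (1.0 - rho))  # Eq. (13): evaporation
    return updated

tau = [1.0, 1.0, 1.0]
tau = update_pheromones(tau, best_index=1)  # [0.9, 2.0, 0.9]
```

After a few iterations the best-performing configuration accumulates pheromone and dominates the selection probabilities of Eq. (11), while the evaporation term keeps the other candidates selectable.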
+
+In the ACO-MATD3 algorithm, the target Q-value for UAV $i$ is calculated as:
+
+$$
+{y}_{i} = {r}_{i} + \gamma \mathop{\min }\limits_{{j = 1,2}}{Q}_{{w}_{i, j}^{\prime }}\left( {{x}^{\prime },{a}_{1}^{\prime },\ldots ,{a}_{N}^{\prime }}\right) \tag{14}
+$$
+
+where ${r}_{i}$ is the reward received by UAV $i$ , $\gamma$ is the discount factor, ${Q}_{{w}_{i, j}^{\prime }}$ is the $j$ -th target critic network of UAV $i$ , ${x}^{\prime }$ is the joint next state of all UAVs, and ${a}_{1}^{\prime },\ldots ,{a}_{N}^{\prime }$ are the actions of all UAVs at the next time step.
+
+The loss function for updating the critic networks is:
+
+$$
+L\left( {w}_{i}\right) = {\mathbb{E}}_{\left( {x,{a}_{i}, r,{x}^{\prime }}\right) \sim D}\left\lbrack {\left( {y}_{i} - {Q}_{{w}_{i}}\left( x,{a}_{1},\ldots ,{a}_{N}\right) \right) }^{2}\right\rbrack \tag{15}
+$$
+
+where ${w}_{i}$ represents the parameters of the critic network for UAV $i$ , and $D$ is the experience replay buffer.
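The target computation of Eq. (14) and the critic loss of Eq. (15) can be sketched as follows. For brevity the joint action is passed as a single argument `a`; all names and the toy constant critics are illustrative, not the authors' implementation:

```python
def td3_target(r_i, gamma, q1_target, q2_target, x_next, a_next):
    """Eq. (14): clipped double-Q target, taking the minimum of the
    two target critics to curb value overestimation."""
    return r_i + gamma * min(q1_target(x_next, a_next),
                             q2_target(x_next, a_next))

def critic_loss(batch, gamma, q, q1_target, q2_target):
    """Eq. (15): mean squared TD error over a minibatch of
    (x, a, r, x_next, a_next) transitions sampled from D."""
    total = 0.0
    for x, a, r, x_next, a_next in batch:
        y = td3_target(r, gamma, q1_target, q2_target, x_next, a_next)
        total += (y - q(x, a)) ** 2
    return total / len(batch)

# Toy check with constant target critics: y = r + gamma * min(2, 1).
q1t = lambda x, a: 2.0
q2t = lambda x, a: 1.0
y = td3_target(0.5, 0.95, q1t, q2t, None, None)  # 0.5 + 0.95 * 1.0
```

Taking the minimum over the two target critics is the "twin" mechanism that distinguishes MATD3 from MADDPG.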
+
+The policy update rule for the actor networks is given by:
+
+$$
+{\nabla }_{{\theta }_{i}}J\left( {\theta }_{i}\right) = {\mathbb{E}}_{x,{a}_{i} \sim D}\left\lbrack {\left. {\nabla }_{{\theta }_{i}}{\pi }_{{\theta }_{i}}\left( {s}_{i}\right) {\nabla }_{{a}_{i}}{Q}_{{w}_{i}}\left( x,{a}_{1},\ldots ,{a}_{N}\right) \right| }_{{a}_{i} = {\pi }_{{\theta }_{i}}\left( {s}_{i}\right) }\right\rbrack \tag{16}
+$$
+
+where ${\theta }_{i}$ represents the parameters of the actor network for UAV $i$ , ${s}_{i}$ is the state of the $i$ -th UAV, and ${\pi }_{{\theta }_{i}}\left( {s}_{i}\right)$ is the policy of UAV $i$ .
+
+## IV. SIMULATION RESULTS AND DISCUSSION
+
+## A. Parameter Setting
+
+In this experiment, we build a $2\mathrm{\;{km}} \times 2\mathrm{\;{km}}$ urban area scenario with numerous buildings, each with a maximum height ${h}_{bd}$ of 90 meters. The presence of a LoS link is determined by examining the linear connection between the BSs and the UAVs, considering the distribution of buildings. There are seven BSs in this area, totaling $M = {21}$ sectors. The transmit power of each sector is $\bar{P} = {20}\mathrm{\;{dBm}}$ . The half-power beamwidths ${\phi }_{3dB}$ and ${\theta }_{3dB}$ are both ${65}^{ \circ }$ . The SINR interruption threshold is ${\gamma }_{th} = 1\mathrm{\;{dB}}$ . The noise power is ${\sigma }^{2} = 5\mathrm{\;{dBm}}$ .
+
+The hyperparameter search spaces for the ACO-MATD3 algorithm are: learning rate $= \{ {0.005},{0.01},{0.015}\}$ , discount factor $= \{ {0.93},{0.95},{0.97}\}$ , batch size $= \{ {512},{1024}\}$ . The remaining algorithm parameters and the parameters for the DRL algorithms are provided in Table I.
+
+TABLE I: DRL algorithm parameters setting
+
+| Definition | Value | Definition | Value |
+| --- | --- | --- | --- |
+| Max episodes | 100000 | Max step per episode | 25 |
+| Replay buffer capacity | 1000000 | Batch size | 1024 |
+| Learning rate | 0.01 | Gamma | 0.95 |
+| R_coll | -2 | R_goal | 8 |
+
+## B. Result Analysis
+
+The experiment involves 3 UAVs, 3 dynamic targets, and 2 obstacles. To ensure fairness, all parameters were kept constant except for the ACO-MATD3 hyperparameter search space.
+
+
+
+Fig. 3: Mean reward for different algorithms.
+
+In Fig. 3, we compare the mean reward of the ACO-MATD3 algorithm with other algorithms. At the start of training, reward values drop significantly as the algorithms explore the environment to build awareness. It is clear from the figure that after reaching the converged state, the ACO-MATD3 algorithm achieves a higher mean reward than other algorithms. This highlights the effectiveness of the ACO-MATD3 algorithm, which can dynamically select optimal hyperparameters at different stages, enhancing its performance in complex environments with communication coverage challenges.
+
+
+
+Fig. 4: Communication return for different algorithms.
+
+The communication return for the different algorithms is shown in Fig. 4. The final convergence values of the ACO-MATD3 algorithm are higher than those of the other algorithms, indicating that the flight path selected by the ACO-MATD3 algorithm for multi-UAV operations has stronger communication coverage. This further verifies the effectiveness of the ACO-MATD3 algorithm. In contrast, DDPG shows poor convergence performance in communication return because its UAVs operate independently and cannot learn a common policy. This situation highlights the improvement brought by the CTDE framework for multi-UAV cooperation.
+
+
+
+Fig. 5: Mean reward of each UAV in ACO-MATD3 algorithm.
+
+Fig. 5 demonstrates the mean rewards of the three UAVs using the ACO-MATD3 algorithm in this environment. The convergence state aligns with the overall mean reward convergence of the ACO-MATD3 algorithm, demonstrating the superiority of this algorithm with the CTDE mechanism in coordinating the decisions of each UAV. This indicates that the ACO-MATD3 algorithm effectively optimizes both overall performance and individual UAV policies.
+
+## V. CONCLUSION
+
+In this study, we presented the ACO-MATD3 algorithm to address multi-UAV pursuit of dynamic targets under communication coverage. The algorithm dynamically adjusts hyperparameters at different training stages to enhance performance and stability. Experimental results show that ACO-MATD3 outperforms the baseline algorithms in mean reward and communication return, demonstrating the gain in task efficiency achieved by dynamically adjusting hyperparameters. Future research will explore how to safely conduct multi-UAV pursuit missions in more complex environments, especially those with dynamic obstacles.
+
+## REFERENCES
+
+[1] M. F. F. Rahman, S. Fan, Y. Zhang, and L. Chen, "A comparative study on application of unmanned aerial vehicle systems in agriculture," Agriculture, vol. 11, no. 1, p. 22, 2021.
+
+[2] R. Sharma and R. Arya, "UAV based long range environment monitoring system with industry 5.0 perspectives for smart city infrastructure," Computers & Industrial Engineering, vol. 168, p. 108066, 2022.
+
+[3] Y. Zeng, X. Xu, S. Jin, and R. Zhang, "Simultaneous navigation and radio mapping for cellular-connected UAV with deep reinforcement learning," IEEE Transactions on Wireless Communications, vol. 20, no. 7, pp. 4205-4220, 2021.
+
+[4] X. Zhou, S. Yan, J. Hu, J. Sun, J. Li, and F. Shu, "Joint optimization of a UAV's trajectory and transmit power for covert communications," IEEE Transactions on Signal Processing, vol. 67, no. 16, pp. 4276-4290, 2019.
+
+[5] X. Liang, H. Wang, and H. Luo, "Collaborative pursuit-evasion strategy of UAV/UGV heterogeneous system in complex three-dimensional polygonal environment," Complexity, vol. 2020, no. 1, p. 7498740, 2020.
+
+[6] K. Krishnamoorthy, D. Casbeer, and M. Pachter, "Minimum time UAV pursuit of a moving ground target using partial information," in 2015 International Conference on Unmanned Aircraft Systems (ICUAS). IEEE, 2015, pp. 204-208.
+
+[7] A. Alexopoulos, T. Schmidt, and E. Badreddin, "Cooperative pursue in pursuit-evasion games with unmanned aerial vehicles," in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2015, pp. 4538-4543.
+
+[8] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, "Continuous control with deep reinforcement learning," in 4th International Conference on Learning Representations (ICLR), 2016.
+
+[9] S. Fujimoto, H. Hoof, and D. Meger, "Addressing function approximation error in actor-critic methods," in International Conference on Machine Learning (ICML), 2018, pp. 1587-1596.
+
+[10] R. Lowe, Y. I. Wu, A. Tamar, J. Harb, O. Pieter Abbeel, and I. Mordatch, "Multi-agent actor-critic for mixed cooperative-competitive environments," Advances in neural information processing systems, vol. 30, 2017.
+
+[11] F. Zhang, J. Li, and Z. Li, "A TD3-based multi-agent deep reinforcement learning method in mixed cooperation-competition environment," Neurocomputing, vol. 411, pp. 206-215, 2020.
+
+[12] B. Li, H. Zhang, P. He, G. Wang, K. Yue, and E. Neretin, "Hierarchical maneuver decision method based on PG-Option for UAV pursuit-evasion game," Drones, vol. 7, no. 7, p. 449, 2023.
+
+[13] X. Fu, J. Zhu, Z. Wei, H. Wang, and S. Li, "A UAV pursuit-evasion strategy based on UAV and imitation learning," International Journal of Aerospace Engineering, vol. 2022, no. 1, p. 3139610, 2022.
+
+[14] R. Zhang, Q. Zong, X. Zhang, L. Dou, and B. Tian, "Game of drones: Multi-UAV pursuit-evasion game with online motion planning by deep reinforcement learning," IEEE Transactions on Neural Networks and Learning Systems, vol. 34, no. 10, pp. 7900-7909, 2022.
+
+[15] J. Cao, M. Ma, H. Li, R. Ma, Y. Sun, P. Yu, and L. Xiong, "A survey on security aspects for 3GPP 5G networks," IEEE Communications Surveys & Tutorials, vol. 22, no. 1, pp. 170-195, 2019.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/ED7EDryw3i/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/ED7EDryw3i/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..6a03a8d7f0f30f3aa67c13bc43966181049a45ae
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/ED7EDryw3i/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,277 @@
+§ DYNAMIC TARGET PURSUIT BY MULTI-UAV UNDER COMMUNICATION COVERAGE: ACO-MATD3 APPROACH
+
+${1}^{\text{ st }}$ Zhuang Cao
+
+School of Information and Communication Engineering Hainan University
+
+Haikou, Hainan
+
+hnucz@hainanu.edu.cn
+
+${2}^{\text{ nd }}$ Di Wu*
+
+School of Information and Communication Engineering
+
+Hainan University
+
+Haikou, Hainan
+
+hainuicaplab@hainanu.edu.cn
+
+Abstract-This study proposes a new approach for cooperative pursuit of dynamic targets under communication coverage by multiple unmanned aerial vehicles (UAVs). The approach, called ACO-MATD3, combines the ant colony optimization algorithm with the multi-agent twin delayed deep deterministic policy gradient. The ACO-MATD3 algorithm dynamically adjusts hyperparameters based on varying stages and requirements, greatly enhancing the stability and performance of cooperative multi-UAV pursuit tasks, especially under strong communication coverage. Experimental results demonstrate that the ACO-MATD3 algorithm significantly outperforms other algorithms in terms of mean reward and communication return.
+
+Index Terms-Multi-UAV, Pursuit, Communication coverage, Ant colony optimization algorithm, Multi-agent reinforcement learning
+
+§ I. INTRODUCTION
+
+In recent years, multi-unmanned aerial vehicles (UAVs) have found extensive applications in fields like agriculture [1], environmental monitoring [2], and communication [3], [4], due to their flexibility and ease of deployment. As technology progresses, UAVs are tasked with more complex challenges such as pursuing dynamic targets, where UAVs need to consistently pursue and approach a moving target in complex environments through strategic adjustments. This pursuit involves a strategic interaction between the UAVs and the targets, where effective decision-making is vital for success and showcases the UAV's intelligence. Therefore, developing effective pursuit strategies is crucial.
+
+Significant research has been conducted on the pursuit of UAVs using traditional methods. For instance, the study in [5] developed a cooperative pursuit-evasion strategy for UAVs in a complex 3D environment, utilizing a heterogeneous system to enhance spatial perception and decision-making. However, this approach encounters challenges related to scalability, computational demands, and robustness in dynamic environments. In [6], the time for a UAV to pursue a moving ground target was minimized by optimizing the pursuit strategy using sensor data. Additionally, a hierarchical game structure was proposed in [7] to enhance the cooperative pursuit-evasion capabilities of UAVs in dynamic environments. Despite these advancements, the high computational complexity of these methods and the necessity to predefine the UAVs' flight paths limit their applicability in unknown environments.
+
+Fortunately, advancements in deep reinforcement learning (DRL) have introduced new methods for addressing UAV pursuit problems. Techniques such as the deep deterministic policy gradient (DDPG) [8] and twin delayed deep deterministic policy gradient (TD3) [9] enable simultaneous learning of value and policy functions, thereby enhancing algorithm efficiency and stability. However, in multi-agent environments, interactions between agents can lead to policy non-convergence when single-agent DRL algorithms are applied directly. To address this issue, multi-agent reinforcement learning (MARL) algorithms, including the multi-agent deep deterministic policy gradient (MADDPG) [10] and multi-agent twin delayed deep deterministic policy gradient (MATD3) [11], have been developed; MATD3 improves on MADDPG by adopting the twin-critic and delayed-update mechanisms of TD3. These algorithms improve stability and collaboration among agents by employing a centralized training and decentralized execution (CTDE) mechanism [10].
+
+Based on the DRL methods mentioned above, several studies have attempted to utilize DRL to solve UAV pursuit tasks. An approach proposed for UAV pursuit-evasion games utilizes hierarchical maneuvering decision-making with the soft actor-critic algorithm [12] to enhance autonomy and strategic flexibility in complex environments. However, this method must contend with high-dimensional state spaces. Another study [13] proposed a UAV pursuit policy combining DDPG with imitation learning to improve sample exploration efficiency, resulting in better performance and faster convergence than the traditional DDPG method. A multi-UAV pursuit-evasion game was also explored in [14], utilizing online motion planning and DRL to enhance UAV interactions in complex environments. However, these studies still do not address the challenge of maintaining communication among UAVs while performing their tasks.
+
+Based on the above related research, we propose an algorithm, called ACO-MATD3, that combines MATD3 and the ant colony optimization (ACO) algorithm to address the multi-UAV cooperative pursuit problem under communication coverage. The algorithm adaptively selects the optimal hyperparameters at different stages of the training process. As a result, the multi-UAV system learns a policy that allows it to pursue dynamic targets in the airspace without prior knowledge, while maintaining strong communication coverage from base stations (BSs). The main contributions of this paper are as follows:
+
+This work is partly distributed under the "South China Sea Rising Star" Education Platform Foundation of Hainan Province (JYNHXX2023-17G), the Natural Science Foundation of Hainan Province (624MS036), the Postgraduate Innovation Projects in Hainan Province (Qhys2023-290).
+
+Corresponding author: Di Wu.
+
+(1) In contrast to non-learning-based approaches [5], [6], [7], the multi-UAV cooperative pursuit problem under communication coverage is formulated as a Markov game. Each UAV operates as an independent agent while cooperating with others to maximize cumulative rewards and optimize their policies.
+
+(2) Unlike other DRL-based approaches [12], [13], [14], this study investigates the communication connectivity between the UAVs and BSs during pursuit tasks and considers the effect of environmental noise on communication.
+
+(3) Compared with the MATD3 [11] algorithm, the ACO-MATD3 algorithm proposed in this study can dynamically optimize the hyperparameters according to the training stage, reduce the impact of hyperparameters on performance, and improve training efficiency and effectiveness.
+
+The paper is organized as follows: Section II provides the problem description and system modeling. Section III presents the proposed ACO-MATD3 algorithm. Section IV analyzes the experimental results. Section V concludes the paper.
+
+§ II. PROBLEM DESCRIPTION AND SYSTEM MODELING
+
+In this section, we describe the multi-UAV pursuit problem under communication coverage. Then the BS antenna model and the path loss model are introduced. Finally, we illustrate the communication coverage model used in this experiment.
+
+§ A. PROBLEM DESCRIPTION
+
+
+Fig. 1: Communication coverage strength map.
+
+This experiment investigates the multi-UAV pursuit problem under communication coverage, consisting of multi-UAV, obstacles and dynamic targets, as shown in Fig. 1. Their initial positions are randomly generated. The BSs support UAV communication, with the blue shading in Fig. 1 indicating the strength of the communication coverage. During the pursuit of dynamic targets, each UAV must avoid collisions with obstacles and maintain strong communication coverage.
+
+§ B. ANTENNA MODEL AND PATH LOSS MODEL
+
+This experiment formulates the antenna model of the BSs according to the 3GPP [15] specification. Each BS has the same height ${h}_{BS}$ and is divided into three sectors, each equipped with a vertically placed uniform linear array of 8 elements.
+
+The radiation pattern of each element is determined by combining its horizontal and vertical radiation patterns, defined as follows:
+
+$$
+{AH} = - \min \left\lbrack {{12}{\left( \frac{\phi }{{\phi }_{3dB}}\right) }^{2},{A}_{m}}\right\rbrack \tag{1}
+$$
+
+$$
+{AV} = - \min \left\lbrack {{12}{\left( \frac{\theta - {90}}{{\theta }_{3dB}}\right) }^{2},{A}_{m}}\right\rbrack \tag{2}
+$$
+
+where $\phi$ is the azimuth angle of the antenna in the horizontal plane and $\theta$ is the elevation angle in the vertical plane, both in degrees; ${\phi }_{3dB}$ and ${\theta }_{3dB}$ are the half-power beamwidths, and ${A}_{m}$ is the element gain threshold.
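Eqs. (1)-(3) can be sketched as a single Python function. The values assumed here for $A_m$ (30 dB gain floor) and $G_{\max}$ (8 dBi peak gain) are placeholders in the style of 3GPP element patterns; the paper does not state them at this point:

```python
def element_pattern_dB(phi, theta, phi_3dB=65.0, theta_3dB=65.0,
                       A_m=30.0, G_max=8.0):
    """Eqs. (1)-(3): total element gain in dB at azimuth phi and
    elevation theta (both in degrees). A_m = 30 dB and G_max = 8 dBi
    are assumed placeholder values, not taken from the paper."""
    AH = -min(12.0 * (phi / phi_3dB) ** 2, A_m)               # Eq. (1)
    AV = -min(12.0 * ((theta - 90.0) / theta_3dB) ** 2, A_m)  # Eq. (2)
    A_ele = -min(-(AH + AV), A_m)    # combined element pattern
    return G_max + A_ele             # Eq. (3), still in dB

# At boresight (phi = 0, theta = 90) there is no pattern loss,
# so the element gain equals G_max.
peak = element_pattern_dB(0.0, 90.0)  # 8.0
```

Note how the inner clipping in Eqs. (1)-(2) and the outer clipping in Eq. (3) both bound the total attenuation by $A_m$.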
+
+The total gain of the antenna elements is expressed in dB as:
+
+$$
+{G}_{{el}{e}_{dB}} = {G}_{max} + {A}_{ele} = {G}_{\max } + \left\{ {-\min \left\lbrack {-\left( {{AH} + {AV}}\right) ,{A}_{m}}\right\rbrack }\right\} \tag{3}
+$$
+
+where ${A}_{ele}$ represents the power gain of the antenna element and ${G}_{\max }$ is the maximum directional gain of the antenna element. For ease of computation, we convert ${G}_{{el}{e}_{dB}}$ to the linear scale of ${G}_{ele}$ .
+
+The combined gain of the antenna array is expressed in dB as:
+
+$$
+G = {10} \times {\log }_{10}{\left| {F}_{\text{ ele }} \times AF\right| }^{2} \tag{4}
+$$
+
+where ${F}_{ele}$ is the arithmetic square root of ${G}_{ele}$ , and ${AF}$ is the antenna array factor.
+
+This experiment determines whether the communication link between the UAV and the BS sector is a line-of-sight (LoS) link or a non-line-of-sight (NLoS) link by assessing whether buildings in the environment obscure the communication link. The path loss of the LoS link from the UAV to sector $m$ is expressed in $\mathrm{{dB}}$ as:
+
+$$
+{h}_{m}^{\mathrm{{LoS}}}\left( t\right) = {28} + {22}{\log }_{10}{d}_{m}\left( t\right) + {20}{\log }_{10}{f}_{c} \tag{5}
+$$
+
+where ${d}_{m}\left( t\right)$ represents the distance between the UAV and sector $m$ , and ${f}_{c}$ denotes the carrier frequency.
+
+The path loss of the NLoS link between the UAV and sector $m$ is given in $\mathrm{{dB}}$ as:
+
+$$
+{h}_{m}^{\mathrm{{NLoS}}}\left( t\right) = - {17.5} + \left( {{46} - 7{\log }_{10}h\left( t\right) }\right) {\log }_{10}{d}_{m}\left( t\right) + {20}{\log }_{10}\left( {{40\pi }{f}_{c}/3}\right) \tag{6}
+$$
+
+where $h\left( t\right)$ is the height of UAV at time $t$ .
+
+In addition, the channel small-scale fading is Rician fading in the case of LoS and Rayleigh fading in the case of NLoS.
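The two path-loss formulas in Eqs. (5)-(6) can be sketched as one function. The unit conventions assumed here (distance and height in meters, carrier frequency in GHz) follow the usual 3GPP formulas and are not stated explicitly in the text:

```python
import math

def path_loss_dB(d_m, fc_GHz, h_uav_m, los):
    """Path loss between a UAV and a BS sector in dB:
    Eq. (5) for LoS, Eq. (6) for NLoS. Units (meters, GHz) are an
    assumption based on common 3GPP conventions; illustrative only."""
    if los:  # Eq. (5)
        return 28.0 + 22.0 * math.log10(d_m) + 20.0 * math.log10(fc_GHz)
    # Eq. (6)
    return (-17.5
            + (46.0 - 7.0 * math.log10(h_uav_m)) * math.log10(d_m)
            + 20.0 * math.log10(40.0 * math.pi * fc_GHz / 3.0))

# At d = 100 m, fc = 2 GHz: LoS loss = 28 + 44 + 20*log10(2) ~ 78 dB,
# and the NLoS loss at the same geometry is larger, as expected.
pl_los = path_loss_dB(100.0, 2.0, 100.0, los=True)
pl_nlos = path_loss_dB(100.0, 2.0, 100.0, los=False)
```

The LoS/NLoS branch here plays the role of the piecewise selection applied later in the communication model.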
+
+§ C. COMMUNICATION MODEL
+
+The baseband equivalent channel between the UAV and the communication BS sector $m$ at time $t$ is denoted by ${H}_{m}\left( t\right)$ , where $1 \leq m \leq M$ , and $M$ represents the total number of communication BS sectors linked with the UAV throughout its flight. The baseband equivalent channel ${H}_{m}\left( t\right)$ is influenced by the BS antenna array gain $G$ , the path loss $\beta$ , and the small-scale fading $h$ . The magnitudes of ${H}_{m}\left( t\right)$ and $\beta$ are related to the position $q\left( t\right)$ of the UAV at time $t$ , while $h$ is a random variable. The signal power received by the UAV from the communication BS sector $m$ at time $t$ can be expressed as:
+
+$$
+{P}_{m}\left( t\right) = \bar{P}{\left| {H}_{m}\left( t\right) \right| }^{2} = \bar{P}{G}_{m}\left( {q\left( t\right) }\right) \beta \left( {q\left( t\right) }\right) h\left( t\right) \tag{7}
+$$
+
+where $\bar{P}$ represents the transmit power of the BS sector $m$ , which is assumed to remain constant. The path loss is calculated using the following equation:
+
+$$
+\beta \left( {q\left( t\right) }\right) = \left\{ \begin{array}{l} P{L}_{LoS},\text{ if LoS link } \\ P{L}_{NLoS},\text{ if NLoS link } \end{array}\right. \tag{8}
+$$
+
+where $P{L}_{LoS}$ and $P{L}_{NLoS}$ are the linear scales of ${h}_{m}^{\mathrm{{LoS}}}\left( t\right)$ and ${h}_{m}^{\mathrm{{NLoS}}}\left( t\right)$ , respectively.
+
+In this experiment, the signal to interference plus noise ratio (SINR) is used as a crucial criterion for evaluating the communication coverage performance of UAVs. This criterion can be expressed as:
+
+$$
+{SIN}{R}_{t} = \frac{{P}_{m}\left( t\right) }{\mathop{\sum }\limits_{{n \neq m}}{P}_{n}\left( t\right) + {\sigma }^{2}} \tag{9}
+$$
+
+where $n$ indexes the BS sectors not associated with the UAV at time $t$ . In this case, the communication of the UAV is affected not only by interference from all non-associated BS sectors but also by the environmental noise, which impacts the quality of its communication.
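A minimal sketch of Eq. (9) on the linear scale, with the received powers of Eq. (7) supplied as a precomputed list; the function name and the toy numbers are illustrative:

```python
def sinr(p_received, serving_idx, noise_power):
    """Eq. (9): SINR of a UAV given the linear-scale received powers
    p_received[m] from every sector (Eq. (7)); all sectors other than
    the serving one contribute interference."""
    signal = p_received[serving_idx]
    interference = sum(p for i, p in enumerate(p_received)
                       if i != serving_idx)
    return signal / (interference + noise_power)

# Serving sector delivers 1.0, two interferers deliver 0.2 and 0.1,
# noise power 0.05: SINR = 1.0 / 0.35.
ratio = sinr([1.0, 0.2, 0.1], serving_idx=0, noise_power=0.05)
```

Comparing this value against the coverage threshold then decides whether the UAV counts as covered at time $t$.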
+
+To ensure communication coverage while the UAV is airborne, the SINR of the UAV should not drop below the minimum threshold ${\gamma }_{th}$ ; that is, the UAV is outside the communication coverage of the BSs when $\operatorname{SINR}\left( t\right) < {\gamma }_{th}$ . Each UAV has an independent SINR at time $t$ .
+
+§ III. MULTI-UAV COOPERATIVE PURSUIT USING ACO-MATD3
+
+In this subsection, we characterize the UAV's state space, action space, and reward function within a Markov game framework and detail our proposed ACO-MATD3 algorithm.
+
+§ A. MARKOV GAME WITH MULTI-UAV
+
+This subsection explores the framework of the Markov game as applied to multi-UAV systems. It details the state and action spaces for UAVs and defines the reward function guiding their interactions in a complex environment.
+
+The state space for each UAV $i$ at time $t$ is defined as ${s}_{it} = \left( {{s}_{ut},{s}_{ot},{SIN}{R}_{t}}\right)$ , where ${s}_{ut} = \left( {{x}_{t},{y}_{t},{v}_{xt},{v}_{yt}}\right)$ is a combination of the position and the speed. Additionally, ${s}_{ot} =$ $\left( {{l}_{uu},{l}_{uo},{l}_{ut}}\right)$ represents the distance from the UAV to other UAVs, obstacles and dynamic targets, respectively. ${SIN}{R}_{t}$ denotes the SINR of the UAV at that moment.
+
+The action space of each UAV is discrete. The action of UAV $i$ is defined as ${V}_{u} = \left( {{V}_{x},{V}_{y}}\right)$ , the velocity components along the $\mathrm{x}$ -axis and $\mathrm{y}$ -axis, respectively. The UAV also changes its speed when a collision occurs.
+
+The reward function for the UAVs in this experiment has four components. A distance term ${R}_{\text{ dist }}$ encourages the UAV to close in on the dynamic target quickly, and a bonus ${R}_{\text{ goal }}$ is granted upon successful pursuit. Collisions are penalized through ${R}_{\text{ coll }}$ to ensure safe flight, and a higher SINR is rewarded through ${R}_{{\text{ SINR }}_{t}}$ to promote flying in areas with better communication coverage. The reward function can be expressed as:
+
+$$
+r\left( {{s}_{t},{a}_{t}}\right) = {R}_{\text{ dist }} + {R}_{\text{ goal }} + {R}_{\text{ coll }} + {R}_{{\text{ SINR }}_{t}} \tag{10}
+$$
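An illustrative composition of Eq. (10). Only $R_{goal} = 8$ and $R_{coll} = -2$ come from Table I; the shaping coefficients `k_dist` and `k_sinr`, and the SINR threshold handling (1 dB is roughly 1.26 on the linear scale), are hypothetical assumptions:

```python
def step_reward(dist_to_target, reached_goal, collided, sinr_linear,
                sinr_th=1.26, R_goal=8.0, R_coll=-2.0,
                k_dist=0.1, k_sinr=0.05):
    """Eq. (10): per-step reward. R_goal and R_coll match Table I;
    k_dist, k_sinr and the SINR handling are illustrative only."""
    R_dist = -k_dist * dist_to_target          # shaping: closer is better
    R_g = R_goal if reached_goal else 0.0      # successful pursuit bonus
    R_c = R_coll if collided else 0.0          # collision penalty
    R_sinr = k_sinr * sinr_linear if sinr_linear >= sinr_th else -k_sinr
    return R_dist + R_g + R_c + R_sinr
```

Summing the four terms as in Eq. (10) lets the designer trade off pursuit speed, safety, and communication coverage by tuning the coefficients.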
+
+§ B. FUNDAMENTALS OF THE ACO-MATD3 APPROACH
+
+
+Fig. 2: Framework of ACO-MATD3 algorithm.
+
+The ACO algorithm is an optimization algorithm that simulates the foraging behavior of ants. It directs the ant colony towards the optimal path in complex search spaces through pheromone accumulation and evaporation, combined with a probabilistic selection mechanism. This experiment combines the ACO algorithm with the MATD3 algorithm, aiming to dynamically choose the most appropriate learning rate $\alpha$ , discount factor $\gamma$ , and batch size $\mathcal{B}$ based on the current situation at different stages. This integration enhances the adaptability and robustness of the ACO-MATD3 algorithm. The framework of the algorithm is illustrated in Fig. 2.
+
+We define a search space containing three hyperparameters: $\alpha ,\gamma$ , and $\mathcal{B}$ . Each hyperparameter has multiple candidate values; their ranges are given in detail in the next section. Additionally, we initialize a pheromone matrix that stores a pheromone level for each candidate value.
+
+In the initialization phase, we establish an initial colony of 100 ants. Each ant's hyperparameter configuration is derived by calculating selection probabilities based on the current values in the pheromone matrix. These probabilities then guide the random selection of hyperparameters from the corresponding spaces. The selection probability for each hyperparameter value is calculated as follows:
+
+$$
+p\left( {v}_{i}\right) = \frac{\tau \left( {v}_{i}\right) }{\mathop{\sum }\limits_{{k = 1}}^{n}\tau \left( {v}_{k}\right) } \tag{11}
+$$
+
+where $p\left( {v}_{i}\right)$ represents the probability of selecting the $i$ -th value, $\tau \left( {v}_{i}\right)$ denotes the pheromone level associated with the $i$ -th value, and $n$ is the total number of possible values for the hyperparameter. This approach ensures that the search space is thoroughly explored, enabling the algorithm to evaluate a wide array of potential solutions right from the start.
+
+In the multi-UAV system, each UAV uses hyperparameters derived from the current ant's configuration to execute a pursuit task, and the resulting reward values are recorded. If the reward from a particular set of hyperparameters exceeds the highest reward recorded in previous iterations, that configuration is designated as the optimal set for the current phase.
+
+After each iteration, the pheromone levels are adjusted according to the optimal hyperparameter configuration determined during evaluation. The pheromone level of the chosen optimal configuration is increased to reinforce its selection in future iterations, while the pheromone levels of all other candidate values are reduced according to the evaporation rate, preserving diversity in the search and preventing premature convergence. This pheromone update can be described as follows:
+
+$$
+\tau \left( {v}_{i}\right) \leftarrow \tau \left( {v}_{i}\right) + {\Delta \tau } \tag{12}
+$$
+
+$$
+\tau \left( {v}_{i}\right) \leftarrow \tau \left( {v}_{i}\right) \times \left( {1 - \rho }\right) \tag{13}
+$$
+
+where ${\Delta \tau }$ is the increment added to the pheromone level after a successful iteration, and $\rho$ is the evaporation rate, which moderates the decay of pheromone levels to sustain a balance between exploration and exploitation. This dynamic adjustment both intensifies the search around proven parameter settings and keeps new regions of the search space reachable.
+
+In the ACO-MATD3 algorithm, the target Q-value for UAV $i$ is calculated as:
+
+$$
+{y}_{i} = {r}_{i} + \gamma \mathop{\min }\limits_{{j = 1,2}}{Q}_{{w}_{i,j}^{\prime }}\left( {{x}^{\prime },{a}_{1}^{\prime },\ldots ,{a}_{N}^{\prime }}\right) \tag{14}
+$$
+
+where ${r}_{i}$ is the reward received by UAV $i$ , $\gamma$ is the discount factor, ${Q}_{{w}_{i,j}^{\prime }}$ is the $j$ -th target critic network of UAV $i$ , ${x}^{\prime }$ is the joint next state of all UAVs, and ${a}_{1}^{\prime },\ldots ,{a}_{N}^{\prime }$ are the actions of all UAVs at the next time step.
+
+The loss function for updating the critic networks is:
+
+$$
+L\left( {w}_{i}\right) = {\mathbb{E}}_{\left( {x,{a}_{i},r,{x}^{\prime }}\right) \sim D}\left\lbrack {\left( {y}_{i} - {Q}_{{w}_{i}}\left( x,{a}_{1},\ldots ,{a}_{N}\right) \right) }^{2}\right\rbrack \tag{15}
+$$
+
+where ${w}_{i}$ represents the parameters of the critic network for UAV $i$ , and $D$ is the experience replay buffer.
+
+The policy update rule for the actor networks is given by:
+
+$$
+{\nabla }_{{\theta }_{i}}J\left( {\theta }_{i}\right) = {\mathbb{E}}_{x,{a}_{i} \sim D}\left\lbrack {\left. {\nabla }_{{\theta }_{i}}{\pi }_{{\theta }_{i}}\left( {s}_{i}\right) {\nabla }_{{a}_{i}}{Q}_{{w}_{i}}\left( x,{a}_{1},\ldots ,{a}_{N}\right) \right| }_{{a}_{i} = {\pi }_{{\theta }_{i}}\left( {s}_{i}\right) }\right\rbrack \tag{16}
+$$
+
+where ${\theta }_{i}$ represents the parameters of the actor network for UAV $i$ , ${s}_{i}$ is the state of the $i$ -th UAV, and ${\pi }_{{\theta }_{i}}\left( {s}_{i}\right)$ is the policy of UAV $i$ .
+
+§ IV. SIMULATION RESULTS AND DISCUSSION
+
+§ A. PARAMETER SETTING
+
+In this experiment, we build a $2\mathrm{\;{km}} \times 2\mathrm{\;{km}}$ urban area scenario with numerous buildings, each with a maximum height ${h}_{bd}$ of 90 meters. The presence of a LoS link is determined by examining the linear connection between the BSs and the UAVs, considering the distribution of buildings. There are seven BSs in this area, totaling $M = {21}$ sectors. The transmit power of each sector is $\bar{P} = {20}\mathrm{\;{dBm}}$ . The half-power beamwidths ${\phi }_{3dB}$ and ${\theta }_{3dB}$ are both ${65}^{ \circ }$ . The SINR interruption threshold is ${\gamma }_{th} = 1\mathrm{\;{dB}}$ . The noise power is ${\sigma }^{2} = 5\mathrm{\;{dBm}}$ .
+
+The hyperparameter search spaces for the ACO-MATD3 algorithm are: learning rate $= \{ {0.005},{0.01},{0.015}\}$ , discount factor $= \{ {0.93},{0.95},{0.97}\}$ , batch size $= \{ {512},{1024}\}$ . The remaining algorithm parameters and the parameters for the DRL algorithms are provided in Table I.
+
+TABLE I: DRL algorithm parameters setting
+
+| Definition | Value | Definition | Value |
+| --- | --- | --- | --- |
+| Max episodes | 100000 | Max step per episode | 25 |
+| Replay buffer capacity | 1000000 | Batch size | 1024 |
+| Learning rate | 0.01 | Gamma | 0.95 |
+| R_coll | -2 | R_goal | 8 |
+
+## B. RESULT ANALYSIS
+
+The experiment involves 3 UAVs, 3 dynamic targets, and 2 obstacles. To ensure fairness, all parameters were kept constant except for the ACO-MATD3 hyperparameter search space.
+
+
+Fig. 3: Mean reward for different algorithms.
+
+In Fig. 3, we compare the mean reward of the ACO-MATD3 algorithm with other algorithms. At the start of training, reward values drop significantly as the algorithms explore the environment to build awareness. It is clear from the figure that after reaching the converged state, the ACO-MATD3 algorithm achieves a higher mean reward than other algorithms. This highlights the effectiveness of the ACO-MATD3 algorithm, which can dynamically select optimal hyperparameters at different stages, enhancing its performance in complex environments with communication coverage challenges.
+
+
+Fig. 4: Communication return for different algorithms.
+
+The communication returns for the different algorithms are shown in Fig. 4. The final convergence values of the ACO-MATD3 algorithm are higher than those of the other algorithms, indicating that the flight path selected by the ACO-MATD3 algorithm for multi-UAV operations provides stronger communication coverage. This further verifies the effectiveness of the ACO-MATD3 algorithm. In contrast, DDPG shows poor convergence performance in communication return because its UAVs operate independently of one another and cannot learn a common policy. This highlights the improvement brought by the CTDE framework for multi-UAV cooperation.
+
+
+Fig. 5: Mean reward of each UAV in ACO-MATD3 algorithm.
+
+Fig. 5 demonstrates the mean rewards of the three UAVs using the ACO-MATD3 algorithm in this environment. The convergence state aligns with the overall mean reward convergence of the ACO-MATD3 algorithm, demonstrating the superiority of this algorithm with the CTDE mechanism in coordinating the decisions of each UAV. This indicates that the ACO-MATD3 algorithm effectively optimizes both overall performance and individual UAV policies.
+
+## V. CONCLUSION
+
+In this study, we presented the ACO-MATD3 algorithm to address multi-UAV pursuit of dynamic targets under communication coverage constraints. The algorithm dynamically adjusts hyperparameters at different training stages to enhance performance and stability. Experimental results show that ACO-MATD3 outperforms the baseline algorithms in both mean reward and communication return, demonstrating the significant improvement in task efficiency achieved by dynamically adjusting hyperparameters. Future research will explore how to safely conduct multi-UAV pursuit missions in more complex environments, especially those with dynamic obstacles.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/FE4XKb4tcU/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/FE4XKb4tcU/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..3cc6e84d991cc0d1128ec522e71429e06c854b2d
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/FE4XKb4tcU/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,303 @@
+# Research on the classification of ship encounter scenarios based on CAE-LSTM
+
+Taiyu Chai
+
+School of Navigation
+
+Wuhan University of Technology
+
+Wuhan, China
+
+282614@whut.edu.cn
+
+Zhitao Yuan*
+
+School of Navigation
+
+Wuhan University of Technology Wuhan, China
+
+ztyuan@whut.edu.cn
+
+Weiqiang Wang
+
+School of Navigation
+
+Wuhan University of Technology Wuhan, China
+
+weiqiangwang@whut.edu.cn
+
+Shengjie Yang
+
+School of Navigation
+
+Wuhan University of Technology Wuhan, China
+
+yangshengjie@whut.edu.cn
+
+Abstract—To tackle the challenge of recognizing similar ship encounter scenarios under multi-ship interference coupling and dynamic evolution, this paper proposes a classification method that combines a Convolutional Auto-Encoder (CAE) and a Long Short-Term Memory (LSTM) recurrent neural network. First, to extract a large number of genuine ship encounter scenarios from historical AIS data for later categorization, a scenario extraction method based on spatiotemporal proximity restrictions is devised. Then, by setting a time window and rasterizing the scenarios, a CAE-based model is constructed to characterize the spatial interference of ships in the scenarios. Next, an LSTM network is used to learn temporal evolution features, yielding a low-dimensional spatiotemporal vector representation of ship encounter scenarios. Finally, hierarchical clustering is applied to classify different ship encounter scenarios based on these low-dimensional spatiotemporal vectors. The proposed method is validated through extensive experiments using data from Ningbo-Zhoushan Port, and the results show that it can effectively extract real ship encounter scenarios and accurately identify similar scenarios. This research provides robust support for a deep understanding of ship encounter scenarios and the mining of similar ship behavior patterns.
+
+Keywords—ship encounter scenarios, scenario classification, CAE, LSTM
+
+## I. INTRODUCTION
+
+In recent years, the continuous growth in shipping volume has significantly increased maritime traffic density, leading to a rise in ship collision accidents [1]. Research shows that these mishaps are mostly caused by human factors [2]. To mitigate collision incidents caused by human error, researchers have developed numerous navigation collision avoidance algorithms to enhance maritime safety [3]. Historical ship encounter scenarios contain rich avoidance processes and strategies. Extracting these scenarios and analyzing collision avoidance behavior patterns in similar situations allows this implicit knowledge to be integrated into the design of collision avoidance algorithms. This approach enhances the practicality of these algorithms and improves avoidance safety in similar scenarios. Therefore, extracting real ship encounter scenarios and effectively classifying similar scenarios hold significant potential for advancing collision avoidance algorithm design.
+
+Ship encounter scenarios essentially involve interactions between multiple vessels, which can be explained through their trajectories. Because the Automatic Identification System (AIS) is widely used on ships, scholars can collect large quantities of high-quality vessel trajectory data at low cost, providing a rich and reliable data source for extracting ship encounter scenarios. Several academics have carried out related research on encounter scenario extraction using AIS data. Using AIS data, Ma Jie et al. [4,5] successfully extracted ship encounter scenarios by analyzing the spatiotemporal correlations during ship interactions. Similarly, based on the spatiotemporal proximity relationships between ships, Wang et al. [6] identified ship encounter events from AIS data, evaluated the significance of each event, and sampled the data to create test scenarios for collision avoidance algorithms.
+
+Ship encounter scenarios are typical spatiotemporal sequence data, often exhibiting significant temporal evolution characteristics and complex multi-vessel interaction couplings. This complexity makes classifying ship encounter scenarios challenging. Current research mainly focuses on clustering analysis of individual ship trajectories. For instance, to identify frequent paths and discover abnormal trajectories, Li et al. [7] suggested a multi-step clustering methodology that combines principal component analysis, dynamic time warping, and an enhanced trajectory clustering center method. Zhang et al. [8] inferred ship itineraries from AIS data using data-driven techniques such as ant colony optimization and density-based spatial clustering of applications with noise (DBSCAN). Zhang et al. [9] classified ship trajectories using the K-Means and DBSCAN clustering algorithms, then identified potential collision scenarios by detecting illegal evasive maneuvers through relative bearing angles and quantified the collision risk index when evasive actions were taken. However, these methods primarily rely on similarity calculations over individual ship trajectories. Although they perform well in trajectory similarity analysis and classification, encounter scenarios involve the interactions of multiple ships, featuring significant temporal evolution characteristics and complex multi-ship interference effects. As a result, these methods have limitations in representing and measuring the spatio-temporal interference features in encounter scenarios and face challenges when directly applied to encounter scenario classification.
+
+---
+
+This paper is supported by the National Natural Science Foundation of China(NSFC) under Grant NO.52031009. (Corresponding author: Zhitao Yuan).
+
+---
+
+In recent years, deep learning has shown great potential in handling complex spatio-temporal data, and some studies have begun exploring its use in trajectory similarity computation. These works demonstrate how deep learning techniques can more effectively capture the features of ship trajectories. Compared to traditional methods, deep learning models can automatically learn useful features from large amounts of data without relying on manual feature extraction, offering clear advantages [10]. Liang et al. [11] proposed an unsupervised learning method based on a convolutional autoencoder (CAE), which maps trajectories into two-dimensional matrices to generate trajectory images and automatically extracts low-dimensional features via the CAE to compute similarity. Chen et al. [12] introduced a method based on convolutional neural networks (CNN) to identify movement patterns in emerging trajectories. In this approach, a mobility-based trajectory structure is introduced as input to the identification model, and evaluations on real maritime trajectory datasets show the superiority of this method. Kontopoulos et al. [13] proposed a novel method that integrates research in computer vision and trajectory classification, automatically extracting meaningful information from trajectory data and identifying movement patterns without the need for expert input.
+
+Overall, unsupervised and semi-supervised methods based on deep learning are gradually gaining attention in the field of maritime situational awareness. These methods share a common feature: they reduce reliance on manual intervention through automatic feature extraction and demonstrate strong adaptability, especially when handling large amounts of unlabeled data. This motivates developing an unsupervised learning method that represents the complex temporal evolution characteristics of ship encounter scenarios to enable effective classification. Based on the above analysis, this study proposes a ship encounter scenario classification method that combines a Convolutional Autoencoder (CAE) with a Long Short-Term Memory (LSTM) network. This approach jointly considers the spatial interference coupling features among multiple ships and the temporal evolution patterns within the encounter scenario, enabling effective classification of ship encounter scenarios.
+
+## II. METHODOLOGY
+
+This paper focuses on two main tasks: the extraction of real ship encounter scenarios based on AIS data, and the classification of these scenarios using a combination of CAE and LSTM models. As seen in Figure 1, the research framework comprises preprocessing AIS data, ship encounter scenario extraction, and clustering of ship encounter scenarios, organized into the following five steps.
+
+Step 1: Data Preprocessing. Original AIS data is preprocessed to retain key attributes such as timestamp, Maritime Mobile Service Identity (MMSI), ship length, longitude, latitude, speed over ground (SOG), and course over ground (COG). These attributes are essential for calculating the subsequent spatiotemporal relationships of the vessels.
+
+Step 2: Encounter Scenario Extraction. Based on the spatiotemporal proximity analysis of ships, ship encounter scenarios are extracted from historical AIS data. This extraction provides numerous encounter scenarios that reflect the real navigational behaviors of ships for subsequent classification.
+
+Step 3: Time Slicing and Gridding. Time slicing and gridding are applied to the scenarios to characterize their spatiotemporal attributes.
+
+Step 4: Feature Representation. CAE and LSTM represent the spatial and temporal features of the encounter scenarios with feature vectors.
+
+Step 5: Clustering of Encounter Scenarios. Hierarchical clustering is applied to the feature vectors of all scenarios. To achieve the classification of encounter scenarios, the ideal number of clusters is found using the Silhouette Coefficient (SC) index.
+
+In summary, based on the most advanced research findings, our CAE-based ship encounter scenario classification method offers the following innovations. We propose generating information trajectory images by remapping the ship trajectories involved in encounter scenarios into two-dimensional matrices:
+
+1. The similarity between different encounter scenarios is measured by assessing the structural similarity between the corresponding information trajectory images.
+
+2. A convolutional autoencoder neural network is proposed to learn the low-dimensional representation of these images in an unsupervised manner. The learned representation can effectively capture the characteristics of ship encounter scenarios.
+
+
+Fig. 1. Overview of the proposed approach.
+
+## A. Data Preprocessing
+
+The quality of AIS data significantly impacts the accuracy of the extracted encounter scenarios. Due to various factors, AIS data may contain inconsistencies with the actual navigational state of the ships. Therefore, preprocessing is necessary before extracting encounter scenarios[14]. Main preprocessing operations include noise filtering, anomaly removal, data interpolation, and matching of static data information[15].
+
+## B. AIS Data-Based Encounter Scenario Extraction
+
+Spatio-temporal relationships between ships are fundamental for extracting encounter scenarios. In this work, ship encounter scenarios are described as a series of ship pairs that, within a specific time sequence, satisfy specific spatiotemporal proximity conditions. Figure 2 shows a graphical description of ship encounter scenarios. The timeline is shown on the x-axis in Figure 2, and the identification numbers of the ships that are part of the encounter scenarios are shown on the y-axis. The line with arrows represents the navigation period of the Own Ship (OS) in the study area, while the lines with arrows in front of each Target Ship (TS) indicate the periods when the TS meets the preset spatiotemporal proximity conditions with the OS.
+
+
+Fig. 2. Graphical description of ship encounter scenarios.
+
+Additional evolution analysis of the Distance at the Closest Point of Approach (DCPA) and the Time to the Closest Point of Approach (TCPA) is necessary to precisely define spatiotemporal proximity relationships between ships at each time [16]. By analyzing the preprocessed AIS data, the spatiotemporal relationships between ships can be extracted, allowing the identification of ship encounters. Specifically, when two ships remain in the study area for a period exceeding the set time threshold, the minimum distance between them is calculated. If this closest passing distance is less than the distance criterion, the evolution patterns of their relative distance, DCPA, and TCPA are analyzed further. A ship pair is deemed to meet the spatiotemporal proximity constraints that may result in a collision if their relative distance is decreasing and stays within the early-warning distance, and both DCPA and TCPA stay below specific thresholds before the ships reach the closest passing distance. Under such circumstances, the relevant data are saved and the segments of the two ships' tracks that satisfy these spatiotemporal proximity constraints are retrieved. This data includes the beginning and ending times of the extracted segments, as well as static and dynamic information on each ship (such as MMSI, length, width, type, and so on) at each timestamp over this period. Figure 3 provides a graphical illustration of DCPA and TCPA, with the calculation formulas provided below.
+
+$$
+{DCP}{A}_{t} = {D}_{ijt} \cdot \sqrt{1 - {\cos }^{2}\left( {\theta }_{ijt}\right) } \tag{1}
+$$
+
+$$
+{TCP}{A}_{t} = \frac{-{D}_{ijt} \cdot \cos \left( {\theta }_{ijt}\right) }{{v}_{ijt}} \tag{2}
+$$
+
+where ${D}_{ijt}$ represents the distance between ship $i$ and ship $j$ at time $t$, ${v}_{ijt}$ represents the relative speed between ship $i$ and ship $j$ at time $t$, and $\cos \left( {\theta }_{ijt}\right)$ is the cosine of the angle ${\theta }_{ijt}$ between the relative velocity and the line joining the two ships.
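
Eqs. (1)-(2) translate directly into code; a minimal sketch (function and argument names are ours):

```python
import math

def dcpa(d_ij, theta_ij):
    """Eq. (1): DCPA_t = D_ijt * sqrt(1 - cos^2(theta_ijt))."""
    return d_ij * math.sqrt(1.0 - math.cos(theta_ij) ** 2)

def tcpa(d_ij, theta_ij, v_ij):
    """Eq. (2): TCPA_t = -D_ijt * cos(theta_ijt) / v_ijt."""
    return -d_ij * math.cos(theta_ij) / v_ij

# Relative velocity perpendicular to the joining line: DCPA equals the range.
print(dcpa(2.0, math.pi / 2))       # 2.0
print(tcpa(2.0, math.pi, 1.0))      # 2.0
```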
+
+
+Fig. 3. DCPA and TCPA interpretation in graphics.
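
The pair-screening logic described above can be sketched as a single predicate. Every threshold value below is an assumption for illustration, not a value from the paper, and the decreasing-range check is omitted for brevity:

```python
def is_encounter(samples, t_min=600.0, d_min=1.0, dcpa_max=0.5, tcpa_max=600.0):
    """samples: list of (t, distance, dcpa, tcpa) tuples for one ship pair.

    Keep the pair when it co-occurs for at least t_min seconds, its closest
    passing distance falls below d_min (nautical miles, assumed unit), and
    DCPA and TCPA simultaneously drop below their thresholds while closing.
    """
    if not samples or samples[-1][0] - samples[0][0] < t_min:
        return False                      # co-occurrence too short
    if min(d for _, d, _, _ in samples) > d_min:
        return False                      # never passes close enough
    return any(dc <= dcpa_max and 0.0 <= tc <= tcpa_max
               for _, _, dc, tc in samples)

track = [(0, 3.0, 0.3, 500), (300, 1.5, 0.2, 250), (600, 0.8, 0.1, 50)]
print(is_encounter(track))  # True under the assumed thresholds
```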
+
+## C. Encounter Scenario Time Slicing
+
+Ship encounter scenarios, as spatiotemporal sequence data, involve mutual interference between ships that varies over time. Therefore, classifying encounter scenarios requires attention to both the spatial interference characteristics and the temporal evolution patterns of the ships. Time-slicing the scenarios and gridding each slice is the first step in efficiently extracting the spatial and temporal features of these scenarios. This maps the temporal evolution of spatial interference characteristics into multi-time-window grids. Compared with the original trajectory image pixels, raster images contain richer information and are more conducive to the CAE characterizing the interaction of ships in the encounter scenario.
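
A minimal sketch of gridding one slice, assuming trajectory points are binned by count into a fixed G × G raster over the scenario's bounding box; the grid size and the point-count cell semantics are our assumptions, not specified by the paper:

```python
def rasterize(points, bbox, G=32):
    """Bin (lon, lat) points of one time slice into a G x G occupancy grid."""
    lon0, lat0, lon1, lat1 = bbox
    grid = [[0] * G for _ in range(G)]
    for lon, lat in points:
        col = min(int((lon - lon0) / (lon1 - lon0) * G), G - 1)
        row = min(int((lat - lat0) / (lat1 - lat0) * G), G - 1)
        grid[row][col] += 1  # cell value = number of track points in the cell
    return grid

g = rasterize([(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)], (0.0, 0.0, 1.0, 1.0), G=4)
print(g[0][0], g[2][2], g[3][3])  # 1 1 1
```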
+
+
+Fig. 4. Raster map generation and scene time slicing.
+
+Thus, this paper projects the original ship trajectory into a two-dimensional matrix to generate a trajectory raster image based on the time sequence of the encounter scenarios, maintaining the original spatiotemporal characteristics. To balance the information richness of encounter scene slices against the total number of slices, the time window duration is set to 3 minutes and the time window step to 1 minute. The particular procedure is depicted in Figure 4.
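
The 3-minute window with 1-minute step can be sketched as a sliding window over a point sequence; the 10 s AIS sampling interval used below is an assumption:

```python
WINDOW_S, STEP_S, SAMPLE_S = 180, 60, 10      # 3 min window, 1 min step
PTS_PER_WINDOW = WINDOW_S // SAMPLE_S         # 18 points per slice
PTS_PER_STEP = STEP_S // SAMPLE_S             # slide by 6 points

def time_slices(track):
    """Yield overlapping time slices of one scenario's point sequence."""
    for start in range(0, len(track) - PTS_PER_WINDOW + 1, PTS_PER_STEP):
        yield track[start:start + PTS_PER_WINDOW]

slices = list(time_slices(list(range(60))))   # a 10-minute track
print(len(slices))  # (60 - 18) // 6 + 1 = 8 slices
```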
+
+## D. Feature Representation of Encounter Scenarios
+
+To fully represent the spatial interaction features between ships from multi-time-window raster images, to learn the contextual relationships between feature sequences, and to uncover the temporal evolution patterns of the scenarios, we employ a multi-layer CAE neural network combined with an LSTM for unsupervised learning and feature representation. The CAE, with its convolutional and pooling layers, learns to identify local spatial interactions and patterns within each raster image [17]. Once spatial features are obtained, they are fed into the LSTM model, which captures the temporal evolution of these features over multiple time windows. The combination of CAE and LSTM enables a comprehensive representation of both the spatial interactions between ships and their dynamic changes over time.
+
+This study employs a CAE-based autoencoder architecture. Compared to traditional autoencoders, the CAE incorporates convolutional and pooling layers, allowing for better extraction of local features related to ship spatial interference in the scene grid maps. As shown in Figure 5, the CAE model consists of three convolutional layers, three max-pooling layers, and fully connected layers. The encoder transforms input scene grid maps into low-dimensional feature vectors, thereby representing the spatial features of encounter scenarios. The decoder uses ReLU as the activation function to reconstruct the scene grid maps from the low-dimensional feature vectors. Additionally, to enhance the feature representation capability of the CAE, this study introduces a loss function sensitive to image structure, specifically the structural similarity (SSIM) index, to ensure the accuracy of the extracted features. To further elucidate the working mechanism of the CAE model, the operations of the convolutional and fully connected layers are described as follows:
+
+$$
+{x}_{k}^{l} = {A}_{E}\left( {{f}_{k}^{l} \odot {x}_{k}^{\left( l - 1\right) } + {b}_{k}^{l}}\right) \tag{3}
+$$
+
+$$
+Y = \mathcal{H}\left( x\right) = {wx} + \beta \tag{4}
+$$
+
+where $l$ represents the layer number, $\odot$ denotes the convolution operation, ${f}_{k}^{l}$ represents the convolution kernel, ${x}_{k}^{l - 1}$ represents the feature map, ${b}_{k}^{l}$ is the bias term, and $Y$ is the feature vector with a final output dimension $L$ . The loss function, through training the model, ensures that the reconstruction $\widetilde{x}$ of the decoder output has minimal error relative to the original input $x$ . The following is the definition of the loss function SSIM:
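
Eq. (3) is an ordinary 2-D convolution followed by an activation; a tiny single-channel, valid-mode sketch with ReLU standing in for the activation $A_E$ (all values are illustrative):

```python
def conv2d_relu(x, f, b=0.0):
    """Valid-mode 2-D convolution of Eq. (3) with a ReLU activation."""
    kh, kw = len(f), len(f[0])
    out = []
    for i in range(len(x) - kh + 1):
        row = []
        for j in range(len(x[0]) - kw + 1):
            s = sum(f[u][v] * x[i + u][j + v]
                    for u in range(kh) for v in range(kw)) + b
            row.append(max(0.0, s))  # ReLU
        out.append(row)
    return out

x = [[1, 0, 2], [0, 1, 0], [2, 0, 1]]  # 3x3 input feature map
f = [[1, 0], [0, 1]]                   # 2x2 convolution kernel
print(conv2d_relu(x, f))  # [[2.0, 0.0], [0.0, 2.0]]
```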
+
+$$
+\mathcal{F}\left( {x,\widetilde{x}}\right) = 1 - \frac{1}{M}\mathop{\sum }\limits_{{m = 1}}^{M}\operatorname{SSIM}\left( {x,{\widetilde{x}}_{m}}\right) \tag{5}
+$$
+
+$$
+\operatorname{SSIM}\left( {{x}_{m},{\widetilde{x}}_{m}}\right) = \frac{\left( {2{\mu }_{{x}_{m}}{\mu }_{{\widetilde{x}}_{m}} + {c}_{1}}\right) \left( {2{\sigma }_{{x}_{m}{\widetilde{x}}_{m}} + {c}_{2}}\right) }{\left( {{\mu }_{{x}_{m}}^{2} + {\mu }_{{\widetilde{x}}_{m}}^{2} + {c}_{1}}\right) \left( {{\sigma }_{{x}_{m}}^{2} + {\sigma }_{{\widetilde{x}}_{m}}^{2} + {c}_{2}}\right) } \tag{6}
+$$
+
+Fig. 5. The architecture of convolutional autoencoder.
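
A single-window version of the SSIM loss in Eqs. (5)-(6) can be sketched as follows, treating each image as a flat list of pixels and using the customary constants $c_1$, $c_2$ for data in $[0,1]$; these are assumptions, and practical SSIM is usually computed over local windows:

```python
def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Global (single-window) SSIM of Eq. (6) for two flat pixel lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

def ssim_loss(batch_x, batch_rec):
    """Eq. (5): 1 - mean SSIM over the M images of a batch."""
    return 1.0 - sum(ssim(x, r) for x, r in zip(batch_x, batch_rec)) / len(batch_x)

img = [i / 63.0 for i in range(64)]     # a toy 8x8 "image", flattened
print(ssim(img, img))                   # identical images give SSIM = 1
print(ssim_loss([img], [img]))          # so the loss is 0
```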
+
+LSTM is widely used for studying persistent features in time series data and can effectively learn dependencies within time series [18]. Therefore, LSTM is chosen to represent the temporal feature evolution. The LSTM primarily consists of three gating units: the forget gate, the input gate, and the output gate, as shown in Figure 6. The forget gate controls the transmission or forgetting of information, as described by Equation (7):
+
+$$
+{f}_{t} = \sigma \left( {{W}_{f} \cdot \left\lbrack {{h}_{t - 1},{x}_{t}}\right\rbrack + {b}_{f}}\right) \tag{7}
+$$
+
+where $W$ represents a weight matrix, $b$ represents a bias, $\left\lbrack {{h}_{t - 1},{x}_{t}}\right\rbrack$ is the vector formed from the hidden layer output ${h}_{t - 1}$ of the previous LSTM module and the input ${x}_{t}$ of the current module, and $\sigma \left( \cdot \right)$ is the sigmoid function.
+
+
+Fig. 6. LSTM unit structure diagram.
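
Eq. (7) in scalar form, as a minimal numeric sketch (the weights and inputs below are illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forget_gate(W_f, h_prev, x_t, b_f):
    """Eq. (7): f_t = sigmoid(W_f . [h_{t-1}, x_t] + b_f), for scalar h and x."""
    concat = [h_prev, x_t]                       # [h_{t-1}, x_t]
    z = sum(w * v for w, v in zip(W_f, concat)) + b_f
    return sigmoid(z)

print(forget_gate([0.0, 0.0], 0.3, -0.7, 0.0))   # zero weights -> 0.5
```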
+
+## E. Clustering of Encounter Scenarios
+
+Through the method above, feature vectors can ultimately describe the intricate spatial relationships and temporal evolution of ship encounter scenarios. The similarity between ship encounter scenarios is determined by calculating the distance between the corresponding feature vectors. Once the distances are obtained, a clustering algorithm classifies the scenarios, and the results are evaluated using metrics to obtain the final classification outcome. Hierarchical clustering, which is simple and widely used, can reflect the step-by-step partitioning process of each object through a hierarchical clustering tree [19,20]. Therefore, hierarchical clustering is chosen as the clustering algorithm for this study's encounter scenarios.
+
+In hierarchical clustering, it is challenging to directly select the best clustering result. Therefore, an indicator is needed to select the appropriate number of clusters. In this paper, the value of $k$ is adaptively determined using the silhouette coefficient. ${SC}$ is defined from the mean distance between a point and the other points in its own cluster and the mean distance between that point and the points of the nearest neighboring cluster. The better the categorization effect, the higher the SC value. Formula (8) gives the ${SC}$ calculation.
+
+$$
+{SC}\left( i\right) = \frac{{CTb}\left( i\right) - {CTa}\left( i\right) }{\max \{ {CTa}\left( i\right) ,{CTb}\left( i\right) \} } \tag{8}
+$$
+
+Here, ${CTa}\left( i\right)$ is the average distance between scenario $i$ and the other scenarios in the same cluster, whereas ${CTb}\left( i\right)$ is the minimal average distance between scenario $i$ and the scenarios of any other cluster. The silhouette coefficient ranges from -1 to 1, with higher values indicating better clustering performance.
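
A hand-rolled silhouette value for a single scenario, following Eq. (8) with $CTa(i)$ the mean intra-cluster distance and $CTb(i)$ the minimum mean distance to any other cluster; a toy 2-D feature space stands in for the learned feature vectors:

```python
import math

def silhouette_point(i, points, labels):
    """SC(i) = (CTb(i) - CTa(i)) / max(CTa(i), CTb(i)), Eq. (8)."""
    dist = math.dist
    same = [dist(points[i], p) for j, p in enumerate(points)
            if labels[j] == labels[i] and j != i]
    ct_a = sum(same) / len(same)   # mean distance within own cluster
    ct_b = min(                    # min mean distance to another cluster
        sum(dist(points[i], points[j]) for j in range(len(points))
            if labels[j] == c) / labels.count(c)
        for c in set(labels) if c != labels[i]
    )
    return (ct_b - ct_a) / max(ct_a, ct_b)

pts = [(0, 0), (0, 1), (10, 0), (10, 1)]
labels = [0, 0, 1, 1]
print(silhouette_point(0, pts, labels))  # well-separated clusters -> close to 1
```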
+
+## III. CASE STUDY
+
+## A. Data collection and processing
+
+This research uses data from November 1, 2018, to November 30, 2018, for the outside waters of Ningbo-Zhoushan Port. As shown in Fig. 7, the targeted area is situated between latitudes ${29}^{ \circ }{30}^{\prime }\mathrm{N} - {29}^{ \circ }{49}^{\prime }\mathrm{N}$ and longitudes ${122}^{ \circ }{20}^{\prime }\mathrm{E} - {122}^{ \circ }{60}^{\prime }\mathrm{E}$ . To guarantee the precision of the ship encounter scenario analysis, data from special-mission vessels, including tugboats, fishing boats, and anchored ships, were removed. The remaining data then underwent the preprocessing procedures in preparation for further experiments. It is evident from the trajectory distribution that there are many ship interactions in the research area.
+
+
+Fig. 7. The location of the study area.
+
+## B. Analysis and validation of scenario extraction results
+
+Three sample ship encounter scenarios are shown in Figure 8 to verify the extracted ship encounter scenarios. Each scenario is explained with four graphics: the first shows the encounter process from start to finish using plotted trajectories, with the ship icon indicating the end state of the interaction. The remaining three graphs (a), (b), and (c) show the progression of relative distance, DCPA, and TCPA between the OS and the other TSs during the encounter. In these cases, the DCPA remains small for a period, the TCPA changes from positive to negative, and the relative distance first drops to a very low value before gradually increasing. These evolution patterns align with real-world encounter experience, validating the retrieved scenarios. The evolution trends of relative distance, DCPA, and TCPA are consistent across all extracted scenarios.
+
+
+Fig. 8. Encounter situations involving varying numbers of ships and the development of their features.
+
+Due to computational cost constraints, experimenting with all ship encounter scenarios is difficult. Therefore, it is necessary to select common encounter scenarios in maritime navigation as experimental data. As seen in Figure 9, the extracted encounter scenarios were first categorized and statistically examined according to the number of ships involved. According to the classification results, two-ship encounters make up around half of all extracted scenarios, making them the most frequent. As the number of ships involved increases, the number of scenarios gradually decreases, with a substantial decline occurring when the number of ships exceeds five.
+
+
+Fig. 9. Scenario classification outcomes depending on the number of ships.
+
+To ensure the experimental data is representative while also saving computational cost, two-ship and three-ship encounter scenarios were chosen as the experimental dataset. This selection includes common two-ship encounters and the more complex multi-ship encounters that occur frequently in actual maritime navigation. The durations of the two types of encounter scenarios in the experimental dataset were then statistically analyzed, and Figure 10 displays the results. The analysis revealed that the proportions of two-ship and three-ship scenarios lasting more than 10 minutes were 84.6% and 90.1%, respectively. This data segment is representative of the scenarios exceeding 10 minutes and provides an important reference for the experimental analysis. Based on maritime navigation experience, scenarios lasting 10-20 minutes were chosen as experimental data. This selection ensures the significance of ship interactions while preventing the dataset from becoming overly large. Therefore, two-ship and three-ship encounter scenarios lasting 10-20 minutes were chosen as the final experimental dataset.
+
+
+Fig. 10. Duration statistics for encounter scenarios.
+
+## C. Experimental Software Environment and Model Training
+
+For the experimental software environment, Python was chosen, with the PyTorch deep learning framework used to train the model. The hyperparameter settings are shown in Table I. In Table I, Adam is the adaptive moment estimation optimizer; Batch size is the number of samples trained in each batch; Epoch is the number of training epochs; and Num Hidden Unit is the hidden layer dimension of the LSTM.
+
+TABLE I. HYPERPARAMETER SETTINGS
+
+| Hyperparameter | Value |
+| --- | --- |
+| Optimizer | Adam |
+| CAE hidden layer dimensions | 8 |
+| Batch size | 128 |
+| Learning rate | 0.001 |
+| Epoch | 760 |
+| Num Hidden Unit | 3 |
+
+A total of 500 scenarios were selected from the experimental dataset for model training. First, the encounter scenarios were time-sliced, yielding 7,366 and 7,261 scenario grid images for the two datasets, respectively. These were then input into the CAE to extract spatial features. After 760 training epochs, the change in the loss function value with the number of training epochs is shown in Figure 11. The training error converges to a very small value, indicating that the trained CAE can reconstruct the input data from the latent-layer features. To demonstrate this, the original scenario images and their reconstructed versions are shown in Figure 12. The first row displays the original ship encounter scenarios, while the second row shows the reconstructed images. The structural similarity between the original and reconstructed scenarios demonstrates that the CAE model excels at capturing low-dimensional representations and reconstructing high-quality images from these features. Finally, the feature matrix generated by the CAE is input into the LSTM model to learn the temporal evolution of the scenarios' spatial features, outputting feature vectors to represent them.
+
+
+Fig. 11. Loss during the training of CAE.
+
+
+Fig. 12. Original and reconstructed encounter scenario images from the CAE.
+
+## D. Clustering and Evaluation
+
+The ship encounter scenarios were represented by feature vectors using the CAE-LSTM approach. Hierarchical clustering was then applied to these feature vectors to classify the scenarios. The SC was used to determine the ideal number of clusters and to evaluate the effectiveness of the clustering. The cluster count was varied from two to fifteen, and the ${SC}$ values varied accordingly, as shown in Fig. 13.
+
+
+Fig. 13. Variation of silhouette coefficient values with the number of clusters.
+
+Fig. 13 shows that both datasets achieve the highest silhouette coefficient with two clusters. However, too few clusters must be avoided to preserve a detailed separation of the microscopic aspects of ship interactions across encounter scenarios. Therefore, 5 and 4 were chosen as the final numbers of clusters for the two datasets, respectively; these values lie at the inflection points of the silhouette-coefficient curves. Beyond them, the silhouette coefficient generally declines as the number of clusters increases, indicating deteriorating clustering performance.
+
+After clustering the encounter scenarios, the frequency and duration distributions of each cluster are shown in Figs. 14 and 15, respectively. For further analysis, the clusters with the highest and lowest frequencies in each dataset were selected for feature analysis.
+
+
+Fig. 14. Frequency distribution of encounter scenarios.
+
+
+Fig. 15. The duration distribution of each cluster of encounter scenarios.
+
+The interaction of ship trajectories and the evolution of two features, relative distance and TCPA, are shown in Figs. 16 and 17. The first row of three images shows the complete trajectories of three encounter scenarios, where " $\circ$ " and " $\times$ " mark the start and end positions of the encounter, respectively. The evolution of relative distance and TCPA for the corresponding scenarios is shown in the other two rows. The first two columns belong to the same cluster and illustrate the common characteristics of its scenarios; the third column is from a different cluster to highlight the distinctions.
+
+For the two-ship encounter scenarios, Cluster 4 features ships moving in opposite directions: a head-on encounter in which the relative distance first decreases and then increases, while the TCPA decreases linearly. Cluster 5, by contrast, consists of ships moving in the same direction, with the relative distance remaining roughly constant and the TCPA decreasing with significant fluctuations. For the three-ship encounter scenarios, Cluster 1 involves one target ship crossing paths with the OS while the other target ship meets it head-on. The relative distances of both target ships first decrease and then increase, with increases of different magnitudes; the TCPA shows a decreasing trend, decreasing linearly for one ship and fluctuating noticeably for the other. In contrast, Cluster 3 features both target ships crossing paths with the OS. Although the relative-distance trend is similar to Cluster 1, the ships in Cluster 3 move in the same direction, resulting in consistent changes in relative distance and consistently fluctuating TCPA before it reaches zero. In summary, the trajectory interactions, feature evolution, and durations within the same cluster exhibit consistent patterns, while different clusters show distinctly different patterns.
+
+
+Fig. 16. Trajectory interaction and feature evolution process of the two-ship encounter scenarios.
+
+
+Fig. 17. Trajectory interaction and feature evolution process of the three-ship encounter scenarios.
+
+Through the above analysis, the ship encounter scenario clustering method proposed in this paper effectively classifies different scenarios. The visual verification of trajectory interactions and feature evolution during the encounter process confirms the validity of this classification method. It demonstrates the various interaction patterns and contexts among multiple ships in complex navigable waters, aiding in distinguishing and understanding different types of ship encounter scenarios.
+
+## IV. CONCLUSION
+
+This paper proposes a method for classifying ship encounter scenarios. First, ship encounter scenarios are segmented using time windows, and convolutional autoencoders generate spatial feature vectors for each time slice. Next, these spatial feature vectors are sequentially input into a long short-term memory (LSTM) network to produce temporal feature vectors. Finally, hierarchical clustering groups the feature vectors based on their spatiotemporal attributes. Experimental results demonstrate that the method effectively classifies encounter scenarios involving various numbers of ships, and the visualization of the interaction process and the dynamic evolution of features between ships confirms the classification's effectiveness.
+
+## V. FUTURE WORK
+
+In the future, we plan to make improvements in the following two directions:
+
+1. Increase the size of the experimental data sample and optimize the scenario construction method to develop a multi-ship encounter scenario library tailored for complex navigational waters. Additionally, establish a query index based on ship scenarios.
+
+2. Improve the classification method for ship encounter scenarios and enrich the dynamic characterization of encounter scenarios; design application algorithms based on the scenario library, such as scenario prediction, risk assessment, and ship collision avoidance; and further study the characterization of multi-ship encounter scenarios and their evolution laws in depth.
+
+## REFERENCES
+
+[1] Xin, X., Liu, K., Yang, Z., Zhang, J., & Wu, X. (2021). A probabilistic risk approach for the collision detection of multi-ships under spatiotemporal movement uncertainty. Reliability Engineering & System Safety, 215, 107772.
+
+[2] Fan, S., Blanco-Davis, E., Yang, Z., Zhang, J., & Yan, X. (2020). Incorporation of human factors into maritime accident analysis using a data-driven Bayesian network. Reliability Engineering & System Safety, 203, 107070.
+
+[3] Goerlandt, F., & Montewka, J. (2015). Maritime transportation risk analysis: Review and analysis in light of some foundational issues. Reliability Engineering & System Safety, 138, 115-134.
+
+[4] Ma, J., Liu, Q., Zhang, C., Liu, K., & Zhang, Y. (2019). Spatiotemporal analysis of AIS-based data and extraction of ship encounter situations. Journal of China Safety Science, (5), 111-116.
+
+[5] Ma, J., Li, W., Zhang, C., & Zhang, Y. (2021). Ship encounter situation identification in converging waters based on AIS data. China Navigation, (01), 68-74.
+
+[6] Wang, W., Huang, L., Liu, K., Zhou, Y., Yuan, Z., Xin, X., & Wu, X. (2024). Ship encounter scenario generation for collision avoidance algorithm testing based on AIS data. Ocean Engineering, 291, 116436.
+
+[7] Li, H., Liu, J., Liu, R. W., Xiong, N., Wu, K., & Kim, T. H. (2017). A dimensionality reduction-based multi-step clustering method for robust vessel trajectory analysis. Sensors, 17(8), 1792.
+
+[8] Zhang, S. K., Shi, G. Y., Liu, Z. J., Zhao, Z. W., & Wu, Z. L. (2018). Data-driven based automatic maritime routing from massive AIS trajectories in the face of disparity. Ocean Engineering, 155, 240-250.
+
+[9] Zhang, M., Montewka, J., Manderbacka, T., Kujala, P., & Hirdaris, S. (2021). A big data analytics method for the evaluation of ship-ship collision risk reflecting hydrometeorological conditions. Reliability Engineering & System Safety, 213, 107674.
+
+[10] Zhou, F., Li, J., & Wang, Y. (2023). An improved CNN-LSTM network for modulation identification relying on periodic features of signal. IET Communications, 17(18), 2097-2106.
+
+[11] Liang, M., Liu, R. W., Li, S., Gao, Z., Liu, X., & Lu, F. (2021). An unsupervised learning method with convolutional auto-encoder for vessel trajectory similarity computation. Ocean Engineering, 225, 108803.
+
+[12] Chen, X., Liu, Y., Achuthan, K., Zhang, X., & Chen, J. (2021). A semi-supervised deep learning model for ship encounter situation classification. Ocean Engineering, 239, 109824.
+
+[13] Kontopoulos, I., Makris, A., Zissis, D., & Tserpes, K. (2021, June). A computer vision approach for trajectory classification. In 2021 22nd IEEE International Conference on Mobile Data Management (MDM) (pp. 163- 168). IEEE.
+
+[14] Chun, D. H., Roh, M. I., Lee, H. W., Ha, J., & Yu, D. (2021). Deep reinforcement learning-based collision avoidance for an autonomous ship. Ocean Engineering, 234, 109216.
+
+[15] Liu, K., Yuan, Z., Xin, X., Zhang, J., & Wang, W. (2021). Conflict detection method based on dynamic ship domain model for visualization of collision risk hot-spots. Ocean Engineering, 242, 110143.
+
+[16] Li, S., Liu, J., & Negenborn, R. R. (2019). Distributed coordination for collision avoidance of multiple ships considering ship maneuverability. Ocean Engineering, 181, 212-226.
+
+[17] Wang, W., Ramesh, A., Zhu, J., Li, J., & Zhao, D. (2020). Clustering of driving encounter scenarios using connected vehicle trajectories. IEEE Transactions on Intelligent Vehicles, 5(3), 485-496.
+
+[18] Chen, H., Shao, Y., Ao, G., & Zhang, H. (2021). Speed prediction based on GCN-LSTM neural network for online maps. Journal of Transportation Engineering, (04), 183-196.
+
+[19] Fahad, A., Alshatri, N., Tari, Z., Alamri, A., Khalil, I., Zomaya, A. Y., ... & Bouras, A. (2014). A survey of clustering algorithms for big data: Taxonomy and empirical analysis. IEEE Transactions on Emerging Topics in Computing, 2(3), 267-279.
+
+[20] Fan, J. (2019). OPE-HCA: An optimal probabilistic estimation approach for hierarchical clustering algorithm. Neural Computing and Applications, 31, 2095-2105.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/FE4XKb4tcU/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/FE4XKb4tcU/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..58a63d89f71aa9c66c0373dccc2a5839f5ac69df
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/FE4XKb4tcU/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,279 @@
+§ RESEARCH ON THE CLASSIFICATION OF SHIP ENCOUNTER SCENARIOS BASED ON CAE-LSTM
+
+Taiyu Chai
+
+School of Navigation
+
+Wuhan University of Technology
+
+Wuhan, China
+
+282614@whut.edu.cn
+
+Zhitao Yuan*
+
+School of Navigation
+
+Wuhan University of Technology Wuhan, China
+
+ztyuan@whut.edu.cn
+
+Weiqiang Wang
+
+School of Navigation
+
+Wuhan University of Technology Wuhan, China
+
+weiqiangwang@whut.edu.cn
+
+Shengjie Yang
+
+School of Navigation
+
+Wuhan University of Technology Wuhan, China
+
+yangshengjie@whut.edu.cn
+
+Abstract - To tackle the challenge of recognizing similar ship encounter scenarios under multi-ship interference coupling and dynamic evolution, this paper proposes a classification method that combines a Convolutional Auto-Encoder (CAE) and a Long Short-Term Memory (LSTM) recurrent neural network. First, to extract a large number of genuine ship encounter scenarios from historical AIS data for categorization, an extraction method based on spatiotemporal proximity restrictions is devised. Then, by setting a time window and rasterizing the scenarios, a CAE-based model is constructed to characterize the spatial interference of ships in the scenarios. Further, an LSTM network is used to learn temporal evolution features, achieving a low-dimensional spatiotemporal vector representation of ship encounter scenarios. Finally, hierarchical clustering is applied to classify different ship encounter scenarios based on these low-dimensional spatiotemporal vectors. The proposed method is validated through extensive experiments using data from Ningbo-Zhoushan Port, and the results show that it can effectively extract real ship encounter scenarios and accurately identify similar ones. This research provides robust support for a deep understanding of ship encounter scenarios and the mining of similar ship behavior patterns.
+
+Keywords - ship encounter scenarios, scenario classification, CAE, LSTM
+
+§ I. INTRODUCTION
+
+In recent years, continuous growth in shipping volume has significantly increased maritime traffic density, leading to a rise in ship collision accidents [1]. Research shows that these mishaps are mostly caused by human factors [2]. To mitigate collision incidents caused by human error, researchers have developed numerous collision avoidance algorithms to enhance maritime safety [3]. Historical ship encounter scenarios contain rich avoidance processes and strategies. Extracting these scenarios and analyzing collision avoidance behavior patterns in similar situations allows this implicit knowledge to be integrated into the design of collision avoidance algorithms, enhancing their practicality and improving avoidance safety in similar scenarios. Therefore, extracting real ship encounter scenarios and effectively classifying similar scenarios hold significant potential for advancing collision avoidance algorithm design.
+
+Ship encounter scenarios essentially involve interactions between multiple vessels, which can be explained through their trajectories. Because the Automatic Identification System (AIS) is widely used on ships, scholars can collect large quantities of high-quality vessel trajectory data at low cost, providing a rich and reliable data source for extracting ship encounter scenarios. Several researchers have studied encounter scenario extraction from AIS data. Using AIS data, Ma et al. [4,5] successfully extracted ship encounter scenarios by analyzing the spatiotemporal correlations during ship interactions. Similarly, based on the spatiotemporal proximity relationships between ships, Wang et al. [6] identified potential ship encounters from AIS data, evaluated the significance of each event, and sampled the data to create test scenarios for collision avoidance algorithms.
+
+Ship encounter scenarios are typical spatiotemporal sequence data, often exhibiting significant temporal evolution characteristics and complex multi-vessel interaction couplings. This complexity makes classifying ship encounter scenarios challenging. Current research mainly focuses on clustering analysis of individual ship trajectories. For instance, to identify frequent paths and discover abnormal trajectories, Li et al. [7] suggested a multi-step clustering methodology that combines principal component analysis, dynamic time warping, and an enhanced trajectory clustering center method. Zhang et al. [8] inferred ship itineraries from AIS data using data-driven techniques such as ant colony optimization and density-based spatial clustering of applications with noise (DBSCAN). Zhang et al. [9] classified ship trajectories using the K-Means and DBSCAN clustering algorithms, then identified potential collision scenarios by detecting illegal evasive maneuvers through relative bearing angles and quantified the collision risk index when evasive actions were taken. However, these methods rely primarily on similarity calculations over individual ship trajectories. Although they perform well in trajectory similarity analysis and classification, encounter scenarios involve the interactions of multiple ships, with significant temporal evolution characteristics and complex multi-ship interference effects. As a result, these methods are limited in representing and measuring the spatiotemporal interference features of encounter scenarios and face challenges when applied directly to encounter scenario classification.
+
+This paper is supported by the National Natural Science Foundation of China(NSFC) under Grant NO.52031009. (Corresponding author: Zhitao Yuan).
+
+In recent years, deep learning has shown great potential in handling complex spatiotemporal data, and some studies have begun exploring its use in trajectory similarity computation. These works demonstrate that deep learning techniques can capture the features of ship trajectories more effectively. Compared to traditional methods, deep learning models can automatically learn useful features from large amounts of data without relying on manual feature extraction, offering clear advantages [10]. Liang et al. [11] proposed an unsupervised learning method based on a convolutional autoencoder (CAE), which maps trajectories into two-dimensional matrices to generate trajectory images and automatically extracts low-dimensional features via the CAE to compute similarity. Chen et al. [12] introduced a convolutional neural network (CNN) based method to identify movement patterns in emerging trajectories; a mobility-based trajectory structure serves as input to the identification model, and evaluations on real maritime trajectory datasets show the superiority of this method. Kontopoulos et al. [13] proposed a novel method integrating computer vision and trajectory classification, automatically extracting meaningful information from trajectory data and identifying movement patterns without expert input.
+
+Overall, unsupervised and semi-supervised deep learning methods are gradually gaining attention in the field of maritime situational awareness. These methods share a common feature: they reduce reliance on manual intervention through automatic feature extraction and show strong adaptability, especially when handling large amounts of unlabeled data. This motivates an unsupervised learning method that represents the complex temporal evolution characteristics of ship encounter scenarios so that they can be classified effectively. Based on the above analysis, this study proposes a ship encounter scenario classification method that combines a Convolutional Autoencoder (CAE) with a Long Short-Term Memory (LSTM) network. The approach jointly considers the spatial interference coupling features among multiple ships and the temporal evolution patterns within an encounter scenario, enabling effective classification of ship encounter scenarios.
+
+§ II. METHODOLOGY
+
+This paper focuses on two main tasks: the extraction of real ship encounter scenarios from AIS data, and the classification of these scenarios using a combination of CAE and LSTM models. As shown in Fig. 1, the research framework comprises three stages, namely AIS data preprocessing, ship encounter scenario extraction, and encounter scenario clustering, carried out in the following steps.
+
+Step 1: Data Preprocessing. Original AIS data is preprocessed to retain key attributes such as timestamp, Maritime Mobile Service Identity (MMSI), ship length, longitude, latitude, speed over ground (SOG), and course over ground (COG). These attributes are essential for calculating the subsequent spatiotemporal relationships of the vessels.
+
+Step 2: Encounter Scenario Extraction. Based on the spatiotemporal proximity analysis of ships, ship encounter scenarios are extracted from historical AIS data. This extraction provides numerous encounter scenarios that reflect the real navigational behaviors of ships for subsequent classification.
+
+Step 3: Time Slicing and Gridding. Time slicing and gridding are applied to the scenarios to characterize their spatiotemporal attributes.
+
+Step 4: Feature Representation. CAE and LSTM represent the spatial and temporal features of the encounter scenarios with feature vectors.
+
+Step 5: Clustering of Encounter Scenarios. Hierarchical clustering is applied to the feature vectors of all scenarios. To achieve the classification of encounter scenarios, the ideal number of clusters is found using the Silhouette Coefficient (SC) index.
+
+In summary, building on state-of-the-art research, our CAE-based ship encounter scenario classification method generates informative trajectory images by remapping the ship trajectories involved in encounter scenarios into two-dimensional matrices, and offers the following innovations:
+
+1. The similarity between different encounter scenarios is measured by assessing the structural similarity between the corresponding information trajectory images.
+
+2. A convolutional autoencoder neural network is proposed to learn the low-dimensional representation of these images in an unsupervised manner. The learned representation can effectively capture the characteristics of ship encounter scenarios.
+
+
+Fig. 1. Overview of the proposed approach.
+
+§ A. DATA PREPROCESSING
+
+The quality of AIS data significantly impacts the accuracy of the extracted encounter scenarios. Due to various factors, AIS data may contain records inconsistent with the actual navigational state of the ships, so preprocessing is necessary before extracting encounter scenarios [14]. The main preprocessing operations include noise filtering, anomaly removal, data interpolation, and matching of static data information [15].
+
+§ B. EXTRACTION OF ENCOUNTER SCENARIOS BASED ON AIS DATA
+
+Spatiotemporal relationships between ships are fundamental for extracting encounter scenarios. In this work, a ship encounter scenario is described as a series of ship pairs that, within a specific time sequence, satisfy specific spatiotemporal proximity conditions. Fig. 2 gives a graphical description of ship encounter scenarios: the x-axis shows the timeline, and the y-axis the identification numbers of the ships involved in the encounter. The line with arrows represents the navigation period of the Own Ship (OS) in the study area, while the line with arrows in front of each Target Ship (TS) indicates the period during which that TS meets the preset spatiotemporal proximity conditions with the OS.
+
+
+Fig. 2. Graphical description of ship encounter scenarios.
+
+Additional evolution analysis of the Distance at the Closest Point of Approach (DCPA) and the Time to the Closest Point of Approach (TCPA) is necessary to precisely define the spatiotemporal proximity relationship between ships at each time [16]. By analyzing the preprocessed AIS data, the spatiotemporal relationships between ships can be extracted, allowing ship encounters to be identified. Specifically, when two ships remain in the study area together for longer than the set time threshold, the minimum distance between them is calculated. If this closest passing distance is below the distance criterion, the evolution of their relative distance, DCPA, and TCPA is analyzed further. A ship pair is deemed to meet the spatiotemporal proximity constraints that may result in a collision if, before reaching the closest passing distance, their relative distance is decreasing and stays within the early-warning distance, and both DCPA and TCPA remain below their respective thresholds. In that case, the trajectory segments of the two ships that satisfy these constraints are extracted and saved, including the start and end times of the segments as well as the static and dynamic information of each ship (MMSI, length, width, type, and so on) at every timestamp in this period. Fig. 3 provides a graphical illustration of DCPA and TCPA, with the calculation formulas given below.
+
+$$
+{DCP}{A}_{t} = {D}_{ijt} \cdot \sqrt{1 - {\cos }^{2}\left( {\theta }_{ijt}\right) } \tag{1}
+$$
+
+$$
+{TCP}{A}_{t} = \frac{-{D}_{ijt} \cdot \cos \left( {\theta }_{ijt}\right) }{{v}_{ijt}} \tag{2}
+$$
+
+where ${D}_{ijt}$ represents the distance between ship $i$ and ship $j$ at time $t$, ${v}_{ijt}$ represents the relative speed between ship $i$ and ship $j$ at time $t$, and $\cos \left( {\theta }_{ijt}\right)$ is the cosine of the angle between the relative velocity vector and the line joining the two ships.
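Under these definitions, Eqs. (1) and (2) translate directly into code. A minimal planar sketch (the function and variable names are ours, chosen for illustration; positions in meters, velocities in m/s):

```python
import math

def dcpa_tcpa(pi, vi, pj, vj):
    """DCPA and TCPA of ships i and j from planar positions p = (x, y)
    and velocity vectors v = (vx, vy), following Eqs. (1)-(2)."""
    dx, dy = pj[0] - pi[0], pj[1] - pi[1]        # line joining the two ships
    rvx, rvy = vj[0] - vi[0], vj[1] - vi[1]      # relative velocity of j w.r.t. i
    d = math.hypot(dx, dy)
    v = math.hypot(rvx, rvy)
    cos_theta = (dx * rvx + dy * rvy) / (d * v)  # cosine of the angle between them
    dcpa = d * math.sqrt(max(0.0, 1 - cos_theta ** 2))   # Eq. (1)
    tcpa = -d * cos_theta / v                            # Eq. (2)
    return dcpa, tcpa

# Head-on closing along the x-axis: DCPA is zero and TCPA is positive.
dcpa, tcpa = dcpa_tcpa((0, 0), (5, 0), (1000, 0), (-5, 0))
```

A positive TCPA indicates the ships are still approaching their closest point; a negative value means it has already passed.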
+
+
+Fig. 3. DCPA and TCPA interpretation in graphics.
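The pairwise screening described in this subsection (co-occurrence time, closest passing distance, decreasing relative distance within the early-warning range, and DCPA/TCPA thresholds) can be sketched as follows. All threshold values and record fields here are illustrative assumptions, since the paper does not report concrete numbers:

```python
def is_encounter(track, d_warn=1852.0, dcpa_max=926.0, tcpa_max=20.0):
    """Judge whether a time-ordered ship-pair track satisfies the
    spatiotemporal proximity constraints. Each record is a dict with
    keys 'dist' (m), 'dcpa' (m), and 'tcpa' (min)."""
    i_min = min(range(len(track)), key=lambda i: track[i]["dist"])
    approach = track[: i_min + 1]               # samples up to the closest approach
    dists = [p["dist"] for p in approach]
    closing = all(d2 <= d1 for d1, d2 in zip(dists, dists[1:]))  # distance decreasing
    within = all(d <= d_warn for d in dists)                     # inside early warning
    cpa_ok = all(p["dcpa"] <= dcpa_max and 0 <= p["tcpa"] <= tcpa_max
                 for p in approach)                              # DCPA/TCPA thresholds
    return closing and within and cpa_ok

# A pair that closes within all thresholds, then opens again after the CPA.
approaching = [{"dist": 1500, "dcpa": 500, "tcpa": 15},
               {"dist": 1200, "dcpa": 400, "tcpa": 12},
               {"dist": 900, "dcpa": 300, "tcpa": 8},
               {"dist": 1100, "dcpa": 600, "tcpa": -5}]
# A pair that closes but would pass far apart (large DCPA throughout).
passing_far = [{"dist": 1500, "dcpa": 1500, "tcpa": 15},
               {"dist": 1300, "dcpa": 1500, "tcpa": 9}]
```

Only the segment up to the closest approach is screened, matching the text's requirement that the constraints hold "before approaching the closest passing distance."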
+
+§ C. ENCOUNTER SCENARIO TIME SLICE
+
+Ship encounter scenarios, as spatiotemporal sequence data, involve mutual interference between ships that varies over time. Classifying encounter scenarios therefore requires attention to both the spatial interference characteristics and the temporal evolution patterns of the ships. The first step in efficiently extracting these spatial and temporal features is to time-slice the scenarios and grid each slice, which maps the temporal evolution of the spatial interference characteristics into multi-time-window grids. Compared with the original trajectory image pixels, raster images contain richer information and are more conducive to the CAE's characterization of ship interactions in the encounter scenario.
+
+
+Fig. 4. Raster map generation and scene time slicing.
+
+Thus, this paper projects the original ship trajectories into two-dimensional matrices to generate trajectory raster images following the time sequence of the encounter scenarios, preserving the original spatiotemporal characteristics. To balance the information richness of the encounter-scene slices against their total number, the time window duration is set to 3 minutes and the window step to 1 minute. The procedure is depicted in Fig. 4.
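The slicing-and-gridding step can be sketched as follows, using the 3-minute window and 1-minute step set above; the grid resolution and bounding box are illustrative assumptions, and real raster cells would carry richer attributes than simple occupancy:

```python
def rasterize_windows(points, t_span, window=180, step=60, size=8,
                      bbox=(0.0, 0.0, 1.0, 1.0)):
    """Split trajectory points (t, x, y) into sliding time windows and
    rasterize each window onto a size x size occupancy grid.
    `t_span` = (t_start, t_end) of the scenario; times in seconds."""
    x0, y0, x1, y1 = bbox
    t_start, t_end = t_span
    grids = []
    t = t_start
    while t + window <= t_end:                   # one grid per sliding window
        grid = [[0] * size for _ in range(size)]
        for pt, px, py in points:
            if t <= pt < t + window:
                col = min(size - 1, int((px - x0) / (x1 - x0) * size))
                row = min(size - 1, int((py - y0) / (y1 - y0) * size))
                grid[row][col] = 1               # mark the visited cell
        grids.append(grid)
        t += step
    return grids

# A 5-minute scenario yields three overlapping 3-minute slices.
slices = rasterize_windows([(0, 0.1, 0.1), (150, 0.5, 0.5), (290, 0.9, 0.9)],
                           (0, 300))
```

Each grid in the returned sequence becomes one CAE input, and the sequence order preserves the temporal evolution for the LSTM.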
+
+§ D. FEATURE REPRESENTATION OF ENCOUNTER SCENARIOS
+
+To fully represent the spatial interaction features between ships from multi-time-window raster images, learn the contextual relationships between feature sequences, and uncover the temporal evolution patterns of the scenarios, we employ a multi-layer CAE neural network combined with an LSTM for unsupervised learning and feature representation. The CAE, with its convolutional and pooling layers, learns to identify local spatial interactions and patterns within each raster image [17]. Once spatial features are obtained, they are fed into the LSTM model, which captures the temporal evolution of these features over multiple time windows. The combination of CAE and LSTM enables a comprehensive representation of both the spatial interactions between ships and their dynamic changes over time.
+
+This study employs a CAE-based autoencoder architecture. Compared to traditional autoencoders, the CAE incorporates convolutional and pooling layers, allowing better extraction of local features related to ship spatial interference in the scene grid maps. As shown in Fig. 5, the CAE model consists of three convolutional layers, three max-pooling layers, and fully connected layers. The encoder transforms input scene grid maps into low-dimensional feature vectors, thereby representing the spatial features of encounter scenarios. The decoder uses ReLU as the activation function to reconstruct the scene grid maps from the low-dimensional feature vectors. Additionally, to enhance the feature representation capability of the CAE, this study introduces a loss function sensitive to image structure, based on the structural similarity (SSIM) index, to ensure the accuracy of the extracted features. To further elucidate the working mechanism of the CAE model, the operations of the convolutional and fully connected layers are described as follows:
+
+$$
+{x}_{k}^{l} = {A}_{E}\left( {{f}_{k}^{l} \odot {x}_{k}^{\left( l - 1\right) } + {b}_{k}^{l}}\right) \tag{3}
+$$
+
+$$
+Y = \mathcal{H}\left( x\right) = {wx} + \beta \tag{4}
+$$
+
+where $l$ represents the layer number, $\odot$ denotes the convolution operation, ${f}_{k}^{l}$ represents the convolution kernel, ${x}_{k}^{l - 1}$ represents the feature map, ${b}_{k}^{l}$ is the bias term, and $Y$ is the feature vector with final output dimension $L$ . Training minimizes the error between the decoder's reconstruction $\widetilde{x}$ of the output and the original input $x$ . The SSIM-based loss function is defined as follows:
+
+$$
+\mathcal{F}\left( {x,\widetilde{x}}\right) = 1 - \frac{1}{M}\mathop{\sum }\limits_{{m = 1}}^{M}\operatorname{SSIM}\left( {{x}_{m},{\widetilde{x}}_{m}}\right) \tag{5}
+$$
+
+$$
+\operatorname{SSIM}\left( {{x}_{m},{\widetilde{x}}_{m}}\right) = \frac{\left( {2{\mu }_{{x}_{m}}{\mu }_{{\widetilde{x}}_{m}} + {c}_{1}}\right) \left( {2{\sigma }_{{x}_{m}{\widetilde{x}}_{m}} + {c}_{2}}\right) }{\left( {{\mu }_{{x}_{m}}^{2} + {\mu }_{{\widetilde{x}}_{m}}^{2} + {c}_{1}}\right) \left( {{\sigma }_{{x}_{m}}^{2} + {\sigma }_{{\widetilde{x}}_{m}}^{2} + {c}_{2}}\right) } \tag{6}
+$$
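For reference, the SSIM index of Eq. (6) and the loss of Eq. (5) can be computed globally over an image pair. A minimal pure-Python sketch (the constants c1 and c2 take the conventional values for a unit dynamic range, which the paper does not specify):

```python
def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Global structural similarity of two equal-length flat images
    with pixel values in [0, 1], following Eq. (6)."""
    n = len(x)
    mu_x, mu_y = sum(x) / n, sum(y) / n                      # means
    var_x = sum((a - mu_x) ** 2 for a in x) / n              # variances
    var_y = sum((a - mu_y) ** 2 for a in y) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n  # covariance
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def ssim_loss(x, y):
    """Reconstruction loss of Eq. (5) for a single image pair."""
    return 1.0 - ssim(x, y)

# Identical images give SSIM = 1 and loss = 0; dissimilar ones score lower.
img = [0.0, 0.5, 1.0, 0.25]
```

In practice SSIM is computed over local windows and averaged, as Eq. (5) does over the M window positions; the global version above shows the structure of a single term.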
+
+Fig. 5. The architecture of convolutional autoencoder.
+
+LSTM is widely used for learning persistent features in time series data and can effectively capture dependencies within a sequence [18]. Therefore, the LSTM is chosen to represent temporal feature evolution. The LSTM primarily consists of three gating units: the forget gate, the input gate, and the output gate, as shown in Fig. 6. The forget gate controls the transmission or forgetting of information, as described by Equation (7):
+
+$$
+{f}_{t} = \sigma \left( {{W}_{f} \cdot \left\lbrack {{h}_{t - 1},{x}_{t}}\right\rbrack + {b}_{f}}\right) \tag{7}
+$$
+
+where $W$ represents the weight, $b$ the bias, $\left\lbrack {{h}_{t - 1},{x}_{t}}\right\rbrack$ the vector formed by concatenating the hidden-layer output ${h}_{t - 1}$ of the previous LSTM module with the input ${x}_{t}$ of the current module, and $\sigma \left( \cdot \right)$ the sigmoid function.
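A minimal pure-Python sketch of Eq. (7), with one weight row per hidden unit (the weights here are illustrative, not trained values):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forget_gate(h_prev, x_t, W_f, b_f):
    """Eq. (7): f_t = sigma(W_f . [h_{t-1}, x_t] + b_f), computed
    element-wise for each hidden unit."""
    concat = h_prev + x_t                        # the concatenation [h_{t-1}, x_t]
    return [sigmoid(sum(w * z for w, z in zip(row, concat)) + b)
            for row, b in zip(W_f, b_f)]

# One hidden unit with zero weights and bias: the gate sits at 0.5 (half-open).
f_t = forget_gate([0.0], [0.0, 0.0], [[0.0, 0.0, 0.0]], [0.0])
```

The input and output gates follow the same affine-plus-sigmoid pattern, differing only in their weight matrices and in how their outputs are combined with the cell state.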
+
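Equation (7) can be sketched in plain numpy as follows; the dimensions below are illustrative assumptions (hidden size 3 and input size 8, matching Table I and the CAE latent dimension), and in practice $W_f$ and $b_f$ are learned.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forget_gate(h_prev, x_t, W_f, b_f):
    """Eq. (7): f_t = sigmoid(W_f . [h_{t-1}, x_t] + b_f)."""
    concat = np.concatenate([h_prev, x_t])  # [h_{t-1}, x_t]
    return sigmoid(W_f @ concat + b_f)

# illustrative sizes: hidden dim 3, input dim 8, so W_f is 3 x 11
rng = np.random.default_rng(0)
f_t = forget_gate(rng.normal(size=3), rng.normal(size=8),
                  rng.normal(size=(3, 11)), np.zeros(3))
```

Each entry of `f_t` lies in (0, 1) and scales how much of the previous cell state is kept.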
+
+Fig. 6. LSTM unit structure diagram.
+
+§ E. CLUSTERING OF ENCOUNTER SCENARIOS
+
+Through the method above, feature vectors can describe the intricate spatial relationships and temporal evolution of ship encounter events. The similarity between ship encounter scenarios is determined by calculating the distance between the corresponding feature vectors. Once the distances are obtained, a clustering algorithm classifies the scenarios, and the results are evaluated with suitable metrics to obtain the final classification. Hierarchical clustering is simple and widely used, and its dendrogram reflects the step-by-step partitioning of the objects [19], [20]. Therefore, hierarchical clustering is chosen as the clustering algorithm for this study's encounter scenarios.
+
+In hierarchical clustering, it is difficult to directly select the best clustering result. Therefore, an indicator is needed to select an appropriate number of clusters. In this paper, the value of $k$ is adaptively determined using the silhouette coefficient ( ${SC}$ ). ${SC}$ is defined from the mean distance between a point and the other points in its own cluster, and the mean distance between that point and the points of the adjacent clusters. The better the clustering, the higher the ${SC}$ value. Equation (8) gives the ${SC}$ calculation:
+
+$$
+{SC}\left( i\right) = \frac{{CTb}\left( i\right) - {CTa}\left( i\right) }{\max \{ {CTa}\left( i\right) ,{CTb}\left( i\right) \} } \tag{8}
+$$
+
+Here, ${CTa}\left( i\right)$ is the average distance between scenario $i$ and the other scenarios in the same cluster, whereas ${CTb}\left( i\right)$ is the minimum average distance between scenario $i$ and the scenarios of the other clusters. The silhouette coefficient ranges from -1 to 1, with higher values indicating better clustering performance.
+
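Equation (8) can be computed directly from the pairwise distances between feature vectors. The sketch below is a generic plain-numpy silhouette implementation, not the paper's code, and it assumes the feature vectors and cluster labels are already available.

```python
import numpy as np

def mean_silhouette(X, labels):
    """Mean of SC(i) = (CTb(i) - CTa(i)) / max(CTa(i), CTb(i)) over all points."""
    X, labels = np.asarray(X, float), np.asarray(labels)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    scores = []
    for i in range(len(X)):
        same = labels == labels[i]
        same[i] = False
        a = d[i, same].mean() if same.any() else 0.0          # CTa: intra-cluster
        b = min(d[i, labels == c].mean()                       # CTb: nearest other cluster
                for c in np.unique(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```

Running this for each candidate $k$ and keeping the value with the highest mean silhouette reproduces the adaptive selection described above.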
+§ III. CASE STUDY
+
+§ A. DATA COLLECTION AND PROCESSING
+
+This research uses data from November 1, 2018, to November 30, 2018, for the outside waters of Ningbo-Zhoushan Port. As shown in Fig. 7, the targeted area is situated between latitudes 29°30′N-29°49′N and longitudes 122°20′E-122°60′E. To guarantee the precision of the ship encounter scenario analysis, data from specific mission vessels, including tugboats, fishing boats, and anchored ships, were removed. The remaining data then underwent preprocessing in preparation for the subsequent experiments. The trajectory distribution makes it evident that ship interactions are frequent in the research area.
+
+
+Fig. 7. The location of the study area.
+
+§ B. ANALYSIS AND VALIDATION OF SCENARIO EXTRACTION RESULTS
+
+Three sample ship encounter scenarios are shown in Figure 8 to verify the extracted ship encounter scenarios. Each scenario is explained with four panels: the first shows the encounter process from start to finish using the plotted trajectories, with the ship icon indicating the end state of the interaction. The remaining three panels (a), (b), and (c) show the evolution of relative distance, DCPA, and TCPA between the OS and the TSs during the encounter. In these cases, the DCPA stays small for a while, the TCPA changes from positive to negative, and the relative distance first drops to a very low value before gradually increasing. These evolution patterns align with real-world encounter experience and thereby validate the extracted scenarios. The trends of relative distance, DCPA, and TCPA are consistent across all extracted scenarios.
+
+
+Fig. 8. Encounter situations involving varying numbers of ships and the development of their features.
+
+Due to computational cost constraints, experimenting with all ship encounter scenarios is impractical. Therefore, common encounter scenarios in maritime navigation must be selected as experimental data. As shown in Figure 9, the extracted encounter scenarios were first categorized and statistically examined according to the number of ships involved. According to the classification results, two-ship encounters make up around half of all extracted scenarios, making them the most frequent. As the number of ships involved increases, the number of scenarios gradually decreases, with a substantial decline once the number of ships exceeds five.
+
+
+Fig. 9. Scenario classification outcomes depending on the number of ships.
+
+To ensure the experimental data is representative while saving computational costs, two-ship and three-ship encounter scenarios are chosen as the experimental dataset. This selection covers common two-ship encounters and the more complex multi-ship encounters, which occur frequently in actual maritime navigation. The durations of the two types of encounter scenarios in the experimental dataset were then statistically analyzed, and Figure 10 displays the results. The analysis revealed that the proportions of two-ship and three-ship scenarios lasting more than 10 minutes were 84.6% and 90.1%, respectively, so this data segment is representative and provides an important reference for the experimental analysis. Based on maritime navigation experience, scenarios lasting 10-20 minutes were chosen as experimental data; this ensures meaningful ship interactions while preventing the dataset from becoming overly large. Therefore, two-ship and three-ship encounter scenarios lasting 10-20 minutes were chosen as the final experimental dataset.
+
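The selection rule above amounts to a simple filter. In the sketch below, the record layout (`n_ships`, `duration_s`) is a hypothetical stand-in for however the pipeline actually stores scenarios.

```python
def select_experimental_scenarios(scenarios):
    """Keep two- and three-ship encounters lasting 10-20 minutes."""
    return [s for s in scenarios
            if s["n_ships"] in (2, 3) and 600.0 <= s["duration_s"] <= 1200.0]

# hypothetical records: number of ships and duration in seconds
demo = [{"n_ships": 2, "duration_s": 900.0},   # kept
        {"n_ships": 5, "duration_s": 900.0},   # too many ships
        {"n_ships": 3, "duration_s": 300.0}]   # too short
```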
+
+Fig. 10. Duration statistics for encounter scenarios.
+
+§ C. EXPERIMENTAL SOFTWARE ENVIRONMENT AND MODEL TRAINING
+
+Python was chosen for the experimental software environment, with the PyTorch deep learning framework used to train the model. The hyperparameter settings are shown in Table I, where Adam is the adaptive moment estimation optimizer, Batch size is the number of samples trained in each batch, Epoch is the number of training epochs, and Num Hidden Unit is the hidden layer dimension of the LSTM.
+
+TABLE I. HYPERPARAMETER SETTINGS
+
+| Parameter | Value |
+| --- | --- |
+| Optimizer | Adam |
+| CAE hidden layer dimensions | 8 |
+| Batch size | 128 |
+| Learning Rate | 0.001 |
+| Epoch | 760 |
+| Num Hidden Unit | 3 |
+
+A total of 500 scenarios were selected from the experimental dataset for model training. First, the encounter scenarios were time-sliced, resulting in 7,366 and 7,261 scenario grid images for the two-ship and three-ship datasets, respectively. These images were then input into the CAE to extract spatial features. After 760 training epochs, the change in the loss function value with the number of training epochs is shown in Figure 11. The training error converges to a very small value, indicating that the trained CAE can reconstruct the input data from the latent-layer features. To demonstrate this, the original scenario images and their reconstructed versions are shown in Figure 12: the first row displays the original ship encounter scenarios, while the second row shows the reconstructions. The structural similarity between the original and reconstructed scenarios demonstrates that the CAE model captures good low-dimensional representations and reconstructs high-quality images from them. Finally, the feature matrix generated by the CAE is input into the LSTM model to learn the temporal evolution of the spatial features, outputting feature vectors that represent the scenarios.
+
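As a concrete sketch of the network, the PyTorch autoencoder below follows the kernel sizes in Fig. 5 (9x9, 7x7, and 5x5 convolutions with 2x2 pooling) and the 8-dimensional latent layer from Table I; the channel counts and the single-channel 64x64 input size are assumptions, since the manuscript does not state them.

```python
import torch
import torch.nn as nn

class CAE(nn.Module):
    """Sketch of the convolutional autoencoder of Fig. 5 (sizes partly assumed)."""
    def __init__(self, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 9, padding=4), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 16, 7, padding=3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 8, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.to_latent = nn.Linear(8 * 8 * 8, latent_dim)    # fully connected layer
        self.from_latent = nn.Linear(latent_dim, 8 * 8 * 8)
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.ConvTranspose2d(8, 16, 5, padding=2), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.ConvTranspose2d(16, 16, 7, padding=3), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.ConvTranspose2d(16, 1, 9, padding=4), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)                      # (B, 8, 8, 8) for 64x64 input
        latent = self.to_latent(z.flatten(1))    # (B, latent_dim) feature vector
        y = self.from_latent(latent).view(-1, 8, 8, 8)
        return self.decoder(y), latent
```

The `latent` output is the per-time-slice spatial feature vector that is subsequently fed to the LSTM.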
+
+Fig. 11. Loss during the training of CAE.
+
+
+Fig. 12. Original and reconstructed encounter scenario images of CAE
+
+§ D. CLUSTERING AND EVALUATION
+
+The ship encounter scenarios were represented by feature vectors using the CAE-LSTM approach. Subsequently, hierarchical clustering was applied to these feature vectors to classify the ship encounter scenarios. ${SC}$ was used to determine the ideal number of clusters and to evaluate the clustering effectiveness. The cluster count was varied from two to fifteen, and the corresponding ${SC}$ values are shown in Figure 13.
+
+
+Fig. 13. Variation of silhouette coefficient values with the number of clusters
+
+Figure 13 shows that both datasets reach their highest silhouette coefficient with two clusters. However, too few clusters must be avoided to ensure a detailed separation of the microscopic aspects of ship interactions in different encounter scenarios. Therefore, 5 and 4 were chosen as the final numbers of clusters for the two datasets, respectively. These values correspond to inflection points of the silhouette coefficient for both datasets: beyond them, the silhouette coefficient generally declines as the number of clusters increases, indicating deteriorating clustering performance.
+
+After clustering the encounter scenarios, the frequency and duration distributions of each cluster are shown in Figures 14 and 15, respectively. For further analysis, the clusters with the highest and lowest frequencies in each dataset were selected for feature analysis.
+
+
+Fig. 14. Frequency distribution of encounter scenarios.
+
+
+Fig. 15. The duration distribution of each cluster of encounter scenarios.
+
+The interaction between ship trajectories and the evolution of two features, relative distance and TCPA, is shown in Figures 16 and 17. The first row of three images shows the complete trajectories of three encounter scenarios, where " $\circ$ " and " $\times$ " mark the start and end positions of the encounter, respectively. The other two rows show the relative distance and TCPA evolution of the corresponding scenarios. The first two columns belong to the same cluster and illustrate the common characteristics of its scenarios; the third column represents a different cluster to highlight the distinctions.
+
+For the two-ship encounter scenarios, Cluster 4 features ships moving in opposite directions, showing a head-on encounter with the relative distance initially decreasing and then increasing, and the TCPA exhibiting a linear decreasing trend. Cluster 5, on the other hand, consists of ships moving in the same direction, with the relative distance remaining constant and TCPA showing a decreasing trend but with significant fluctuations. For the three-ship encounter scenarios, Cluster 1 involves one target ship crossing paths with the OS, while the other target ship encounters head-on. The relative distances for both target ships initially decrease and then increase, with the increase varying in magnitude. The TCPA shows a decreasing trend, with one ship's TCPA decreasing linearly and the other exhibiting noticeable fluctuations. In contrast, Cluster 3 features both target ships crossing paths with the OS. Although the relative distance trend is similar to Cluster 1, the ships in Cluster 3 are moving in the same direction, resulting in consistent changes in relative distance and TCPA fluctuating consistently before reaching zero. In summary, the interaction of trajectories, the evolution of features, and the duration within the same cluster exhibit consistent patterns. Different clusters, however, show distinctly different patterns.
+
+
+Fig. 16. Trajectory interaction and feature evolution process of the two-ship encounter scenarios.
+
+
+Fig. 17. Trajectory interaction and feature evolution process of the three-ship encounter scenarios.
+
+Through the above analysis, the ship encounter scenario clustering method proposed in this paper effectively classifies different scenarios. The visual verification of trajectory interactions and feature evolution during the encounter process confirms the validity of this classification method. It demonstrates the various interaction patterns and contexts among multiple ships in complex navigable waters, aiding in distinguishing and understanding different types of ship encounter scenarios.
+
+§ IV. CONCLUSION
+
+This paper proposes a method for classifying ship encounter scenarios. First, ship encounter scenarios are segmented using time windows, and a convolutional autoencoder generates spatial feature vectors for each time slice. Next, these spatial feature vectors are sequentially input into a long short-term memory (LSTM) network to produce temporal feature vectors. Finally, hierarchical clustering groups the feature vectors based on their spatiotemporal attributes. Experimental results demonstrate that this method effectively classifies encounter scenarios involving various numbers of ships. Visualization of the interaction process and the dynamic evolution of features between ships confirms the classification's effectiveness.
+
+§ V. FUTURE WORK
+
+In the future, we plan to make improvements in the following two directions:
+
+1. Increase the size of the experimental data sample and optimize the scenario construction method to develop a multi-ship encounter scenario library tailored for complex navigational waters. Additionally, establish a query index based on ship scenarios.
+
+2. Improve the classification method of ship encounter scenarios and enrich the dynamic characterization of encounter scenarios; design relevant application algorithms based on the scenario library, such as scenario prediction, risk assessment, and ship collision avoidance algorithms, etc., and further study the characterization of multi-ship encounter scenarios and the evolution law in depth.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/FjSPgP2m1X/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/FjSPgP2m1X/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..9ba25a5f6cb56894a1fd03f72863e4fe0946443f
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/FjSPgP2m1X/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,349 @@
+# Impacts of speed and spacing on resistance in ship formations
+
+Linying Chen
+
+State Key Laboratory of Maritime
+
+Technology and Safety,
+
+School of Navigation, Wuhan
+
+University of Technology
+
+Wuhan, China
+
+LinyingChen@whut.edu.cn
+
+Linhao Xue
+
+School of Navigation, Wuhan
+
+University of Technology
+
+Wuhan, China
+
+xue_lh@whut.edu.cn
+
+Yangying He
+
+School of Intelligent Sports
+
+Engineering, Wuhan Sports
+
+University
+
+Wuhan, China
+
+yangyinghe@whsu.edu.cn
+
+Pengfei Chen
+
+State Key Laboratory of Maritime
+
+Technology and Safety,
+
+School of Navigation, Wuhan
+
+University of Technology
+
+Wuhan, China
+
+Chenpf@whut.edu.cn
+
+Junmin Mou
+
+State Key Laboratory of Maritime Technology and Safety, School of Navigation, Wuhan University of Technology Wuhan, China Moujm@whut.edu.cn
+
+Yamin Huang
+
+State Key Laboratory of Maritime
+
+Technology and Safety,
+
+School of Navigation, Wuhan
+
+University of Technology
+
+Wuhan, China
+
+YaminHuang@whut.edu.cn
+
+Abstract-Sailing in formation has the benefits of drag reduction. In current studies of hydrodynamic analysis of ship formations, the impacts of speed and spacing between adjacent ships on total resistance are seldom considered. To estimate the weight of different factors in formation on total resistance variation, the impacts of speed, longitudinal distance, and transverse locations on the observed total resistance of formations are investigated by analyzing hydrodynamic data in tandem, parallel, and triangle formation. The relation between resistance variation and speed is revealed. The regression analysis results on different formations indicate the differences between longitudinal spacing and transverse impacts. The regression formulation can be adopted to predict total resistance in formations.
+
+Keywords-drag reduction, formation, regression analysis
+
+## I. INTRODUCTION
+
+Nowadays, saving energy, reducing atmospheric pollutant emissions, and lowering carbon emissions are key concerns in the shipping industry. Increasingly, scholars are focusing on reducing ship resistance to save energy. Inspired by observing and analyzing duck flock swimming behavior [1], scholars have drawn insights from biomimicry and begun researching drag reduction through ship formations.
+
+Chen et al. [2] studied the wave interference characteristics of two ships sailing in parallel and following each other and a three-ship "V" formation in shallow water using the bare hull of Series 60. The results indicate that when the two ships follow each other, the wave resistance for both ships decreases. In a three-ship "V" formation, the waves from the trailing ship provide additional thrust, significantly reducing the wave resistance of the leading ship. However, the additional reactive force from the wave crests of the leading ship increases the resistance of the trailing ship. Zheng et al. [3] used the second-order source method based on the Dawson method to calculate the wave resistance of four Wigley ships in three common formations: single-ship, two-ship formation, and three-body ship formation. They identified optimal ship formations for drag reduction in different speed ranges, and adjusting the relative positions of the ships in the Wigley formation can achieve drag reduction. Qin Yan et al. [4] first performed a numerical analysis of the drag characteristics of a single Wigley ship at different speeds. They compared the results with the hydrodynamic performance of a "train" formation at various longitudinal spacings. The analysis showed that, under all conditions, the total drag of the train formation was about ${10}\%$ to ${20}\%$ less than that of a single ship. For lower speeds, reducing the longitudinal distance can achieve drag reduction, but at higher speeds, increasing the longitudinal spacing helps maintain drag reduction. Liu et al. [5] used CFD to study the drag reduction effects of a KCS ship model in a twin-ship "train" formation at different speeds, showing that the drag reduction for the following ship could reach up to 24.3%. He et al. [6][7] focused on the hydrodynamic performance of three-ship formations at low speeds, analyzing linear, parallel, and triangular formations with equal and unequal spacing. 
The optimal ship formation configuration for drag reduction under each formation was ultimately identified. A regression model [8] was also developed to predict total resistance in different formation systems. Meanwhile, machine learning methods have been applied to vehicle platooning problems to predict the drag of each vehicle in platoons of two to four vehicles. In summary, sailing in formation has the potential for drag reduction. Existing work [9][10][11] mainly focuses on observing drag reduction benefits at different speeds and formation configurations. However, the impact of individual factors on the resistance reduction of ship formations remains unclear, and further research is needed to understand how different factors affect the total drag in ship formations.
+
+Therefore, this paper aims to clarify the direct relationship between speed, spacing, and total resistance in ship formations. The primary innovation of this paper lies in employing regression analysis to quantitatively assess the ship formation CFD database, aiming to determine the extent to which speed and distance influence the resistance encountered during ship formation navigation.
+
+The main contributions of the paper are as follows:
+
+---
+
+National Natural Science Foundation of China
+
+---
+
+- Quantitative analysis and estimation of the effects of factors (speed, longitudinal distances, and transverse locations) on total resistance in formations are provided.
+
+- A regression model is established to predict the total resistance of the multi-ship formation system.
+
+Subsequently, the datasets investigated in our research are introduced in Section II. Section III explains the proposed research approach. The analysis results for the impacts of the different factors are presented, and the regression model is built, in Section IV. Finally, Section V concludes with the main findings and recommendations for further research.
+
+## II. DATA DESCRIPTION
+
+## A. Source of data
+
+In this research, the dataset consists entirely of CFD simulation data. All simulations were performed with the commercial software STAR-CCM+ V13.06. Verification and validation were carried out before the systematic simulations, ensuring the accuracy of the CFD results.
+
+## B. Studied ship in dataset
+
+In our CFD simulation conditions, the three-ship isomorphic formation is composed of three identical bare hulls of the full-swing tugboat 'WillLead I'. The parameters of the ship are shown in Table I, and the side view is presented in Figure 1.
+
+
+
+Fig. 1. Side view of the bare hull of 'Willlead I'
+
+TABLE I. PARAMETERS OF 'WILLLEAD I'
+
+| | $\lambda$ | ${\mathrm{L}}_{\mathrm{{OA}}}$ (m) | ${\mathrm{L}}_{\mathrm{{PP}}}$ (m) | B (m) | T (m) | ${\mathrm{A}}_{\mathrm{S}}$ (m²) |
| --- | --- | --- | --- | --- | --- | --- |
| Full scale | 1.00 | 34.95 | 30.00 | 10.50 | 4.00 | 432.41 |
| Model scale | 17.475 | 2.00 | 1.72 | 0.674 | 0.211 | 0.672 |
+
+## C. Data composition
+
+The dataset comprises CFD simulation results for four formation configurations: tandem, parallel, right triangle, and general triangle. The longitudinal distances $\left( {{\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2}}\right)$ and transverse locations $\left( {{\mathrm{{SP}}}_{1},{\mathrm{{SP}}}_{2}}\right)$ also vary. The formation configurations are illustrated in Figure 2, and the ranges of ${\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2},{\mathrm{{SP}}}_{1},{\mathrm{{SP}}}_{2}$ are given in Table II. In tandem formation, ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ are zero; in parallel formation, ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ are zero. In the right triangle formation, the bow of ship 2 aligns with ship 3, and the centerline of ship 1 aligns with ship 2. In the general triangle formation, the bow of ship 1 aligns with ship 3.
+
+TABLE II. RANGE OF ${\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2},{\mathrm{{SP}}}_{1},{\mathrm{{SP}}}_{2}$
+
+| Configuration | ${\mathrm{{ST}}}_{1}$ (m) | ${\mathrm{{ST}}}_{2}$ (m) | ${\mathrm{{SP}}}_{1}$ (m) | ${\mathrm{{SP}}}_{2}$ (m) |
| --- | --- | --- | --- | --- |
| Tandem | 0.25-2.0 | 0.25-2.0 | / | / |
| Parallel | / | / | 0.1685-2.022 | 0.337-2.696 |
| Right triangle | 0.25-1.0 | 0.25-1.0 | 0.1685-0.674 | 0.1685-0.674 |
| General triangle | 0.25-1.0 | 0.25-1.0 | 0.1685 | 0.337-0.5055 |
+
+
+
+Fig. 2. Illustration of formation configurations
+
+## III. METHODOLOGY
+
+This research uses CFD data to investigate the influence of speed and spacing between adjacent ships in formations. This section introduces the dimensionless coefficients of the formation and the coordinate system, followed by the data analysis method, including data preparation.
+
+## A. Dimensionless coefficients and coordinate system
+
+The coordinate system used to describe the motion and resistance of the formation is presented in Figure 3. The space-fixed coordinate system ${\mathrm{O}}_{\mathrm{o}} - {\mathrm{X}}_{\mathrm{o}}{\mathrm{Y}}_{\mathrm{o}}$ and the ship-fixed coordinate system O-xy together constitute the global coordinate system. The space-fixed system describes the motion of the formation, and the ship-fixed system describes the resistance of each ship in the formation. In the space-fixed system, the ${\mathrm{X}}_{\mathrm{o}}$ direction points to true north. In the ship-fixed system, the $\mathrm{x}$ direction points toward the bow of the ship, and the $\mathrm{y}$ direction points to the starboard side. The directions of the dimensionless resistance coefficients, covering drag and lateral force, are also shown in Figure 3. ${\mathrm{X}}^{\prime }$ is the dimensionless coefficient of longitudinal resistance; its direction, from bow to stern, is opposite to the $\mathrm{x}$ direction. ${\mathrm{Y}}^{\prime }$ is the dimensionless coefficient of lateral force; its direction, from the port side to the starboard side, agrees with the $\mathrm{y}$ direction. The total dimensionless longitudinal resistance coefficient ${\mathrm{X}}_{\text{total }}^{\prime }$ is obtained by summing ${\mathrm{X}}^{\prime }$ over the ships in the formation; similarly, the total dimensionless lateral force coefficient ${\mathrm{Y}}_{\text{total }}^{\prime }$ is obtained by summing ${\mathrm{Y}}^{\prime }$ over the ships. The equations for ${\mathrm{X}}_{\text{total }}^{\prime }$ and ${\mathrm{Y}}_{\text{total }}^{\prime }$ are as follows:
+
+$$
+{X}_{\text{total }}^{\prime } = \mathop{\sum }\limits_{{i = 1}}^{3}{X}_{i}^{\prime } \tag{1}
+$$
+
+$$
+{Y}_{\text{total }}^{\prime } = \mathop{\sum }\limits_{{i = 1}}^{3}{Y}_{i}^{\prime } \tag{2}
+$$
+
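Read concretely, equations (1)-(2) are plain sums over the per-ship coefficients; a trivial sketch:

```python
def total_coefficients(per_ship):
    """per_ship: [(X'_i, Y'_i)] for the three ships; returns (X'_total, Y'_total)."""
    x_total = sum(x for x, _ in per_ship)
    y_total = sum(y for _, y in per_ship)
    return x_total, y_total
```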
+In the research, the fleet is assumed to sail in calm water. Therefore, the impact of wind and current is not considered.
+
+
+
+Fig. 3. Illustration of the coordinate system
+
+## B. Data preparation
+
+Since the CFD simulation via STAR-CCM+ V13.06 requires numerical and physical layouts, the longitudinal distances $\left( {{\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2}}\right)$ and transverse locations $\left( {{\mathrm{{SP}}}_{1},{\mathrm{{SP}}}_{2}}\right)$ mentioned in Section II only describe the geometric relationship between neighboring ships. To help the regression analysis learn the characteristics of the data, the longitudinal and transverse locations in the dataset are rearranged with signs: ${\mathrm{{ST}}}_{\mathrm{i}}$ takes the positive geometric value when ship $i$ is in front of ship $i+1$ and the negative geometric value when it is behind ship $i+1$; ${\mathrm{{SP}}}_{\mathrm{i}}$ takes the positive geometric value when ship $i$ is on ship $i+1$'s port side and the negative geometric value when ship $i$ is on ship $i+1$'s starboard side.
+
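The sign convention can be sketched as follows, with boolean flags as an assumed encoding of the geometric relationships described above:

```python
def signed_locations(st_geom, sp_geom, is_ahead, is_port_side):
    """Signed ST_i / SP_i for ship i relative to ship i+1 (Sec. III-B)."""
    st = st_geom if is_ahead else -st_geom       # positive when ship i leads ship i+1
    sp = sp_geom if is_port_side else -sp_geom   # positive on ship i+1's port side
    return st, sp
```

For example, a ship leading by 0.5 m on the starboard side at 0.337 m lateral offset maps to (0.5, -0.337).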
+## C. Data analysis method
+
+Figure 4 presents the steps of the regression analysis method.
+
+
+
+Fig. 4. Flow diagram of regression analysis.
+
+The hydrodynamic dataset of the ship formation is divided into different subsets to analyze the effects of speed and spacing between ships. Both the longitudinal distances and the lateral locations are considered for their impact on the total resistance of the formation system. Total resistance variations across formations at different speeds have been observed, but the direct relationship between total resistance and speed has not yet been revealed; it is expected to be found using the tandem formation dataset. For the quantitative analysis of the speed impacts on total drag in tandem formation, the tandem formation dataset is split into subsets of different ${\mathrm{{ST}}}_{1}$ distances. Then, a correlation analysis between total resistance and speed is performed to quantify the strength of the correlation and determine which speed criterion best characterizes the variations in total resistance.
+
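The per-subset correlation step can be sketched with a plain Pearson coefficient. This is a generic implementation with an assumed `(ST1, speed, X'_total)` row layout, not the paper's analysis code.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between speed and total resistance samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def correlations_by_st1(rows):
    """rows: [(st1, speed, x_total)] -> {st1: r}; the field layout is assumed."""
    groups = {}
    for st1, v, x in rows:
        groups.setdefault(st1, []).append((v, x))
    return {st1: pearson_r([v for v, _ in g], [x for _, x in g])
            for st1, g in groups.items()}
```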
+Three steps are taken to quantify the impacts of longitudinal spacing and lateral locations. Firstly, the dataset is divided into six subsets based on different speeds. Each subset is further categorized into tandem formation, parallel formation, and triangle formation. After that, regression analysis is conducted on subsets of total resistance data at uniform speeds. The results will reveal if the impacts of ST and SP differ across various fleet speeds. Finally, overall functions will be defined to describe ST and SP impacts, incorporating speed variations, with coefficients estimated from the entire dataset.
+
+After correlation analysis with different factors, a model for the formation system's total drag regression formulation is developed, including the five features: speed, ${\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2},{\mathrm{{SP}}}_{1}$ , and ${\mathrm{{SP}}}_{2}$ . Multivariate polynomial and ridge regression methods are combined to build a regression model. Polynomial regression is a method of regression analysis based on polynomial functions for fitting non-linear relationships in data. Compared with linear regression, polynomial regression could model the non-linear characteristics of the data by introducing polynomial terms, thus increasing the flexibility and applicability of the model. In practice, data has many features, and polynomial regression for a single feature performs poorly on fitting data with many features. Thus, multivariate polynomial regression is used in this study to fit the total resistance dataset of ship formations.
+
+In practical applications of multivariate polynomial regression, the polynomial degree must be chosen carefully. If the degree is too low, the fit may be poor; if it is too high, the model may overfit, fitting noise in the data rather than the underlying trends. To address potential overfitting and improve the accuracy of the fit, this study combines ridge regression with multivariate polynomial regression to build the regression model. Ridge regression is an improved least-squares estimation method that addresses multicollinearity by introducing an L2-norm penalty term, thereby enhancing model stability and generalization. The penalty term is $\lambda$ times the sum of the squares of all regression coefficients, where $\lambda$ is the penalty coefficient. Combining ridge regression with multivariate polynomial regression controls the complexity of the model and reduces the risk of overfitting; this is particularly beneficial when input features are highly correlated or when the condition number of the data matrix is high. Such stability also mitigates numerical issues that may arise in multivariate polynomial regression, thereby enhancing the reliability of the model.
+
+## IV. RESULTS AND DISCUSSION
+
In this section, the impacts of speed, longitudinal spacing, and transverse spacing on total resistance are analyzed, and the final regression model is estimated.
+
+## A. Variation of drag due to speed
+
To examine the relationship between speed and total resistance, the total resistance of the formation is plotted against speed in Figures 5 to 9, which depict tandem formations under different longitudinal spacings ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$. For comparison, the combined resistance experienced by three individual ships sailing alone at the same speeds is also provided.
+
The blue dots in each graph represent the total resistance experienced by the formation system, while the red line indicates the combined resistance experienced by three individual ships sailing alone at the corresponding speeds. The red line makes it possible to judge whether a three-ship tandem formation achieves a resistance gain compared to three ships sailing individually. When ${\mathrm{{ST}}}_{1}$ is set to ${0.25}{\mathrm{\;L}}_{\mathrm{{OA}}}$ and ${2.0}{\mathrm{\;L}}_{\mathrm{{OA}}}$, the resistance of the 'WillLead I' hulls decreases as ship speed increases, both for ships sailing individually and for ships sailing in formation. The formation system also benefits from resistance gains, with the maximum gain occurring at a speed of ${0.212}\mathrm{\;m}/\mathrm{s}$, reaching up to ${4.85}\%$ resistance reduction.
+
When ${\mathrm{{ST}}}_{1}$ is set to ${0.5}{\mathrm{\;L}}_{\mathrm{{OA}}}$, the total resistance observed during sailing in formation again decreases as speed increases. However, the formation system gains no resistance benefit; instead, it experiences resistance amplification, with the maximum increase reaching ${119.3}\%$.
+
When ${\mathrm{{ST}}}_{1}$ is set to ${1.0}{\mathrm{\;L}}_{\mathrm{{OA}}}$ and ${1.5}{\mathrm{\;L}}_{\mathrm{{OA}}}$, the formation system experiences resistance gains, but these benefits gradually decrease as ship speed increases. Additionally, when ${\mathrm{{ST}}}_{2}$ is smaller than ${\mathrm{{ST}}}_{1}$, the resistance benefits of the formation system nearly disappear as the ship speed increases to 0.424 m/s.
+
+
+
+
+
+
+Fig. 5. Variation of resistance coefficient with speed when ${\mathrm{{ST}}}_{1} = {0.25}{\mathrm{\;L}}_{\mathrm{{OA}}}$
+
+
+
+
+
+Fig. 6. Variation of resistance coefficient with speed when ${\mathrm{{ST}}}_{1} = {0.5}{\mathrm{\;L}}_{\mathrm{{OA}}}$
+
+
+
+
+
+Fig. 7. Variation of resistance coefficient with speed when ${\mathrm{{ST}}}_{1} = {1.0}{\mathrm{L}}_{\mathrm{{OA}}}$
+
In tandem formations, the transverse spacings ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ and the lateral forces do not affect the total resistance of the formation system. A correlation analysis between the total resistance and the speed of the formation is therefore conducted; the results are shown in Table 3. All correlation coefficients are significant at the 0.01 level (two-tailed).
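
A correlation check of this kind can be reproduced with plain numpy. The functions below are a minimal sketch; the worked numbers in the note are illustrative and not taken from Table 3.

```python
import numpy as np

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two 1-D arrays."""
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

def t_statistic(r, n):
    """t value for testing H0: r = 0 with n samples (df = n - 2).
    |t| above the two-tailed critical value rejects H0; r = +/-1
    would make this diverge, reflecting a perfect linear relation."""
    return r * np.sqrt((n - 2) / (1.0 - r * r))
```

For example, r = 0.99 with n = 20 gives t of about 29.8, far above the two-tailed 0.01 critical value of roughly 2.88 for 18 degrees of freedom, so such a correlation would be reported as significant.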
+
+
+
+
+
+Fig. 8. Variation of resistance coefficient with speed when ${\mathrm{{ST}}}_{1} = {1.5}{\mathrm{L}}_{\mathrm{{OA}}}$
+
+
+
+
+
+Fig. 9. Variation of resistance coefficient with speed when ${\mathrm{{ST}}}_{1} = {2.0}{\mathrm{\;L}}_{\mathrm{{OA}}}$
+
+## B. Quantification of longitudinal spacing and transverse location
+
This section presents the regression analysis of the spacing between adjacent ships in formations. The results reveal the impact of the spacings $\left( {{\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2},{\mathrm{{SP}}}_{1},{\mathrm{{SP}}}_{2}}\right)$ on total resistance. In tandem formation, the transverse locations ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ are set to zero, and both ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ vary over the same range, from ${0.25}{\mathrm{L}}_{\mathrm{{OA}}}$ to ${2.0}{\mathrm{L}}_{\mathrm{{OA}}}$, so there is no need to standardize the coefficients of ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ when computing the coefficients for the tandem-formation subset.
+
Similarly, ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ are set to zero in parallel formation, and the effect of standardizing the coefficients before the regression is insignificant for the parallel-formation subset. In the triangle formation, however, both longitudinal distances and transverse spacings exist between neighboring ships, and the longitudinal distances are much larger than the transverse spacings, so the unstandardized coefficients cannot be compared directly. The standardized coefficients, derived from standardized regression analysis in which each variable is scaled to unit variance, are comparable. Therefore, to unify the correlation-coefficient analysis across configurations, standardized regression analysis is adopted for all conditions.
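
The standardization step (z-scoring every regressor and the response before the fit) can be sketched as follows. The two synthetic features in the note mimic a small-range spacing and a large-range spacing; they are illustrative, not the paper's data.

```python
import numpy as np

def standardized_coefficients(X, y):
    """Least-squares coefficients after z-scoring every feature and the
    target, so each regressor has unit variance and coefficients are
    comparable across variables with different scales."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta
```

A raw coefficient on a variable spanning fractions of a metre cannot be compared with one on a variable spanning tens of metres; after z-scoring, both coefficients are expressed in standard-deviation units of the response.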
+
The whole dataset of the total resistance of tandem formation is split into subsets of the same speed. The standardized coefficients of ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ for total drag in each subset are presented in Fig 10. The results clarify whether ${\mathrm{{ST}}}_{1}$ or ${\mathrm{{ST}}}_{2}$ has the more significant impact on total resistance in this multivariate regression model.
+
+Two comparisons are made to interpret the estimated standardized coefficients. For tandem formation within the same subset, the weights of ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ are compared. The impact of ${\mathrm{{ST}}}_{1}$ on total resistance is more significant than that of ${\mathrm{{ST}}}_{2}$ .
+
The other comparison involves analyzing the coefficients across speed groups, which reveals how the external impacts vary among ships at different speeds and shows distinct trends in the effects of ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ on total resistance. The correlation coefficient of ${\mathrm{{ST}}}_{2}$ ranges between -0.083 and -0.075, indicating a weak negative correlation between ${\mathrm{{ST}}}_{2}$ and total resistance in tandem formation: as ${\mathrm{{ST}}}_{2}$ increases, total resistance tends to decrease, so increasing ${\mathrm{{ST}}}_{2}$ can help the formation system reduce total resistance, although the influence of ${\mathrm{{ST}}}_{2}$ is limited. The correlation coefficient of ${\mathrm{{ST}}}_{1}$ ranges between 0.42 and 0.435 and stays nearly flat as speed grows, indicating a positive correlation between ${\mathrm{{ST}}}_{1}$ and total resistance: as ${\mathrm{{ST}}}_{1}$ increases, total resistance tends to increase, so decreasing ${\mathrm{{ST}}}_{1}$ can help the formation system reduce total resistance, and the influence of ${\mathrm{{ST}}}_{1}$ on total resistance is significant. Thus, choosing ${\mathrm{{ST}}}_{1}$ carefully is more effective than selecting ${\mathrm{{ST}}}_{2}$ for obtaining total-resistance benefits in tandem formation.
+
+
+
+Fig. 10. The standardized coefficients of ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ on total resistance in tandem formation.
+
The whole dataset of the total resistance of parallel formation is split into subsets of the same speed. The standardized coefficients of ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ for total resistance in each subset are presented in Fig 11.
+
Examining the standardized coefficients within the same parallel-formation subset allows the effects of ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ to be compared. Both ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ have a significant impact on total resistance, with the impact of ${\mathrm{{SP}}}_{1}$ slightly higher than that of ${\mathrm{{SP}}}_{2}$. In parallel formation, controlling the lateral spacing ${\mathrm{{SP}}}_{1}$ between ${\mathrm{{Ship}}}_{1}$ and ${\mathrm{{Ship}}}_{2}$ is therefore more effective in gaining resistance benefits than controlling the lateral spacing ${\mathrm{{SP}}}_{2}$ between ${\mathrm{{Ship}}}_{2}$ and ${\mathrm{{Ship}}}_{3}$. The trends of both impacts on total resistance undulate as speed varies. The correlation coefficient of ${\mathrm{{SP}}}_{1}$ ranges between 0.823 and 0.844, indicating a positive correlation between ${\mathrm{{SP}}}_{1}$ and total resistance in parallel formation: as ${\mathrm{{SP}}}_{1}$ increases, the resistance benefits tend to decrease. The correlation coefficient of ${\mathrm{{SP}}}_{2}$ varies from 0.700 to 0.722, likewise indicating a positive correlation with total resistance, so the resistance benefits also tend to decrease as ${\mathrm{{SP}}}_{2}$ increases.
+
+
+
+Fig. 11. The standardized coefficients of ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ on total resistance in parallel formation.
+
The whole dataset of the total resistance of right triangle formation is split into subsets of the same speed. The standardized coefficients of ST and SP for total resistance in each subset are presented in Fig 12. Analyzing the standardized coefficients within the same subset reveals that the impacts of both ST and SP on total resistance are positive, and that the impact of SP is more significant than that of ST. The effect of ST on total resistance also changes more gradually with speed than that of SP: the correlation coefficient of ST remains at about 0.43, nearly unchanged, while the correlation coefficient of SP varies from 0.70 to 0.72, similar to the standardized correlation coefficient of ${\mathrm{{SP}}}_{2}$ in parallel formation.
+
Regression models have been developed to quantitatively assess the effects of speed, ST, and SP on total resistance for tandem, parallel, and triangle formations. This paper presents the final regression model established using the complete dataset. Multivariate polynomial and ridge regression methods are combined to build the regression model, and, given the limited sample size, k-fold cross-validation is employed to enhance its robustness.
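
The k-fold procedure mentioned here can be sketched with numpy alone. The fold count, penalty grid, and synthetic data below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def kfold_ridge_mse(Phi, y, lam, k=5, seed=0):
    """Mean held-out MSE of closed-form ridge regression over k folds."""
    folds = np.array_split(np.random.default_rng(seed).permutation(len(y)), k)
    errs = []
    for i, test in enumerate(folds):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        A = Phi[train].T @ Phi[train] + lam * np.eye(Phi.shape[1])
        beta = np.linalg.solve(A, Phi[train].T @ y[train])
        errs.append(np.mean((Phi[test] @ beta - y[test]) ** 2))
    return float(np.mean(errs))

def pick_lambda(Phi, y, grid=(1e-3, 1e-2, 1e-1, 1.0)):
    """Choose the ridge penalty with the lowest cross-validated error."""
    return min(grid, key=lambda lam: kfold_ridge_mse(Phi, y, lam))
```

With few samples, every observation serves in both training and validation across the folds, which is why cross-validation is a natural choice for a small CFD dataset.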
+
The 4th-order regression function is given in equation (3), where $U$ denotes the formation speed:
+
$$
\begin{aligned}
X_{\text{total}} ={}& 0.01\,SP_1^4 - 0.13\,SP_1^3 SP_2 + 0.81\,SP_1^3 ST_1 + 0.81\,SP_1^3 ST_2 + 1.6\,SP_1^3 + 0.12\,SP_1^2 SP_2^2 + 0.6\,SP_1^2 SP_2 ST_1 + 0.6\,SP_1^2 SP_2 ST_2 \\
& - 0.01\,SP_1^2 SP_2 U + 0.98\,SP_1^2 SP_2 + 2.22\,SP_1^2 ST_1^2 - 0.12\,SP_1^2 ST_1 ST_2 + 0.03\,SP_1^2 ST_1 U + 0.26\,SP_1^2 ST_1 - 0.19\,SP_1^2 ST_2^2 \\
& + 0.01\,SP_1^2 ST_2 U + 0.26\,SP_1^2 ST_2 + 0.05\,SP_1^2 U - 1.28\,SP_1^2 - 0.24\,SP_1 SP_2^3 + 0.85\,SP_1 SP_2^2 + 2.01\,SP_1 SP_2 ST_1^2 \\
& - 0.52\,SP_1 SP_2 ST_1 ST_2 + 0.02\,SP_1 SP_2 ST_1 U + 0.45\,SP_1 SP_2 ST_1 - 0.59\,SP_1 SP_2 ST_2^2 + 0.45\,SP_1 SP_2 ST_2 + 0.04\,SP_1 SP_2 U \\
& - 0.74\,SP_1 SP_2 + 3.0\,SP_1 ST_1^3 - 1.11\,SP_1 ST_1^2 ST_2 + 0.08\,SP_1 ST_1^2 U - 2.08\,SP_1 ST_1^2 - 1.19\,SP_1 ST_1 ST_2^2 - 0.06\,SP_1 ST_1 ST_2 U \\
& + 0.98\,SP_1 ST_1 ST_2 - 0.02\,SP_1 ST_1 U - 0.29\,SP_1 ST_1 - 1.29\,SP_1 ST_2^3 - 0.07\,SP_1 ST_2^2 U + 1.06\,SP_1 ST_2^2 + 0.01\,SP_1 ST_2 U \\
& - 0.29\,SP_1 ST_2 - 0.02\,SP_1 U - 0.45\,SP_1 + 0.1\,SP_2^4 + 0.27\,SP_2^3 ST_1 + 0.27\,SP_2^3 ST_2 + 0.02\,SP_2^3 U + 0.03\,SP_2^3 + 2.41\,SP_2^2 ST_1^2 \\
& - 0.33\,SP_2^2 ST_1 ST_2 + 0.06\,SP_2^2 ST_1 U + 0.21\,SP_2^2 ST_1 - 0.4\,SP_2^2 ST_2^2 + 0.04\,SP_2^2 ST_2 U + 0.21\,SP_2^2 ST_2 + 0.02\,SP_2^2 U \\
& - 0.35\,SP_2^2 + 3.26\,SP_2 ST_1^3 - 1.18\,SP_2 ST_1^2 ST_2 + 0.23\,SP_2 ST_1^2 U - 2.6\,SP_2 ST_1^2 - 1.27\,SP_2 ST_1 ST_2^2 + 0.09\,SP_2 ST_1 ST_2 U \\
& + 0.7\,SP_2 ST_1 ST_2 + 0.01\,SP_2 ST_1 U^2 + 0.04\,SP_2 ST_1 U - 0.06\,SP_2 ST_1 - 1.38\,SP_2 ST_2^3 + 0.08\,SP_2 ST_2^2 U + 0.8\,SP_2 ST_2^2 \\
& + 0.01\,SP_2 ST_2 U^2 + 0.07\,SP_2 ST_2 U - 0.06\,SP_2 ST_2 + 0.02\,SP_2 U^2 - 0.14\,SP_2 U + 0.18\,SP_2 + 2.1\,ST_1^4 - 0.68\,ST_1^3 ST_2 \\
& + 0.12\,ST_1^3 U - 4.17\,ST_1^3 - 0.75\,ST_1^2 ST_2^2 - 0.02\,ST_1^2 ST_2 U + 1.18\,ST_1^2 ST_2 - 0.08\,ST_1^2 U + 2.5\,ST_1^2 - 0.76\,ST_1 ST_2^3 \\
& - 0.02\,ST_1 ST_2^2 U + 1.29\,ST_1 ST_2^2 + 0.09\,ST_1 ST_2 U - 1.48\,ST_1 ST_2 + 0.01\,ST_1 U^2 + 0.01\,ST_1 U - 0.17\,ST_1 - 0.83\,ST_2^4 \\
& - 0.03\,ST_2^3 U + 1.42\,ST_2^3 + 0.11\,ST_2^2 U - 1.6\,ST_2^2 - 0.02\,ST_2 U - 0.17\,ST_2 - 0.02\,U^4 + 0.01\,U^3 + 0.02\,U^2 + 0.15\,U + 0.62
\end{aligned} \tag{3}
$$
+
The estimation results of the regression analysis are shown in Table 4. According to these results, about ${98.2}\%$ of the variance in the total resistance of the formation systems can be explained by the fleet speed, ${\mathrm{{ST}}}_{1}$, ${\mathrm{{ST}}}_{2}$, ${\mathrm{{SP}}}_{1}$, and ${\mathrm{{SP}}}_{2}$ (${\mathrm{R}}^{2}$ is 0.982 for the whole dataset). Speed has an estimate of 0.273, indicating a positive but relatively small effect on the dependent variable.
+
The standard error is 0.836, which is relatively large and suggests high uncertainty in the estimate. The t-statistic is 0.327, below common critical values (such as 1.96), indicating that the effect of speed may not be significant. Feature ${\mathrm{{ST}}}_{1}$ has an estimate of -0.171, reflecting a negative effect on the dependent variable. With a standard error of 0.157, the precision of this estimate is relatively high; however, the t-statistic of -1.089 is below common critical values in magnitude, suggesting that the impact of ${\mathrm{{ST}}}_{1}$ may be non-significant. Feature ${\mathrm{{ST}}}_{2}$ has an estimate of -0.167, again a negative effect, with a standard error of 0.157 indicating high precision; its t-statistic of -1.069 similarly implies that this feature's impact may not be significant. Feature ${\mathrm{{SP}}}_{1}$ is estimated at -0.501, indicating a strong negative impact on the dependent variable.
+
+TABLE IV. ESTIMATION RESULTS OF THE FINAL REGRESSION MODEL
+
| | ${\mathbf{R}}^{2}$ | F-stat | Estimate | Std. error | t-stat |
| --- | --- | --- | --- | --- | --- |
| | 0.982 | 168.045 | 0.603 | 0.089 | 6.759 |
| ${\mathrm{C}}_{\mathrm{U}}$ | / | / | 0.273 | 0.836 | 0.327 |
| ${\mathrm{C}}_{\mathrm{{ST}}1}$ | / | / | -0.171 | 0.157 | -1.09 |
| ${\mathrm{C}}_{\mathrm{{ST}}2}$ | / | / | -0.167 | 0.157 | -1.07 |
| ${\mathrm{C}}_{\mathrm{{SP}}1}$ | / | / | -0.501 | 0.156 | -3.205 |
| ${\mathrm{C}}_{\mathrm{{SP}}2}$ | / | / | 0.128 | 0.159 | 0.806 |
+
The standard error is 0.156, which is relatively small, suggesting high accuracy in the estimate. The magnitude of the t-statistic, -3.205, exceeds common critical values, demonstrating that the effect of ${\mathrm{{SP}}}_{1}$ is significant. Feature ${\mathrm{{SP}}}_{2}$ has an estimate of 0.128, showing a positive but small effect on the dependent variable. Its standard error of 0.159 is relatively large, reflecting higher uncertainty in the estimate, and its t-statistic of 0.806 is below common critical values, indicating that the effect of ${\mathrm{{SP}}}_{2}$ is insignificant.
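
The estimate, standard error, and t-statistic triples reported in Table 4 follow the standard least-squares formulas, sketched below on synthetic data (the coefficients and noise level are illustrative, not the paper's).

```python
import numpy as np

def ols_inference(X, y):
    """OLS estimates with standard errors and t statistics:
    se_j = sqrt(s^2 * [(X'X)^-1]_jj),  t_j = beta_j / se_j,
    where s^2 is the residual variance with n - p degrees of freedom."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    s2 = resid @ resid / (n - p)
    se = np.sqrt(s2 * np.diag(XtX_inv))
    return beta, se, beta / se
```

Comparing each |t| with a critical value such as 1.96 reproduces the feature-by-feature significance judgments made in the discussion above.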
+
+
+
+Fig. 12. The standardized coefficients of ST and SP on total resistance in triangle formation.
+
+## V. CONCLUSION
+
This paper established a regression model to analyze the effects of speed, longitudinal distances $\left( {{\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2}}\right)$, and transverse locations $\left( {{\mathrm{{SP}}}_{1},{\mathrm{{SP}}}_{2}}\right)$ on the total resistance of ship formations derived from CFD data. The variation of total resistance with speed in tandem formation is observed, and the correlation analysis shows a strong correlation between speed and total resistance. The impacts of longitudinal spacing and transverse location on total resistance vary across formation configurations. For tandem formation, ${\mathrm{{ST}}}_{1}$ has a more significant influence on total resistance than ${\mathrm{{ST}}}_{2}$. For parallel formation, the impacts of both ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ fluctuate slightly with growing ship speed. For triangle formation, the impact of SP on total resistance shows a strong positive correlation, while the impact of ST is weaker. The regression analysis revealed that about ${98.2}\%$ of the variance in the total resistance of the various ship formation systems is explained by the formation speed, ${\mathrm{{ST}}}_{1}$, ${\mathrm{{ST}}}_{2}$, ${\mathrm{{SP}}}_{1}$, and ${\mathrm{{SP}}}_{2}$.
+
This paper investigated the impact of different factors on the total resistance of ship formations. The estimation results indicate that more CFD data should be incorporated into the regression analysis, and more intelligent methods could be adopted for regression in future work.
+
+## ACKNOWLEDGMENT
+
+The work presented in this study is financially supported by the National Natural Science Foundation of China under grants 52271364, 52101402, and 52271367.
+
+## REFERENCES
+
+[1] Z.-M. Yuan, M. Chen, L. Jia, C. Ji, and A. Incecik, "Wave-riding and wave-passing by ducklings in formation swimming," Journal of Fluid Mechanics, vol. 928, 2021.
+
[2] B. Chen and J. Wu, "Wave interactions generated by multi-ship units moving in shallow water," Chinese Journal of Applied Mechanics, vol. 22, no. 2, pp. 159-163, 2005.
+
[3] Y. Zheng and J. Li, "An investigation into the possibility of resistance reduction for multiple ships in given formations," Ship Science and Technology, vol. 42, no. 17, pp. 12-16, 2020.
+
[4] Y. Qin, C. Yao, Y. Zheng, and J. Huang, "Study on hydrodynamic performance of a conceptional sea-train," in Proc. ASME Conf., Hamburg, Germany, Jun. 5-10, 2022.
+
[5] Z. Liu, C. Dai, X. Cui, Y. Wang, H. Liu, and B. Zhou, "Hydrodynamic interactions between ships in a fleet," Journal of Marine Science and Engineering, vol. 12, no. 1, Jan. 2024.
+
[6] Y. He, J. Mou, L. Chen, Q. Zeng, Y. Huang, P. Chen, and S. Zhang, "Will sailing in formation reduce energy consumption? Numerical prediction of resistance for ships in different formation configurations," Applied Energy, vol. 312, Apr. 2022.
+
+[7] Y. He, L. Chen, J. Mou, Q. Zeng, Y. Huang, P. Chen, and S. Zhang, "Ship Emission Reduction via Energy-Saving Formation," IEEE Transactions on Intelligent Transportation Systems, vol. 25, no. 3, pp. 2599-2614, 2024.
+
[8] F. Jaffar, T. Farid, M. Sajid, Y. Ayaz, and M. J. Khan, "Prediction of drag force on vehicles in a platoon configuration using machine learning," IEEE Access, vol. 8, pp. 201823-201834, 2020.
+
+[9] D. Zhang, L. Chao, and G. Pan, "Analysis of hydrodynamic interaction impacts on a two-AUV system," Ships and Offshore Structures, vol. 14, no. 1, pp. 23-34, 2018.
+
[10] L. Zou and L. Larsson, "Numerical predictions of ship-to-ship interaction in shallow water," Ocean Engineering, vol. 72, pp. 386-402, Nov. 2013.
+
[11] L. Zou, Z.-J. Zou, and Y. Liu, "CFD-based predictions of hydrodynamic forces in ship-tug boat interactions," Ships and Offshore Structures, vol. 14, pp. S300-S310, Oct. 2019.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/FjSPgP2m1X/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/FjSPgP2m1X/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..94e903d13364da65370d0a3980c616d0ef8ca41a
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/FjSPgP2m1X/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,369 @@
+§ IMPACTS OF SPEED AND SPACING ON RESISTANCE IN SHIP FORMATIONS
+
Linying Chen

State Key Laboratory of Maritime Technology and Safety, School of Navigation, Wuhan University of Technology, Wuhan, China. LinyingChen@whut.edu.cn

Linhao Xue

School of Navigation, Wuhan University of Technology, Wuhan, China. xue_lh@whut.edu.cn

Yangying He

School of Intelligent Sports Engineering, Wuhan Sports University, Wuhan, China. yangyinghe@whsu.edu.cn

Pengfei Chen

State Key Laboratory of Maritime Technology and Safety, School of Navigation, Wuhan University of Technology, Wuhan, China. Chenpf@whut.edu.cn

Junmin Mou

State Key Laboratory of Maritime Technology and Safety, School of Navigation, Wuhan University of Technology, Wuhan, China. Moujm@whut.edu.cn

Yamin Huang

State Key Laboratory of Maritime Technology and Safety, School of Navigation, Wuhan University of Technology, Wuhan, China. YaminHuang@whut.edu.cn
+
Abstract-Sailing in formation offers drag-reduction benefits. In current studies of the hydrodynamic analysis of ship formations, the impacts of speed and of the spacing between adjacent ships on total resistance are seldom considered. To estimate the weight of the different formation factors in the variation of total resistance, the impacts of speed, longitudinal distance, and transverse location on the observed total resistance of formations are investigated by analyzing hydrodynamic data for tandem, parallel, and triangle formations. The relation between resistance variation and speed is revealed, and the regression results on the different formations indicate the differences between longitudinal-spacing and transverse impacts. The regression formulation can be adopted to predict total resistance in formations.
+
+Keywords-drag reduction, formation, regression analysis
+
+§ I. INTRODUCTION
+
+Nowadays, saving energy, reducing atmospheric pollutant emissions, and lowering carbon emissions are key concerns in the shipping industry. Increasingly, scholars are focusing on reducing ship resistance to save energy. Inspired by observing and analyzing duck flock swimming behavior [1], scholars have drawn insights from biomimicry and begun researching drag reduction through ship formations.
+
+Chen et al. [2] studied the wave interference characteristics of two ships sailing in parallel and following each other and a three-ship "V" formation in shallow water using the bare hull of Series 60. The results indicate that when the two ships follow each other, the wave resistance for both ships decreases. In a three-ship "V" formation, the waves from the trailing ship provide additional thrust, significantly reducing the wave resistance of the leading ship. However, the additional reactive force from the wave crests of the leading ship increases the resistance of the trailing ship. Zheng et al. [3] used the second-order source method based on the Dawson method to calculate the wave resistance of four Wigley ships in three common formations: single-ship, two-ship formation, and three-body ship formation. They identified optimal ship formations for drag reduction in different speed ranges, and adjusting the relative positions of the ships in the Wigley formation can achieve drag reduction. Qin Yan et al. [4] first performed a numerical analysis of the drag characteristics of a single Wigley ship at different speeds. They compared the results with the hydrodynamic performance of a "train" formation at various longitudinal spacings. The analysis showed that, under all conditions, the total drag of the train formation was about ${10}\%$ to ${20}\%$ less than that of a single ship. For lower speeds, reducing the longitudinal distance can achieve drag reduction, but at higher speeds, increasing the longitudinal spacing helps maintain drag reduction. Liu et al. [5] used CFD to study the drag reduction effects of a KCS ship model in a twin-ship "train" formation at different speeds, showing that the drag reduction for the following ship could reach up to 24.3%. He et al. [6][7] focused on the hydrodynamic performance of three-ship formations at low speeds, analyzing linear, parallel, and triangular formations with equal and unequal spacing. 
The optimal formation configuration for drag reduction under each arrangement was ultimately identified, and a regression model was also developed to predict total resistance in different formation systems. Meanwhile, machine learning methods [8] have been applied to vehicle platooning to predict the drag of each vehicle in platoons of varying size (from 2 to 4 vehicles). In summary, sailing in formation has the potential for drag reduction. Existing work [9][10][11] mainly focuses on observing drag-reduction benefits at different speeds and formation configurations; however, the impact of individual factors on the resistance reduction of ship formations remains unclear. Further research is needed to understand how different factors affect the total drag in ship formations.
+
+Therefore, this paper aims to clarify the direct relationship between speed, spacing, and total resistance in ship formations. The primary innovation of this paper lies in employing regression analysis to quantitatively assess the ship formation CFD database, aiming to determine the extent to which speed and distance influence the resistance encountered during ship formation navigation.
+
+The main contributions of the paper are as follows:
+
+
+ * Quantitative analysis and estimation of the effects of factors (speed, longitudinal distances, and transverse locations) on total resistance in formations are provided.
+
+ * A regression model is established to predict the total resistance of the multi-ship formation system.
+
Subsequently, the datasets investigated in our research are introduced in Section II, and Section III explains the proposed research approach. The analysis results for the impacts of the different factors are presented, and the regression model is built, in Section IV. Finally, Section V concludes with the main findings and recommendations for further research.
+
+§ II. DATA DESCRIPTION
+
+§ A. SOURCE OF DATA
+
In this research, the dataset consists entirely of CFD simulation data. All simulations are computed with the commercial software STAR-CCM+ V13.06. Verification and validation were performed before the systematic simulations to ensure the accuracy of the CFD results.
+
+§ B. STUDIED SHIP IN DATASET
+
In our CFD simulation conditions, the three-ship isomorphic formation is composed of three identical bare hulls of the full-swing tugboat 'WillLead I'. The parameters of the ship are shown in Table 1, and the side view is presented in Figure 1.
+
+
Fig. 1. Side view of the bare hull of 'WillLead I'
+
TABLE I. PARAMETERS OF 'WILLLEAD I'

| | $\lambda$ | ${\mathrm{L}}_{\mathrm{{OA}}}$ (m) | ${\mathrm{L}}_{\mathrm{{PP}}}$ (m) | B (m) | T (m) | ${\mathrm{A}}_{\mathrm{S}}$ ($\mathrm{m}^2$) |
| --- | --- | --- | --- | --- | --- | --- |
| Full scale | 1.00 | 34.95 | 30.00 | 10.50 | 4.00 | 432.41 |
| Model scale | 17.475 | 2.00 | 1.72 | 0.674 | 0.211 | 0.672 |
+
+§ C. DATA COMPOSITION
+
The dataset comprises CFD simulation results for four formation configurations: tandem, parallel, right triangle, and general triangle. The longitudinal distances $\left( {{\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2}}\right)$ and transverse locations $\left( {{\mathrm{{SP}}}_{1},{\mathrm{{SP}}}_{2}}\right)$ differ between cases. The configurations are illustrated in Figure 2, and the ranges of ${\mathrm{{ST}}}_{1}$, ${\mathrm{{ST}}}_{2}$, ${\mathrm{{SP}}}_{1}$, and ${\mathrm{{SP}}}_{2}$ are given in Table 2. In tandem formation, ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ both equal zero; in parallel formation, ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ both equal zero. In the right triangle formation, the bow of Ship 2 aligns with Ship 3, and the centerline of Ship 1 aligns with Ship 2. In the general triangle formation, the bow of Ship 1 aligns with Ship 3.
+
+TABLE II. RANGE OF ${\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2},{\mathrm{{SP}}}_{1},{\mathrm{{SP}}}_{2}$
+
| Configuration | ${\mathrm{{ST}}}_{1}$ (m) | ${\mathrm{{ST}}}_{2}$ (m) | ${\mathrm{{SP}}}_{1}$ (m) | ${\mathrm{{SP}}}_{2}$ (m) |
| --- | --- | --- | --- | --- |
| Tandem | 0.25-2.0 | 0.25-2.0 | / | / |
| Parallel | / | / | 0.1685-2.022 | 0.337-2.696 |
| Right triangle | 0.25-1.0 | 0.25-1.0 | 0.1685-0.674 | 0.1685-0.674 |
| General triangle | 0.25-1.0 | 0.25-1.0 | 0.1685 | 0.337-0.5055 |
+
+
+Fig. 2. Illustration of formation configurations
+
+§ III. METHODOLOGY
+
This research uses CFD data to investigate the influence of speed and of the spacing between adjacent ships in formations. In this section, the dimensionless coefficients of the formation and the coordinate system are illustrated, and the data analysis method, including data preparation, is introduced.
+
+§ A. DIMENSIONLESS COEFFICIENTS AND COORDINATE SYSTEM
+
The coordinate system used to describe the motion and resistance of the formation is presented in Figure 3. The space-fixed coordinate system ${\mathrm{O}}_{\mathrm{o}} - {\mathrm{X}}_{\mathrm{o}}{\mathrm{Y}}_{\mathrm{o}}$ and the ship-fixed coordinate system O-xy constitute the global coordinate system: the former describes the motion of the formation, and the latter describes the resistance of each ship in the formation. In the space-fixed coordinate system, the ${\mathrm{X}}_{\mathrm{o}}$ direction points to true north. In the ship-fixed coordinate system, the $\mathrm{x}$ direction points to the bow of the ship, and the $y$ direction points to the starboard side. The directions of the dimensionless resistance coefficients, covering drag and lateral force, are also given in Figure 3. ${\mathrm{X}}^{\prime }$ is the dimensionless coefficient of longitudinal resistance, directed from the bow to the stern, opposite to the $\mathrm{x}$ direction. ${\mathrm{Y}}^{\prime }$ is the dimensionless coefficient of lateral force, directed from the port side to the starboard side, in agreement with the y direction. The total dimensionless longitudinal resistance coefficient ${\mathrm{X}}_{\text{total}}^{\prime }$ is obtained by summing ${\mathrm{X}}^{\prime }$ over the ships in the formation; similarly, the total dimensionless lateral force coefficient ${Y}_{\text{total}}^{\prime }$ is obtained by summing ${Y}^{\prime }$ over the ships in the formation system. The equations for ${\mathrm{X}}_{\text{total}}^{\prime }$ and ${\mathrm{Y}}_{\text{total}}^{\prime }$ are as follows:
+
+$$
+{X}_{\text{ total }}^{\prime } = \mathop{\sum }\limits_{{i = 1}}^{3}{X}_{i}^{\prime } \tag{1}
+$$
+
+$$
+{Y}_{\text{ total }}^{\prime } = \mathop{\sum }\limits_{{i = 1}}^{3}{Y}_{i}^{\prime } \tag{2}
+$$
+
+In this research, the fleet is assumed to sail in calm water; therefore, the impact of wind and current is not considered.
+
+
+Fig. 3. Illustration of the coordinate system
+
+§ B. DATA PREPARATION
+
+Since the CFD simulation in STAR-CCM+ V13.06 requires numerical and physical layouts to be set up, the longitudinal distances $\left( {{\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2}}\right)$ and transverse locations $\left( {{\mathrm{{SP}}}_{1},{\mathrm{{SP}}}_{2}}\right)$ mentioned in section 2 only represent the geometric relationship between neighboring ships. To help the regression analysis learn the characteristics of the data, the longitudinal and transverse locations in the dataset are rearranged as signed values. ${\mathrm{{ST}}}_{\mathrm{i}}$ takes the positive geometric value when ship $i$ is in front of ship $i + 1$ , and the negative geometric value when ship $i$ is behind ship $i + 1$ . Likewise, ${\mathrm{{SP}}}_{\mathrm{i}}$ takes the positive geometric value when ship $i$ is located on ship $i + 1$ ’s port side, and the negative geometric value when ship $i$ is located on ship $i + 1$ ’s starboard side.
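
The sign convention above can be sketched as a small helper; the function name and the example spacings are illustrative, not taken from the paper:

```python
def signed_spacing(geometric_value, relation):
    """Return the signed spacing used in the rearranged dataset.

    relation: 'front' or 'port'      -> positive sign,
              'behind' or 'starboard' -> negative sign.
    """
    if relation in ("front", "port"):
        return abs(geometric_value)
    if relation in ("behind", "starboard"):
        return -abs(geometric_value)
    raise ValueError(f"unknown relation: {relation}")

# Ship 1 half a ship length ahead of ship 2, on its starboard side:
st1 = signed_spacing(0.5, "front")       # +0.5 (in units of L_OA)
sp1 = signed_spacing(0.25, "starboard")  # -0.25 (in units of L_OA)
```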
+
+§ C. DATA ANALYSIS METHOD
+
+Figure 4 presents the steps of the regression analysis method.
+
+
+Fig. 4. Flow diagram of regression analysis.
+
+The hydrodynamic dataset of the ship formation is divided into subsets to analyze the effects of speed and of the spacing between ships. Both longitudinal distances and lateral locations are considered for their impact on the total resistance of the formation system. Variations in total resistance across formations at different speeds have been observed, but the direct relationship between total resistance and speed has not yet been revealed. This relationship is explored using the tandem formation dataset. For the quantitative analysis of speed impacts on total drag in tandem formation, the tandem formation dataset is split into subsets with different ${\mathrm{{ST}}}_{1}$ distances. A correlation analysis between total resistance and speed is then performed to quantify the strength of the correlation and determine how well speed characterizes variations in total resistance.
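
A minimal sketch of such a correlation analysis, using `numpy.corrcoef` on a hypothetical subset at one fixed ${\mathrm{{ST}}}_{1}$ distance (the numbers below are illustrative, not the paper's data):

```python
import numpy as np

# Hypothetical subset at a fixed ST1: six fleet speeds (m/s) and the
# corresponding total resistance coefficients of the formation.
speed = np.array([0.212, 0.254, 0.297, 0.339, 0.382, 0.424])
x_total = np.array([0.62, 0.58, 0.55, 0.53, 0.52, 0.51])

# Pearson correlation coefficient between speed and total resistance.
r = np.corrcoef(speed, x_total)[0, 1]
print(f"correlation coefficient: {r:.3f}")
```

A strongly negative `r` here would indicate that total resistance decreases monotonically with speed in this subset.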
+
+Three steps are taken to quantify the impacts of longitudinal spacing and lateral locations. Firstly, the dataset is divided into six subsets based on different speeds. Each subset is further categorized into tandem formation, parallel formation, and triangle formation. After that, regression analysis is conducted on subsets of total resistance data at uniform speeds. The results will reveal if the impacts of ST and SP differ across various fleet speeds. Finally, overall functions will be defined to describe ST and SP impacts, incorporating speed variations, with coefficients estimated from the entire dataset.
+
+After correlation analysis of the different factors, a regression model for the formation system's total drag is developed with five features: speed, ${\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2},{\mathrm{{SP}}}_{1}$ , and ${\mathrm{{SP}}}_{2}$ . Multivariate polynomial regression and ridge regression are combined to build the model. Polynomial regression fits non-linear relationships in the data using polynomial functions. Compared with linear regression, it can model the non-linear characteristics of the data by introducing polynomial terms, increasing the flexibility and applicability of the model. In practice, data often have many features, and polynomial regression on a single feature performs poorly in that setting. Multivariate polynomial regression is therefore used in this study to fit the total resistance dataset of ship formations.
+
+When applying multivariate polynomial regression, the polynomial degree must be chosen carefully. Too low a degree yields poor fitting performance; too high a degree leads to overfitting, where the model fits noise in the data rather than the underlying trends. To address potential overfitting and improve fitting accuracy, this study combines ridge regression with multivariate polynomial regression to build the regression model. Ridge regression is an improved least squares estimation method that addresses multicollinearity by introducing an L2-norm penalty term, thereby enhancing model stability and generalization capability. The penalty term is $\lambda$ times the sum of the squares of all regression coefficients, where $\lambda$ is the penalty coefficient. Combining ridge regression with multivariate polynomial regression controls model complexity and reduces the risk of overfitting, which is particularly beneficial when input features are highly correlated or when the condition number of the data matrix is high. This stability mitigates numerical issues that may arise in multivariate polynomial regression and enhances the reliability of the model.
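
The combined approach can be sketched with a closed-form ridge solve on expanded polynomial features; the degree, penalty $\lambda$ , and toy data below are assumptions for illustration, not the paper's settings:

```python
import itertools
import numpy as np

def poly_features(X, degree):
    """Expand X (n_samples, n_features) with all monomials up to
    `degree`, including the constant term."""
    n, d = X.shape
    cols = [np.ones(n)]
    for k in range(1, degree + 1):
        for idx in itertools.combinations_with_replacement(range(d), k):
            cols.append(np.prod(X[:, idx], axis=1))
    return np.column_stack(cols)

def ridge_fit(X, y, lam):
    """Closed-form ridge estimate: (X^T X + lam*I)^-1 X^T y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 2))   # two spacing features as stand-ins
y = 1.0 + 0.5 * X[:, 0] ** 2 - 0.3 * X[:, 0] * X[:, 1] \
    + 0.01 * rng.normal(size=60)       # smooth surface plus small noise

Phi = poly_features(X, degree=4)       # 4th-order expansion, 15 columns
beta = ridge_fit(Phi, y, lam=1e-3)
rmse = np.sqrt(np.mean((y - Phi @ beta) ** 2))
print("in-sample RMSE:", rmse)
```

The penalty keeps the many correlated monomial columns from producing unstable coefficients, at the cost of a small bias.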
+
+§ IV. RESULTS AND DISCUSSION
+
+In this section, the impacts of speed, longitudinal location, and transverse spacing are analyzed to estimate the final regression model.
+
+§ A. VARIATION OF DRAG DUE TO SPEED
+
+To examine the relationship between speed and total resistance, the total resistance of the formation is plotted against speed in Figures 5 to 9. These plots depict the relationship between speed and the total resistance of the tandem formation under different longitudinal spacings ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ . The combined resistance experienced by three individual ships sailing alone at the same speeds is also provided.
+
+The blue dots in each graph represent the total resistance experienced by the formation system, while the red line indicates the combined resistance experienced by three individual ships sailing alone at different speeds. The red line is marked to determine whether a three-ship tandem formation can achieve a resistance gain compared to three ships sailing individually. When ${\mathrm{{ST}}}_{1}$ is set to ${0.25}{\mathrm{\;L}}_{\mathrm{{OA}}}$ or ${2.0}{\mathrm{\;L}}_{\mathrm{{OA}}}$ , the resistance of the 'WillLead I' ships decreases as ship speed increases, both for ships sailing individually and for ships sailing in formation. The formation system also benefits from resistance gains, with the maximum gain occurring at a speed of ${0.212}\mathrm{\;m}/\mathrm{s}$ , reaching up to ${4.85}\%$ resistance reduction.
+
+When ${\mathrm{{ST}}}_{1}$ is set to ${0.5}{\mathrm{\;L}}_{\mathrm{{OA}}}$ , the total resistance observed in formation decreases as speed increases. However, the formation system gains no resistance benefit; instead, it experiences resistance amplification, with the maximum increase reaching ${119.3}\%$ .
+
+When ${\mathrm{{ST}}}_{1}$ is set to ${1.0}{\mathrm{\;L}}_{\mathrm{{OA}}}$ or ${1.5}{\mathrm{\;L}}_{\mathrm{{OA}}}$ , the formation system experiences resistance gains. However, as ship speed increases, the resistance benefits gradually decrease. Additionally, when ${\mathrm{{ST}}}_{2}$ is smaller than ${\mathrm{{ST}}}_{1}$ , the resistance benefits of the formation system nearly disappear as the ship speed increases to ${0.424}\mathrm{\;m}/\mathrm{s}$ .
+
+(c) ${\mathrm{{ST}}}_{2} = {1.5}{\mathrm{L}}_{\mathrm{{OA}}}$
+
+Fig. 5. Variation of resistance coefficient with speed when ${\mathrm{{ST}}}_{1} = {0.25}{\mathrm{\;L}}_{\mathrm{{OA}}}$
+
+
+Fig. 6. Variation of resistance coefficient with speed when ${\mathrm{{ST}}}_{1} = {0.5}{\mathrm{\;L}}_{\mathrm{{OA}}}$
+
+
+Fig. 7. Variation of resistance coefficient with speed when ${\mathrm{{ST}}}_{1} = {1.0}{\mathrm{L}}_{\mathrm{{OA}}}$
+
+In tandem formations, the transverse distances ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ and the lateral forces do not affect the total resistance of the formation system. A correlation analysis between the total resistance and the speed of the formation is conducted, with results shown in Table 3. All correlation coefficients are significant at the 0.01 level (two-tailed).
+
+
+Fig. 8. Variation of resistance coefficient with speed when ${\mathrm{{ST}}}_{1} = {1.5}{\mathrm{L}}_{\mathrm{{OA}}}$
+
+
+Fig. 9. Variation of resistance coefficient with speed when ${\mathrm{{ST}}}_{1} = {2.0}{\mathrm{\;L}}_{\mathrm{{OA}}}$
+
+§ B. QUANTIFICATION OF LONGITUDINAL SPACING AND TRANSVERSE LOCATION
+
+This section presents the regression analysis results for the spacing between adjacent ships in formations. The results reveal the impact of the spacing between adjacent ships $\left( {{\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2},{\mathrm{{SP}}}_{1},{\mathrm{{SP}}}_{2}}\right)$ on total resistance. In tandem formation, the transverse locations ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ are set to zero, and both ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ vary from ${0.25}{\mathrm{L}}_{\mathrm{{OA}}}$ to ${2.0}{\mathrm{L}}_{\mathrm{{OA}}}$ . There is therefore no need to standardize the coefficients of ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ when calculating the coefficients for the tandem formation subset.
+
+Similarly, ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ are set to zero in parallel formation, and the effect of standardizing the coefficients of ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ before calculating the coefficients for the parallel formation subset is insignificant. However, both longitudinal distances and transverse spacings exist between neighboring ships in the triangle formation, and the longitudinal distance is much larger than the transverse spacing, so the unstandardized coefficients cannot be compared directly. The standardized coefficients, derived from standardized regression analysis, are adjusted so that the variances of the variables equal 1. Thus, given the need for standardized correlation analysis under the triangular formation configuration, standardized regression analysis is adopted for all conditions to unify the correlation coefficient analysis.
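
Standardized coefficients can be obtained by z-scoring each variable before an ordinary least squares fit, which makes the impact of a large-scale longitudinal spacing directly comparable to that of a small-scale transverse spacing; a minimal sketch with illustrative data:

```python
import numpy as np

def standardized_coefficients(X, y):
    """OLS coefficients after z-scoring features and target, so that the
    coefficients of differently scaled inputs become comparable."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta

rng = np.random.default_rng(1)
st = rng.uniform(0.25, 2.0, 200)   # longitudinal spacing, order ~1 L_OA
sp = rng.uniform(0.02, 0.2, 200)   # transverse spacing, much smaller scale
y = 0.4 * st + 5.0 * sp + 0.01 * rng.normal(size=200)

beta = standardized_coefficients(np.column_stack([st, sp]), y)
# On the standardized scale the two impacts are directly comparable,
# even though the raw coefficients (0.4 vs 5.0) are not.
print(beta)
```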
+
+The whole dataset of the total resistance of tandem formation is split into subsets with the same speed. The coefficients of ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ for total drag in each subset are presented in Fig 10. The results clarify whether ${\mathrm{{ST}}}_{1}$ or ${\mathrm{{ST}}}_{2}$ more significantly impacts total resistance in this multivariate regression model.
+
+Two comparisons are made to interpret the estimated standardized coefficients. For tandem formation within the same subset, the weights of ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ are compared. The impact of ${\mathrm{{ST}}}_{1}$ on total resistance is more significant than that of ${\mathrm{{ST}}}_{2}$ .
+
+The other comparison analyzes the coefficients across the different speed groups, revealing how these impacts vary with ship speed. The analysis shows distinct trends: the effects of ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ on total resistance remain nearly flat as speed increases. The correlation coefficient of ${\mathrm{{ST}}}_{2}$ ranges between -0.083 and -0.075, indicating a weak negative correlation between ${\mathrm{{ST}}}_{2}$ and total resistance in tandem formation: as ${\mathrm{{ST}}}_{2}$ increases, total resistance tends to decrease, so increasing ${\mathrm{{ST}}}_{2}$ can help the formation system reduce total resistance, although the influence of ${\mathrm{{ST}}}_{2}$ is limited. The correlation coefficient of ${\mathrm{{ST}}}_{1}$ ranges between 0.42 and 0.435, indicating a positive correlation between ${\mathrm{{ST}}}_{1}$ and total resistance in tandem formation: decreasing ${\mathrm{{ST}}}_{1}$ can help the formation system reduce total resistance, and the influence of ${\mathrm{{ST}}}_{1}$ is significant. Thus, choosing ${\mathrm{{ST}}}_{1}$ carefully is more effective than selecting ${\mathrm{{ST}}}_{2}$ for obtaining total resistance benefits in tandem formation.
+
+
+Fig. 10. The standardized coefficients of ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ on total resistance in tandem formation.
+
+The whole data set of the total resistance of parallel formation is split into different subsets with the same speed. The coefficients of ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ for total resistance in each subset are presented in Fig 11.
+
+Examining the standardized coefficients for parallel formation within the same subset allows the effects of ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ to be compared. Both have a significant impact on total resistance, with the impact of ${\mathrm{{SP}}}_{1}$ slightly higher than that of ${\mathrm{{SP}}}_{2}$ . In parallel formation, controlling the lateral spacing ${\mathrm{{SP}}}_{1}$ between ${\mathrm{{Ship}}}_{1}$ and ${\mathrm{{Ship}}}_{2}$ is therefore more effective in gaining resistance benefits than controlling the lateral spacing ${\mathrm{{SP}}}_{2}$ between ${\mathrm{{Ship}}}_{2}$ and ${\mathrm{{Ship}}}_{3}$ . The trends of both impacts on total resistance are undulatory as speed varies. The correlation coefficient of ${\mathrm{{SP}}}_{1}$ ranges between 0.823 and 0.844, indicating a positive correlation between ${\mathrm{{SP}}}_{1}$ and total resistance in parallel formation: as ${\mathrm{{SP}}}_{1}$ increases, resistance benefits tend to decrease. The correlation coefficient of ${\mathrm{{SP}}}_{2}$ varies from 0.700 to 0.722, indicating that ${\mathrm{{SP}}}_{2}$ is likewise positively correlated with total resistance in parallel formation, so resistance benefits also tend to decrease as ${\mathrm{{SP}}}_{2}$ increases.
+
+
+Fig. 11. The standardized coefficients of ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ on total resistance in parallel formation.
+
+The whole dataset of the total resistance of right triangle formation is split into subsets with the same speed. The coefficients of ST and SP for total resistance in each subset are presented in Fig 12. Analyzing the standardized coefficients for right triangle formation within the same subset reveals that the impact of ST is less significant than that of SP, while the impacts of both ST and SP on total resistance are positive. The effect of ST on total resistance also changes more gradually with speed than that of SP. The correlation coefficient of ST remains at about 0.43, nearly unchanged, while that of SP varies from 0.70 to 0.72, similar to the standardized correlation coefficient of ${\mathrm{{SP}}}_{2}$ in parallel formation.
+
+Regression models have been developed to quantitatively assess the effects of speed, ST, and SP on total resistance for tandem, parallel, and triangle formations. This paper presents the final regression models established using the complete dataset, combining multivariate polynomial and ridge regression. Due to the limited sample size, k-fold cross-validation was employed to enhance the robustness of the regression model.
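
A minimal sketch of k-fold cross-validation for such a ridge model, written in plain NumPy with illustrative data (the fold count, penalty $\lambda$ , and toy coefficients are assumptions, not the paper's settings):

```python
import numpy as np

def kfold_rmse(X, y, k=5, lam=1e-2):
    """Average out-of-fold RMSE for a closed-form ridge fit."""
    n = len(y)
    idx = np.arange(n)
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        Xtr, ytr = X[train], y[train]
        beta = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(X.shape[1]),
                               Xtr.T @ ytr)
        pred = X[fold] @ beta
        errs.append(np.sqrt(np.mean((y[fold] - pred) ** 2)))
    return float(np.mean(errs))

rng = np.random.default_rng(2)
X = rng.normal(size=(80, 5))   # speed, ST1, ST2, SP1, SP2 stand-ins
y = X @ np.array([0.3, -0.17, -0.17, -0.5, 0.13]) \
    + 0.05 * rng.normal(size=80)
print("cross-validated RMSE:", kfold_rmse(X, y))
```

Averaging the error over held-out folds gives a less optimistic estimate of model quality than the in-sample fit, which matters on a small CFD dataset.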
+
+The 4th-order regression functions are listed as Equation (3):
+
+$$
+{X}_{\text{ total }} = {0.01S}{P}_{1}^{4} - {0.13S}{P}_{1}^{3}S{P}_{2} + {0.81S}{P}_{1}^{3}S{T}_{1} + {0.81S}{P}_{1}^{3}S{T}_{2} + {1.6S}{P}_{1}^{3} + {0.12S}{P}_{1}^{2}S{P}_{2}^{2} + {0.6S}{P}_{1}^{2}S{P}_{2}S{T}_{1} + {0.6S}{P}_{1}^{2}S{P}_{2}S{T}_{2}
+$$
+
+$$
+- {0.01S}{P}_{1}^{2}S{P}_{2}U + {0.98S}{P}_{1}^{2}S{P}_{2} + {2.22S}{P}_{1}^{2}S{T}_{1}^{2} - {0.12S}{P}_{1}^{2}S{T}_{1}S{T}_{2} + {0.03S}{P}_{1}^{2}S{T}_{1}U + {0.26S}{P}_{1}^{2}S{T}_{1} - {0.19S}{P}_{1}^{2}S{T}_{2}^{2}
+$$
+
+$$
++ {0.01S}{P}_{1}^{2}S{T}_{2}U + {0.26S}{P}_{1}^{2}S{T}_{2} + {0.05S}{P}_{1}^{2}U - {1.28S}{P}_{1}^{2} - {0.24S}{P}_{1}S{P}_{2}^{3} + {0.85S}{P}_{1}S{P}_{2}^{2} + {2.01S}{P}_{1}S{P}_{2}S{T}_{1}^{2}
+$$
+
+$$
+- {0.52S}{P}_{1}S{P}_{2}S{T}_{1}S{T}_{2} + {0.02S}{P}_{1}S{P}_{2}S{T}_{1}U + {0.45S}{P}_{1}S{P}_{2}S{T}_{1} - {0.59S}{P}_{1}S{P}_{2}S{T}_{2}{}^{2} + {0.45S}{P}_{1}S{P}_{2}S{T}_{2} + {0.04S}{P}_{1}S{P}_{2}U \tag{3}
+$$
+
+$$
+- {0.74S}{P}_{1}S{P}_{2} + {3.0S}{P}_{1}S{T}_{1}^{3} - {1.11S}{P}_{1}S{T}_{1}^{2}S{T}_{2} + {0.08S}{P}_{1}S{T}_{1}^{2}U - {2.08S}{P}_{1}S{T}_{1}^{2} - {1.19S}{P}_{1}S{T}_{1}S{T}_{2}^{2} - {0.06S}{P}_{1}S{T}_{1}S{T}_{2}U
+$$
+
+$$
++ {0.98S}{P}_{1}S{T}_{1}S{T}_{2} - {0.02S}{P}_{1}S{T}_{1}U - {0.29S}{P}_{1}S{T}_{1} - {1.29S}{P}_{1}S{T}_{2}^{3} - {0.07S}{P}_{1}S{T}_{2}^{2}U + {1.06S}{P}_{1}S{T}_{2}^{2} + {0.01S}{P}_{1}S{T}_{2}U
+$$
+
+$$
+- {0.29S}{P}_{1}S{T}_{2} - {0.02S}{P}_{1}U - {0.45S}{P}_{1} + {0.1S}{P}_{2}^{4} + {0.27S}{P}_{2}^{3}S{T}_{1} + {0.27S}{P}_{2}^{3}S{T}_{2} + {0.02S}{P}_{2}^{3}U + {0.03S}{P}_{2}^{3} + {2.41S}{P}_{2}^{2}S{T}_{1}^{2}
+$$
+
+$$
+- {0.33S}{P}_{2}^{2}S{T}_{1}S{T}_{2} + {0.06S}{P}_{2}^{2}S{T}_{1}U + {0.21S}{P}_{2}^{2}S{T}_{1} - {0.4S}{P}_{2}^{2}S{T}_{2}^{2} + {0.04S}{P}_{2}^{2}S{T}_{2}U + {0.21S}{P}_{2}^{2}S{T}_{2} + {0.02S}{P}_{2}^{2}U
+$$
+
+$$
+- {0.35S}{P}_{2}{}^{2} + {3.26S}{P}_{2}S{T}_{1}^{3} - {1.18S}{P}_{2}S{T}_{1}{}^{2}S{T}_{2} + {0.23S}{P}_{2}S{T}_{1}^{2}U - {2.6S}{P}_{2}S{T}_{1}{}^{2} - {1.27S}{P}_{2}S{T}_{1}S{T}_{2}{}^{2} + {0.09S}{P}_{2}S{T}_{1}S{T}_{2}U
+$$
+
+$$
++ {0.7S}{P}_{2}S{T}_{1}S{T}_{2} + {0.01S}{P}_{2}S{T}_{1}{U}^{2} + {0.04S}{P}_{2}S{T}_{1}U - {0.06S}{P}_{2}S{T}_{1} - {1.38S}{P}_{2}S{T}_{2}^{3} + {0.08S}{P}_{2}S{T}_{2}^{2}U + {0.8S}{P}_{2}S{T}_{2}^{2}
+$$
+
+$$
++ {0.01S}{P}_{2}S{T}_{2}{U}^{2} + {0.07S}{P}_{2}S{T}_{2}U - {0.06S}{P}_{2}S{T}_{2} + {0.02S}{P}_{2}{U}^{2} - {0.14S}{P}_{2}U + {0.18S}{P}_{2} + {2.1S}{T}_{1}^{4} - {0.68S}{T}_{1}^{3}S{T}_{2}
+$$
+
+$$
++ {0.12S}{T}_{1}^{3}U - {4.17S}{T}_{1}^{3} - {0.75S}{T}_{1}^{2}S{T}_{2}^{2} - {0.02S}{T}_{1}^{2}S{T}_{2}U + {1.18S}{T}_{1}^{2}S{T}_{2} - {0.08S}{T}_{1}^{2}U + {2.5S}{T}_{1}^{2} - {0.76S}{T}_{1}S{T}_{2}^{3}
+$$
+
+$$
+- {0.02S}{T}_{1}S{T}_{2}^{2}U + {1.29S}{T}_{1}S{T}_{2}^{2} + {0.09S}{T}_{1}S{T}_{2}U - {1.48S}{T}_{1}S{T}_{2} + {0.01S}{T}_{1}{U}^{2} + {0.01S}{T}_{1}U - {0.17S}{T}_{1} - {0.83S}{T}_{2}^{4}
+$$
+
+$$
+- {0.03S}{T}_{2}^{3}U + {1.42S}{T}_{2}^{3} + {0.11S}{T}_{2}^{2}U - {1.6S}{T}_{2}^{2} - {0.02S}{T}_{2}U - {0.17S}{T}_{2} - {0.02}{U}^{4} + {0.01}{U}^{3} + {0.02}{U}^{2} + {0.15U} + {0.62}
+$$
+
+The results of the regression analysis are shown in Table 4. According to these results, about ${98.2}\%$ of the variance in the total resistance of the formation systems can be explained by fleet speed, ${\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2},{\mathrm{{SP}}}_{1}$ , and ${\mathrm{{SP}}}_{2}$ $\left( {{\mathrm{R}}^{2} = {0.982}}\right.$ for the whole dataset). Speed has an estimate of 0.273, indicating a positive but relatively small effect on the dependent variable.
+
+The standard error is 0.836, which is relatively large and suggests high uncertainty in the estimate. The t-statistic is 0.327, falling below common critical values (such as 1.96), indicating that the effect of this feature may not be significant. Feature ${\mathrm{{ST}}}_{1}$ has an estimate of -0.171, reflecting a negative effect on the dependent variable. With a standard error of 0.157, the precision of this estimate is relatively high; however, the t-statistic of -1.089 is below common critical values, suggesting that the impact of ${\mathrm{{ST}}}_{1}$ may also be nonsignificant. Feature ${\mathrm{{ST}}}_{2}$ has an estimate of -0.167, again suggesting a negative effect. Its standard error of 0.157 indicates high precision in the estimate, while the t-statistic of -1.069 implies that this feature's impact may not be significant. Feature ${\mathrm{{SP}}}_{1}$ is estimated at -0.501, indicating a strong negative impact on the dependent variable.
+
+TABLE IV. ESTIMATION RESULTS OF THE FINAL REGRESSION MODEL
+
+| Variable | ${\mathbf{R}}^{2}$ | F-stat | Estimate | Std. error | t-stat |
+| --- | --- | --- | --- | --- | --- |
+| X | 0.982 | 168.045 | 0.603 | 0.089 | 6.759 |
+| ${\mathrm{C}}_{\mathrm{U}}$ | / | / | 0.273 | 0.836 | 0.327 |
+| ${\mathrm{C}}_{\mathrm{{ST}}1}$ | / | / | -0.171 | 0.157 | -1.09 |
+| ${\mathrm{C}}_{\mathrm{{ST}}2}$ | / | / | -0.167 | 0.157 | -1.07 |
+| ${\mathrm{C}}_{\mathrm{{SP}}1}$ | / | / | -0.501 | 0.156 | -3.205 |
+| ${\mathrm{C}}_{\mathrm{{SP}}2}$ | / | / | 0.128 | 0.159 | 0.806 |
+
+The standard error is 0.156, which is relatively small, suggesting high accuracy in the estimate. The t-statistic of -3.205 exceeds common critical values in magnitude, demonstrating that the effect of ${\mathrm{{SP}}}_{1}$ is significant. Feature ${\mathrm{{SP}}}_{2}$ has an estimate of 0.128, showing a positive but small effect on the dependent variable. Its standard error of 0.159 is relatively large, reflecting higher uncertainty in the estimate, and the t-statistic of 0.806 is below common critical values, indicating that the effect of ${\mathrm{{SP}}}_{2}$ is insignificant.
+
+
+Fig. 12. The standardized coefficients of ST and SP on total resistance in triangle formation.
+
+§ V. CONCLUSION
+
+This paper established a regression model to analyze the effects of speed, longitudinal distances $\left( {{\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2}}\right)$ , and transverse locations $\left( {{\mathrm{{SP}}}_{1},{\mathrm{{SP}}}_{2}}\right)$ on the total resistance of ship formations, using CFD data. The variation of total resistance in tandem formation with speed is observed, and the correlation analysis shows a strong correlation between speed and total resistance. The impacts of longitudinal spacing and transverse location on total resistance vary across formation configurations. For tandem formation, ${\mathrm{{ST}}}_{1}$ has a more significant influence on total resistance than ${\mathrm{{ST}}}_{2}$ . For parallel formation, the impacts of both ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ fluctuate slightly with growing ship speed. For triangle formation, the impact of SP on total resistance shows a strong positive correlation, while the impact of ST is weaker. The regression analysis revealed that about ${98.2}\%$ of the variance in the total resistance of the various ship formation systems is explained by formation speed, ${\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2},{\mathrm{{SP}}}_{1}$ , and ${\mathrm{{SP}}}_{2}$ .
+
+This paper investigates the impact of different factors on the total resistance of a formation. The estimation results indicate that more CFD data should be incorporated into the regression analysis, and more intelligent methods could be applied to the regression in future work.
+
+§ ACKNOWLEDGMENT
+
+The work presented in this study is financially supported by the National Natural Science Foundation of China under grants 52271364, 52101402, and 52271367.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/HFrWfFXFQo/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/HFrWfFXFQo/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..cdbf1708cd2ca031aa741aa7b4916648680afe4f
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/HFrWfFXFQo/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,597 @@
+# Lyapunov Matrix-Based Guaranteed Cost Dynamic Positioning Control for Unmanned Marine Vehicles With Time Delay
+
+${1}^{\text{st }}$ Xin Yang
+
+College of Navigation
+
+Dalian Maritime University
+
+Dalian, China
+
+yangxin3541@163.com
+
+${2}^{\text{nd }}$ Li-Ying Hao*
+
+College of
+
+Marine Electrical Engineering
+
+Dalian Maritime University
+
+Dalian, China
+
+haoliying_0305@163.com
+
+${3}^{\text{rd }}$ Tieshan Li*
+
+College of Automation Engineering
+
+University of Electronic Science
+
+and Technology of China
+
+Chengdu, China
+
+tieshanli@126.com
+
+${4}^{\text{th }}$ Yang Xiao
+
+Department of Computer Science
+
+The University of Alabama
+
+Tuscaloosa, USA
+
+yangxiao@ieee.org
+
+${5}^{\text{th }}$ Guoyong Liu
+
+College of
+
+Marine Electrical Engineering
+
+Dalian Maritime University
+
+Dalian, China
+
+liuguoyong0806@163.com
+
+Abstract-This paper presents a Lyapunov matrix-based guaranteed cost dynamic positioning controller for unmanned marine vehicles (UMVs) with time delays. A novel Lyapunov-Krasovskii functional (LKF) is introduced, which enhances the analysis of time delays and system states. The controller design leverages the LMI framework alongside Jensen's inequality to determine sufficient criteria for its feasibility, ensuring that the UMVs' state errors gradually reduce to zero and providing an adaptive ${H}_{\infty }$ performance guarantee. Additionally, the cost function is upper-bounded, and the effectiveness of the method is demonstrated through simulation results.
+
+Index Terms-Lyapunov matrix, time delays, guaranteed cost control (GCC), dynamic positioning (DP), unmanned marine vehicles (UMVs)
+
+## I. INTRODUCTION
+
+Unmanned Marine Vehicles (UMVs) play a pivotal role in enhancing maritime safety and security by performing high-risk operations effectively without compromising human lives, thereby revolutionizing search and rescue missions and coastal surveillance [1]-[3]. Compared to traditional anchor mooring, dynamic positioning (DP) offers a more versatile, precise, and environmentally friendly method for positioning vessels, making it particularly suitable for use in complex or dynamic marine environments [4]. Over the years, numerous control strategies have been proposed to ensure robust DP control in UMVs. For instance, [5] introduces a dynamic output feedback control method, specifically tailored for DP ships to counter denial of service attacks. In [6], the design of an adaptive sliding mode fault-tolerant compensation mechanism is presented, targeting the maintenance of DP control in UMVs despite thruster faults and unknown ocean disturbances. It is crucial to recognize that time delays are typically inevitable [7]-[9]. Consequently, there is an urgent need to develop a strategy to compensate for these time delays.
+
+In DP systems for UMVs, time delays due to network-mediated transmission of signals and control commands represent a significant challenge that often compromises system stability and performance [10], [11]. This issue has led to the development of various advanced time-delay compensation methods [12]-[14]. Among these, enhanced time-delay compensation approaches for autonomous underwater vehicles have shown promise [12]. In [13], model-free proportional-derivative controllers are innovatively incorporated into the Lyapunov-Krasovskii functional (LKF) framework to effectively counteract the impacts of delays. Advanced strategies utilizing Lyapunov matrix-based LKF methods have proven particularly effective: these approaches leverage comprehensive information about time delays and system states, providing a control strategy that efficiently accommodates time-delay systems. The primary motivation of this paper is to develop a complete LKF based on the Lyapunov matrix to mitigate the effects of time delays on UMVs.
+
+On another research front, guaranteed cost control (GCC) has been extensively studied [15]-[17]. This strategy offers the advantage of setting an upper limit on a specified performance index, ensuring that any degradation of system performance remains below this predefined cost threshold. As vessels often navigate complex and varied ocean environments, the impact of wind and wave disturbances becomes significant [17]. In response, [18] investigated a robust ${H}_{\infty }$ guaranteed cost controller aimed at enhancing path-following performance. The GCC method presented in [19] offers a way to reduce energy consumption for surface vessels in DP, increasing its practical applicability. These results have inspired our research into GCC theory, particularly its application to DP ships. Thus, how to design a guaranteed cost controller based on the Lyapunov matrix to achieve effective DP control for UMVs is the second research motivation of this paper.
+
+---
+
+This work was supported by the National Natural Science Foundation of China (Grant Nos. 51939001, 52171292, 61976033) and the Dalian Outstanding Young Talents Program (2022RJ05).
+
+* Corresponding authors. Emails: haoliying_0305@163.com; tieshanli@126.com
+
+---
+
+The primary objective of this paper is to design a Lyapunov matrix-based guaranteed cost dynamic positioning controller, utilizing the LMI method to ensure stability. The paper's main contributions are evaluated in comparison to recent advancements in the field.
+
+1) We propose a novel time-delay compensation method for UMVs that incorporates more detailed time-delay and state information by employing a Lyapunov matrix-based complete-type LKF, which reduces conservatism compared to conventional time-delay compensation techniques.
+
+2) A novel guaranteed cost DP control strategy is designed, which ensures the stability of DP systems for UMVs while providing an upper bound on a prespecified cost function.
+
+The remainder of this paper is structured as follows: Section II describes the UMVs model with time delays. Section III reviews basic concepts and preliminary results, which serve as the theoretical basis for the proposed LKF method based on the Lyapunov matrix. Section IV presents a complete-type LKF based on the Lyapunov matrix. Section V introduces the guaranteed cost dynamic positioning controller. Finally, Section VI presents simulations to illustrate the validity of the theoretical results.
+
+## II. UMVs MODELING AND PROBLEM DESCRIPTION
+
+## A. Dynamic modeling for UMVs
+
+The UMVs model typically employs a three-degrees-of-freedom motion equation to describe dynamic behavior in the marine environment; these degrees of freedom are yaw, surge, and sway. The dynamic equations of the UMVs are therefore often simplified and expressed in the following form [20]:
+
+$$
+\xi \dot{v}\left( t\right) + \mathcal{C}v\left( t\right) + \mathcal{D}\lambda \left( t\right) = \mathcal{G}u\left( t\right) , \tag{1}
+$$
+
+$$
+\dot{\lambda }\left( t\right) = \mathcal{S}\left( {\theta \left( t\right) }\right) v\left( t\right) , \tag{2}
+$$
+
+where matrix $\xi$ represents the inertia matrix, and the velocity vector $v\left( t\right) = {\left\lbrack {v}_{1}\left( t\right) ,{v}_{2}\left( t\right) ,{v}_{3}\left( t\right) \right\rbrack }^{\mathrm{T}}$ describes the ship’s motion in different directions, where ${v}_{1}\left( t\right)$ represents the surge velocity, ${v}_{2}\left( t\right)$ indicates the sway velocity, and ${v}_{3}\left( t\right)$ corresponds to the yaw rate. The position vector $\lambda \left( t\right) =$ ${\left\lbrack {x}_{o}\left( t\right) ,{y}_{o}\left( t\right) ,\theta \left( t\right) \right\rbrack }^{\mathrm{T}}$ is used to describe the ship’s position and orientation on the water surface, where ${x}_{o}\left( t\right)$ and ${y}_{o}\left( t\right)$ represent the coordinates of the ship in the horizontal plane, and $\theta \left( t\right)$ denotes the ship’s heading angle. The matrix $\mathcal{C}$ is the damping matrix. The matrix $\mathcal{D}$ represents the mooring moment matrix, which models external disturbances such as wind, waves, and ocean currents acting on the UMVs. The matrix $\mathcal{G}$ is the thrust allocation matrix, responsible for distributing thrust to the ship's propellers. Additionally, the rotation matrix $\mathcal{S}\left( {\theta \left( t\right) }\right)$ is given by:
+
+$$
+\mathcal{S}\left( {\theta \left( t\right) }\right) = \left\lbrack \begin{matrix} \cos \left( {\theta \left( t\right) }\right) & - \sin \left( {\theta \left( t\right) }\right) & 0 \\ \sin \left( {\theta \left( t\right) }\right) & \cos \left( {\theta \left( t\right) }\right) & 0 \\ 0 & 0 & 1 \end{matrix}\right\rbrack .
+$$
+
+For DP control of UMVs, where the yaw angle $\theta \left( t\right)$ remains small, the matrix $\mathcal{S}\left( {\theta \left( t\right) }\right)$ can be approximated by the identity matrix $I$ . We define the matrices ${\mathcal{A}}_{1} = - {\xi }^{-1}\mathcal{C},\mathcal{B} = {\xi }^{-1}\mathcal{G}$ , and $\mathcal{F} = - {\xi }^{-1}\mathcal{D}$ , and let $x\left( t\right) = {\left\lbrack {\lambda }^{\mathrm{T}}\left( t\right) ,{v}^{\mathrm{T}}\left( t\right) \right\rbrack }^{\mathrm{T}}$ . Thus, the dynamic equation of UMVs can be written as follows:
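As a quick numerical illustration of the small-yaw approximation, the sketch below builds $\mathcal{S}(\theta)$ and measures its distance from the identity (the function name and the 3-degree test angle are our own illustrative choices, not from the paper):

```python
import numpy as np

def rotation_matrix(theta):
    """Rotation matrix S(theta) appearing in the kinematic equation (2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# For a small yaw angle, S(theta) is close to the identity, which is
# what justifies the approximation S(theta) ~ I used in deriving (3).
theta = np.deg2rad(3.0)  # illustrative 3-degree yaw angle
deviation = np.linalg.norm(rotation_matrix(theta) - np.eye(3), ord=2)
```

For a 3-degree yaw the spectral-norm deviation is about 0.05, small enough that replacing $\mathcal{S}(\theta)$ by $I$ perturbs the kinematics only slightly.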
+
+$$
+\dot{x}\left( t\right) = {Ax}\left( t\right) + {B}_{1}u\left( t\right) + {Fg}\left( {t, v\left( t\right) }\right) + \varpi \left( t\right) , \tag{3}
+$$
+
+where $A = \left\lbrack \begin{matrix} 0 & I \\ 0 & {\mathcal{A}}_{1} \end{matrix}\right\rbrack ,{B}_{1} = \left\lbrack \begin{array}{l} 0 \\ \mathcal{B} \end{array}\right\rbrack , F = \left\lbrack \begin{matrix} 0 \\ \mathcal{F} \end{matrix}\right\rbrack$ , and $\varpi \left( t\right) \in {L}_{2}\lbrack 0,\infty )$ represents the disturbance. Defining the reference signal ${x}_{\text{ref }} = \left\lbrack \begin{array}{l} {\lambda }_{\text{ref }} \\ {v}_{\text{ref }} \end{array}\right\rbrack$ , the error vector is $e\left( t\right) = x\left( t\right) - {x}_{\text{ref }}$ . The error dynamics of the UMVs can be expressed as follows:
+
+$$
+\dot{e}\left( t\right) = {Ae}\left( t\right) + {B}_{1}u\left( t\right) + {Fg}\left( {t, e\left( t\right) }\right) + {B}_{2}\omega \left( t\right) . \tag{4}
+$$
+
+Let $e\left( t\right) \in {\mathbb{R}}^{n}$ denote the state vector and $u \in {\mathbb{R}}^{p}$ the control input vector. The term ${B}_{2}\omega \left( t\right)$ is defined as $A{x}_{\text{ref }} + \varpi \left( t\right)$ , where $\omega \left( t\right) = \left\lbrack \begin{array}{l} {x}_{\text{ref }} \\ \varpi \left( t\right) \end{array}\right\rbrack$ and ${B}_{2} = \left\lbrack \begin{array}{ll} A & I \end{array}\right\rbrack$ . Considering the unavoidable time delay during signal transmission, it follows from equation (4) that:
+
+$$
+\dot{e}\left( t\right) = {Ae}\left( t\right) + {A}_{1}e\left( {t - d}\right) + {B}_{1}u\left( t\right) + {Fg}\left( {e\left( t\right) , e\left( {t - d}\right) }\right) + {B}_{2}\omega \left( t\right) , \tag{5}
+$$
+
+where $d > 0$ represents the time delay, and $g : {\mathbb{R}}^{n} \times {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{m}$ is assumed to satisfy the following inequality.
+
+Assumption 1: Let matrices $\mathbb{N} > 0$ and $\mathbb{Y} > 0$ , where $\mathbb{N} \in$ ${\mathbb{R}}^{m \times m}$ and $\mathbb{Y} \in {\mathbb{R}}^{{2n} \times {2n}}$ . The nonlinear function $g\left( \cdot \right)$ satisfies the following inequality:
+
+$$
+{g}^{\mathrm{T}}\left( {e\left( t\right) , e\left( {t - d}\right) }\right) {\mathbb{N}}^{-1}g\left( {e\left( t\right) , e\left( {t - d}\right) }\right) \leq \left\lbrack \begin{array}{ll} {e}^{\mathrm{T}}\left( t\right) & {e}^{\mathrm{T}}\left( {t - d}\right) \end{array}\right\rbrack \mathbb{Y}{\left\lbrack \begin{array}{ll} {e}^{\mathrm{T}}\left( t\right) & {e}^{\mathrm{T}}\left( {t - d}\right) \end{array}\right\rbrack }^{\mathrm{T}}.
+$$
+
+Remark 1: Assumption 1 imposes a quadratic growth bound on the nonlinearity $g\left( \cdot \right)$ . When $e\left( t\right) = 0$ or $e\left( {t - d}\right) = 0$ , Assumption 1 reduces to Assumption 1 of reference [17]; it is therefore a more general form.
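Assumption 1 can be checked numerically for a concrete nonlinearity. The sketch below uses an illustrative componentwise tanh nonlinearity with $\mathbb{N} = I$ and $\mathbb{Y} = I$; since $\tanh^2(x) \le x^2$, the quadratic bound holds (these choices are ours, not the paper's $g(\cdot)$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
N = np.eye(n)        # illustrative choice of N > 0
Y = np.eye(2 * n)    # illustrative choice of Y > 0

def g(e_t, e_d):
    """Example nonlinearity: componentwise tanh of the current error."""
    return np.tanh(e_t)

# Monte-Carlo check of Assumption 1: lhs <= rhs for random arguments.
ok = True
for _ in range(1000):
    e_t = rng.normal(size=n)
    e_d = rng.normal(size=n)
    lhs = g(e_t, e_d) @ np.linalg.inv(N) @ g(e_t, e_d)
    stacked = np.concatenate([e_t, e_d])
    rhs = stacked @ Y @ stacked
    ok = ok and (lhs <= rhs + 1e-12)
```

A random sampling check like this cannot prove the bound, but it is a cheap way to catch a candidate $(\mathbb{N}, \mathbb{Y})$ pair that violates Assumption 1.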
+
+To bring both linear and angular velocities to zero and minimize the impact of external disturbances such as wind, waves, and currents, the output $\mathcal{Z}\left( t\right)$ can be formulated as follows:
+
+$$
+\mathcal{Z}\left( t\right) = {C}_{z}e\left( t\right) \tag{6}
+$$
+
+Definition 1: [21] The system is described by
+
+$$
+\dot{x}\left( t\right) = {A}_{d}x\left( t\right) + {B}_{d}\omega \left( t\right) ,
+$$
+
+$$
+\mathcal{Z}\left( t\right) = {C}_{d}x\left( t\right) , x\left( 0\right) = 0. \tag{7}
+$$
+
+Given a constant ${\gamma }_{0} > 0$ and $\omega \left( t\right) \in {L}_{2}\lbrack 0,\infty )$ , if for any $\epsilon > 0$ the following condition
+
+$$
+{\int }_{0}^{\infty }{\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) \mathrm{d}t \leq {\gamma }_{0}^{2}{\int }_{0}^{\infty }{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right) \mathrm{d}t + \epsilon ,
+$$
+
+is satisfied, then the system (7) is said to achieve an adaptive ${H}_{\infty }$ performance index that does not exceed ${\gamma }_{0}$ .
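Definition 1 can be illustrated on a scalar system. The sketch below integrates $\dot{x} = -x + \omega$, $z = x$, $x(0) = 0$ (our illustrative choice, whose $L_2$ gain is 1) and compares the energy ratio against ${\gamma}_0^2$:

```python
import numpy as np

# Scalar illustration of Definition 1: xdot = -x + w, z = x, x(0) = 0.
# The transfer function 1/(s+1) has H-infinity norm 1, so the output
# energy should not exceed gamma0^2 times the input energy for gamma0 = 1.
gamma0 = 1.0
dt, T = 1e-3, 10.0
t = np.arange(0.0, T, dt)
w = np.sin(t)                 # an arbitrary finite-energy disturbance
x = 0.0
z_energy = w_energy = 0.0
for wi in w:
    z_energy += x * x * dt    # accumulates the integral of z^T z
    w_energy += wi * wi * dt  # accumulates the integral of w^T w
    x += dt * (-x + wi)       # forward-Euler step of the dynamics
ratio = z_energy / w_energy
```

For this system the frequency response at the input frequency has gain $1/\sqrt{2}$, so the ratio settles near 0.5, comfortably inside the ${\gamma}_0^2 = 1$ bound of Definition 1.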
+
+Definition 2: The cost function related to system (5) is described as follows:
+
+$$
+J = {\int }_{0}^{\infty }\left\lbrack {{e}^{\mathrm{T}}\left( t\right) {\Omega e}\left( t\right) + {u}^{\mathrm{T}}\left( t\right) {\mathbb{R}}_{q}u\left( t\right) }\right\rbrack \mathrm{d}t. \tag{8}
+$$
+
+where ${\Omega }^{\mathrm{T}} = \Omega \geq 0$ and ${\mathbb{R}}_{q}^{\mathrm{T}} = {\mathbb{R}}_{q} \geq 0$ .
+
+A stabilizing controller $u\left( t\right)$ for system (5) is called a guaranteed cost controller if it ensures that $J \leq {J}^{ * }$ , where ${J}^{ * }$ is a positive scalar. The value ${J}^{ * }$ is known as the guaranteed cost.
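The cost (8) can be evaluated numerically for a simple closed loop. The scalar system, gains, and weights below are illustrative choices of ours, picked so that $J$ has a known closed form against which the quadrature can be checked:

```python
import numpy as np

# Cost (8) for a scalar closed loop xdot = (a - b*k) x with u = -k x:
# x(t) = x0 * exp((a - b*k) t), so
# J = (Omega + Rq*k^2) * x0^2 / (2*(b*k - a))  when b*k > a.
a, b, k = 0.5, 1.0, 2.0          # illustrative stable closed loop
Omega, Rq, x0 = 1.0, 0.5, 1.0
dt, T = 1e-4, 20.0
t = np.arange(0.0, T, dt)
x = x0 * np.exp((a - b * k) * t)
u = -k * x
J_numeric = np.sum((Omega * x**2 + Rq * u**2) * dt)   # Riemann sum of (8)
J_closed = (Omega + Rq * k**2) * x0**2 / (2.0 * (b * k - a))
```

With these numbers the closed form gives $J = 1$, and the Riemann sum agrees to a few parts in ten thousand, confirming that (8) is a plain quadratic state-plus-input energy.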
+
+## B. Control Objective
+
+For UMVs (5) affected by time delays, this paper proposes a guaranteed cost DP controller based on the Lyapunov matrix. The controller is designed to drive the state error of the UMVs to converge asymptotically to zero, while also satisfying the specified ${H}_{\infty }$ performance criteria and guaranteeing an upper limit on the predefined cost function.
+
+## III. PRELIMINARIES
+
+We will construct a complete-type LKF for UMVs (5) based on the Lyapunov matrix. In the following subsection, we begin by defining the Lyapunov matrix.
+
+## A. Lyapunov matrix
+
+We will now present relevant concepts related to linear time-delay systems as follows [22]:
+
+$$
+\dot{e}\left( t\right) = {Ae}\left( t\right) + {A}_{1}e\left( {t - d}\right) ,\quad e\left( \iota \right) = \phi \left( \iota \right) ,\iota \in \left\lbrack {-d,0}\right\rbrack , \tag{9}
+$$
+
+where $e\left( t\right) \in {\mathbb{R}}^{n}$ represents the state vector, $d > 0$ is the time delay. $A,{A}_{1} \in {\mathbb{R}}^{n \times n}$ are system matrices.
+
+Definition 3: [22] Given a matrix $\mathcal{P} > 0$ , if the matrix $Q : \left\lbrack {-d, d}\right\rbrack \rightarrow {\mathbb{R}}^{n \times n}$ meets the following conditions:
+
+$$
+\dot{Q}\left( \pi \right) = Q\left( \pi \right) A + Q\left( {\pi - d}\right) {A}_{1},
+$$
+
+$$
+Q\left( {-\pi }\right) = {Q}^{\mathrm{T}}\left( \pi \right) ,
+$$
+
+$$
+- \mathcal{P} = Q\left( 0\right) A + Q\left( {-d}\right) {A}_{1} + {A}^{\mathrm{T}}Q\left( 0\right) + {A}_{1}^{\mathrm{T}}Q\left( d\right) , \tag{10}
+$$
+
+then $Q\left( \cdot \right)$ is called a Lyapunov matrix of system (9) associated with the matrix $\mathcal{P}$ .
+
+Definition 4: [22] If system (9) is asymptotically stable, then there exists a Lyapunov matrix $Q\left( \cdot \right)$ associated with the matrix $\mathcal{P}$ for system (9).
+
+Lemma 1: Suppose there exist matrices $H = {H}^{\mathrm{T}} > 0$ , $L = {L}^{\mathrm{T}} > 0$ , and ${K}_{11} \in {\mathbb{R}}^{p \times n}$ such that the following LMI condition is satisfied:
+
+$$
+\left\lbrack \begin{matrix} {\Lambda }_{2} & {A}_{1}X \\ {\left( {A}_{1}X\right) }^{\mathrm{T}} & - U \end{matrix}\right\rbrack < 0 \tag{11}
+$$
+
+where ${\Lambda }_{2} = {AX} - {B}_{1}{Y}_{1} + {\left( AX - {B}_{1}{Y}_{1}\right) }^{\mathrm{T}} + U, X = {H}^{-1}$ , ${Y}_{1} = {K}_{11}{H}^{-1}$ , and $U = {H}^{-1}L{H}^{-1}$ . Then the controller ${u}_{1}\left( t\right) = - {K}_{11}e\left( t\right)$ renders the closed-loop system $\dot{e}\left( t\right) = \left( {A - {B}_{1}{K}_{11}}\right) e\left( t\right) + {A}_{1}e\left( {t - d}\right)$ asymptotically stable.
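For fixed candidate matrices, feasibility of LMI (11) reduces to an eigenvalue check of the block matrix. The sketch below runs this check for an illustrative scalar delay system (all numbers are our own, not from the paper):

```python
import numpy as np

# Feasibility check of LMI (11) for the scalar delay system
# edot = 0.5 e(t) + 0.2 e(t-d) + u, with candidates H = L = 1
# (so X = U = 1) and gain K11 = 2 (so Y1 = K11 * H^{-1} = 2).
A  = np.array([[0.5]])
A1 = np.array([[0.2]])
B1 = np.array([[1.0]])
X  = np.eye(1)                    # X = H^{-1}
U  = np.eye(1)                    # U = H^{-1} L H^{-1}
Y1 = np.array([[2.0]])            # Y1 = K11 H^{-1}

Lam2 = A @ X - B1 @ Y1 + (A @ X - B1 @ Y1).T + U
lmi = np.block([[Lam2, A1 @ X],
                [(A1 @ X).T, -U]])
eigs = np.linalg.eigvalsh(lmi)    # symmetric, so eigvalsh applies
feasible = bool(np.all(eigs < 0)) # (11) holds iff all eigenvalues < 0
```

Here the block matrix evaluates to $\begin{bmatrix} -2 & 0.2 \\ 0.2 & -1 \end{bmatrix}$, whose eigenvalues are both negative, so this candidate gain passes the test. In practice one would search for $H$, $L$, $K_{11}$ with an SDP solver rather than check fixed guesses.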
+
+Proof 1: Select the Lyapunov function:
+
+$$
+{V}_{c}\left( {e\left( t\right) }\right) = {e}^{\mathrm{T}}\left( t\right) {He}\left( t\right) + {\int }_{t - d}^{t}{e}^{\mathrm{T}}\left( \theta \right) {Le}\left( \theta \right) \mathrm{d}\theta .
+$$
+
+We can derive:
+
+$$
+{\left. \frac{\mathrm{d}{V}_{c}\left( {e\left( t\right) }\right) }{\mathrm{d}t}\right| }_{\left( 9\right) } = {\Lambda }_{0}^{\mathrm{T}}{\widetilde{\Omega }}_{1}{\Lambda }_{0},
+$$
+
+where
+
+$$
+{\Lambda }_{0} = {\left\lbrack {e}^{\mathrm{T}}\left( t\right) ,{e}^{\mathrm{T}}\left( t - d\right) \right\rbrack }^{\mathrm{T}},\quad {\widetilde{\Omega }}_{1} = \left\lbrack \begin{matrix} H\left( {A - {B}_{1}{K}_{11}}\right) + {\left( A - {B}_{1}{K}_{11}\right) }^{\mathrm{T}}H + L & H{A}_{1} \\ {A}_{1}^{\mathrm{T}}H & - L \end{matrix}\right\rbrack ,
+$$
+
+and the derivative is taken along the closed-loop system $\dot{e}\left( t\right) = \left( {A - {B}_{1}{K}_{11}}\right) e\left( t\right) + {A}_{1}e\left( {t - d}\right)$ . Pre- and post-multiplying ${\widetilde{\Omega }}_{1}$ by $\operatorname{diag}\left( {X, X}\right)$ with $X = {H}^{-1},{Y}_{1} = {K}_{11}{H}^{-1}$ , and $U = {H}^{-1}L{H}^{-1}$ yields the matrix on the left-hand side of (11), so LMI (11) is equivalent to ${\widetilde{\Omega }}_{1} < 0$ . By Lyapunov stability theory, the controller ${u}_{1}\left( t\right) = - {K}_{11}e\left( t\right)$ therefore guarantees asymptotic stability.
+
+## IV. A COMPLETE-TYPE LKF
+
+We construct an LKF $\mathfrak{V}\left( \cdot \right)$ :
+
+$$
+\mathfrak{V}\left( {e\left( t\right) }\right) = {\mathfrak{V}}_{1}\left( {e\left( t\right) }\right) + {\mathfrak{V}}_{2}\left( {e\left( t\right) }\right) , e \in {C}_{p}\left( {\left\lbrack {-d,0}\right\rbrack ,{\mathbb{R}}^{n}}\right) \tag{12}
+$$
+
+where
+
+$$
+\begin{aligned} {\mathfrak{V}}_{1}\left( {e\left( t\right) }\right) = {} & {e}^{\mathrm{T}}\left( t\right) Q\left( 0\right) e\left( t\right) + 2{e}^{\mathrm{T}}\left( t\right) {\Gamma }_{1}\left( {e\left( t\right) }\right) \\ & + {\int }_{-d}^{0}{\int }_{-d}^{0}{e}^{\mathrm{T}}\left( {t + {\tau }_{1}}\right) {A}_{1}^{\mathrm{T}}Q\left( {{\tau }_{1} - {\tau }_{2}}\right) {A}_{1}e\left( {t + {\tau }_{2}}\right) \mathrm{d}{\tau }_{1}\mathrm{\;d}{\tau }_{2}, \\ {\mathfrak{V}}_{2}\left( {e\left( t\right) }\right) = {} & {\int }_{-d}^{0}{\int }_{\tau }^{0}{e}^{\mathrm{T}}\left( {t + s}\right) {A}_{1}^{\mathrm{T}}{Q}^{\mathrm{T}}\left( {-d - \tau }\right) \mathcal{R}Q\left( {-d - \tau }\right) {A}_{1}e\left( {t + s}\right) \mathrm{d}s\mathrm{\;d}\tau \\ & + {\int }_{-d}^{0}{e}^{\mathrm{T}}\left( {t + \tau }\right) {\mathcal{Q}}_{1}e\left( {t + \tau }\right) \mathrm{d}\tau , \end{aligned} \tag{13}
+$$
+
+where ${\Gamma }_{1}\left( {e\left( t\right) }\right) = {\int }_{-d}^{0}Q\left( {-d - \tau }\right) {A}_{1}e\left( {t + \tau }\right) \mathrm{d}\tau$ , and the matrices $\mathcal{R},{\mathcal{Q}}_{1}$ satisfy ${\mathcal{R}}^{\mathrm{T}} = \mathcal{R} > 0,{\mathcal{Q}}_{1}^{\mathrm{T}} = {\mathcal{Q}}_{1} > 0$ .
+
+## V. CONTROLLER DESIGN AND STABILITY ANALYSIS
+
+In this section, we will provide a detailed explanation of the controller design process and conduct a systematic analysis of its stability.
+
+## A. Controller Design
+
+We propose the following guaranteed cost DP controller for UMVs in (5):
+
+$$
+\begin{aligned} u\left( t\right) & = {u}_{1}\left( t\right) + {u}_{2}\left( t\right) , \\ {u}_{1}\left( t\right) & = - {K}_{11}e\left( t\right) , \\ {u}_{2}\left( t\right) & = \frac{1}{2}{K}_{21}{B}_{1}^{\mathrm{T}}\left\lbrack {Q\left( 0\right) e\left( t\right) + {\Gamma }_{1}\left( {e\left( t\right) }\right) }\right\rbrack + \frac{1}{2}{K}_{22}e\left( {t - d}\right) , \end{aligned} \tag{14}
+$$
+
+where ${K}_{11},{K}_{21},{K}_{22}$ are feedback gain matrices. ${K}_{11}$ is already determined in Lemma 1, while ${K}_{21}$ and ${K}_{22}$ will be provided in Theorem 1.
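Evaluating controller (14) online requires the distributed term ${\Gamma}_1(e(t))$, which can be approximated by quadrature over a stored error history. The sketch below shows this structure; the dimensions, gains, and the placeholder `Q_of` for the Lyapunov matrix $Q(\cdot)$ are all illustrative stand-ins of ours, not the quantities computed in the paper:

```python
import numpy as np

# e_hist holds samples of e(t + tau) on a grid of tau in [-d, 0],
# oldest first, newest (the current error) last.
d, m = 1.0, 101                        # delay and grid size (illustrative)
taus = np.linspace(-d, 0.0, m)
n, p = 2, 1
A1 = 0.1 * np.eye(n)
B1 = np.ones((n, p))
K11 = np.ones((p, n)); K21 = np.eye(p); K22 = np.ones((p, n))
Q_of = lambda s: np.eye(n)             # placeholder for Q(.) of Def. 3

def control(e_hist):
    e_t, e_td = e_hist[-1], e_hist[0]
    # Gamma_1(e(t)) = int_{-d}^{0} Q(-d - tau) A1 e(t + tau) d tau,
    # approximated with trapezoid weights on the tau grid.
    integrand = np.stack([Q_of(-d - tau) @ A1 @ e
                          for tau, e in zip(taus, e_hist)])
    w = np.full(m, d / (m - 1)); w[0] *= 0.5; w[-1] *= 0.5
    Gamma1 = (w[:, None] * integrand).sum(axis=0)
    u1 = -K11 @ e_t
    u2 = 0.5 * K21 @ B1.T @ (Q_of(0.0) @ e_t + Gamma1) + 0.5 * K22 @ e_td
    return u1 + u2
```

The point of the sketch is that (14) is implementable from a finite delay buffer: the only non-standard ingredient is the quadrature for ${\Gamma}_1$, whose grid resolution trades accuracy against per-step cost.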
+
+Theorem 1: Consider the UMVs (5) under Assumption 1. The guaranteed cost DP controller is defined by (14). For the given positive definite matrices $\mathbb{N} \in {\mathbb{R}}^{m \times m},\mathbb{Y} \mathrel{\text{:=}}$ $\left\lbrack \begin{array}{ll} {\mathbb{Y}}_{11} & {\mathbb{Y}}_{12} \\ {\mathbb{Y}}_{12}^{\mathrm{T}} & {\mathbb{Y}}_{22} \end{array}\right\rbrack \in {\mathbb{R}}^{{2n} \times {2n}},\mathcal{P} \in {\mathbb{R}}^{n \times n}$ , and a positive constant ${\gamma }_{0}$ , if there exist positive definite matrices $\mathcal{R},{\mathcal{Q}}_{1} \in {\mathbb{R}}^{n \times n}$ , and matrices ${K}_{21} \in {\mathbb{R}}^{p \times p},{K}_{22} \in {\mathbb{R}}^{p \times n}$ such that $\mathcal{P} - {\mathcal{Q}}_{1} - {\mathcal{P}}_{1} > 0$ and the following inequality holds,
+
+$$
+E \mathrel{\text{:=}} \left\lbrack \begin{matrix} \mathcal{P} + {\mathcal{Q}}_{1} + {\mathcal{P}}_{1} - {E}_{1} & {E}_{2} & {E}_{3} \\ {E}_{2}^{\mathrm{T}} & - {\mathcal{Q}}_{1} + {\mathbb{Y}}_{22} & \frac{1}{2}{K}_{22}^{\mathrm{T}}{B}_{1}^{\mathrm{T}} \\ {E}_{3}^{\mathrm{T}} & \frac{1}{2}{B}_{1}{K}_{22} & {E}_{4} \end{matrix}\right\rbrack < 0, \tag{15}
+$$
+
+where
+
+$$
+\begin{aligned} {E}_{1} = {} & \frac{1}{2}Q\left( 0\right) {B}_{1}\left( {{K}_{21} + {K}_{21}^{\mathrm{T}}}\right) {B}_{1}^{\mathrm{T}}Q\left( 0\right) - {\mathbb{Y}}_{11} - {C}_{z}^{\mathrm{T}}{C}_{z} \\ & - {\gamma }_{0}^{-2}Q\left( 0\right) {B}_{2}{B}_{2}^{\mathrm{T}}Q\left( 0\right) - Q\left( 0\right) F\mathbb{N}{F}^{\mathrm{T}}Q\left( 0\right) , \end{aligned}
+$$
+
+$$
+{E}_{2} = \frac{1}{2}Q\left( 0\right) {B}_{1}{K}_{22} + {\mathbb{Y}}_{12},
+$$
+
+$$
+{E}_{3} = Q\left( 0\right) {B}_{1}{K}_{21}{B}_{1}^{\mathrm{T}} + Q\left( 0\right) F\mathbb{N}{F}^{\mathrm{T}} + {\gamma }_{0}^{-2}Q\left( 0\right) {B}_{2}{B}_{2}^{\mathrm{T}},
+$$
+
+$$
+{E}_{4} = - \frac{\mathcal{R}}{d} + {B}_{1}{K}_{21}{B}_{1}^{\mathrm{T}} + F\mathbb{N}{F}^{\mathrm{T}} + {\gamma }_{0}^{-2}{B}_{2}{B}_{2}^{\mathrm{T}},
+$$
+
+then the state error of the UMVs in system (5) asymptotically converges to zero, while an ${H}_{\infty }$ norm bound of ${\gamma }_{0}$ is maintained.
+
+Proof 2: The time derivative of $\mathfrak{V}\left( {e\left( t\right) }\right)$ along the trajectory of the UMVs (5) can be calculated as follows:
+
+$$
+\begin{aligned} & {\left. \frac{\mathrm{d}\mathfrak{V}\left( {e\left( t\right) }\right) }{\mathrm{d}t}\right| }_{\left( 5\right) } + {\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) - {\gamma }_{0}^{2}{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right) \\ = {} & - {U}_{0}\left( {e\left( t\right) }\right) + {\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) - {\gamma }_{0}^{2}{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right) \\ & + 2{g}^{\mathrm{T}}\left( {e\left( t\right) , e\left( {t - d}\right) }\right) {F}^{\mathrm{T}}\left\lbrack {Q\left( 0\right) e\left( t\right) + {\Gamma }_{1}\left( {e\left( t\right) }\right) }\right\rbrack \\ & + 2{\left\lbrack Q\left( 0\right) e\left( t\right) + {\Gamma }_{1}\left( e\left( t\right) \right) \right\rbrack }^{\mathrm{T}}{B}_{2}\omega \left( t\right) + 2{\left\lbrack Q\left( 0\right) e\left( t\right) + {\Gamma }_{1}\left( e\left( t\right) \right) \right\rbrack }^{\mathrm{T}}{B}_{1}u\left( t\right) , \end{aligned} \tag{16}
+$$
+
+where
+
+$$
+\begin{aligned} {U}_{0}\left( e\right) = {} & {e}^{\mathrm{T}}\left( t\right) \left( {\mathcal{P} - {\mathcal{Q}}_{1} - {\mathcal{P}}_{1}}\right) e\left( t\right) + {e}^{\mathrm{T}}\left( {t - d}\right) {\mathcal{Q}}_{1}e\left( {t - d}\right) \\ & + {\int }_{-d}^{0}{e}^{\mathrm{T}}\left( {t + \tau }\right) {A}_{1}^{\mathrm{T}}{Q}^{\mathrm{T}}\left( {-d - \tau }\right) \mathcal{R}Q\left( {-d - \tau }\right) {A}_{1}e\left( {t + \tau }\right) \mathrm{d}\tau , \\ {\mathcal{P}}_{1} = {} & {\int }_{-d}^{0}{A}_{1}^{\mathrm{T}}{Q}^{\mathrm{T}}\left( {-d - \tau }\right) \mathcal{R}Q\left( {-d - \tau }\right) {A}_{1}\mathrm{\;d}\tau . \end{aligned}
+$$
+
+Substituting (14) into (16), we have
+
+$$
+{\left. \frac{\mathrm{d}\mathfrak{V}\left( {e\left( t\right) }\right) }{\mathrm{d}t}\right| }_{\left( 5\right) } + {\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) - {\gamma }_{0}^{2}{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right) \leq {\Gamma }^{\mathrm{T}}\left( t\right) {E\Gamma }\left( t\right) , \tag{17}
+$$
+
+where
+
+$$
+\Gamma \left( t\right) = {\left\lbrack {e}^{\mathrm{T}}\left( t\right) \;{e}^{\mathrm{T}}\left( t - d\right) \;{\Gamma }_{1}^{\mathrm{T}}\left( e\left( t\right) \right) \right\rbrack }^{\mathrm{T}},
+$$
+
+and $E$ , together with its blocks ${E}_{1},{E}_{2},{E}_{3},{E}_{4}$ , is exactly the matrix defined in (15).
+
+Since $E < 0$ , it follows that
+
+$$
+{\left. \frac{\mathrm{d}\mathfrak{V}\left( {e\left( t\right) }\right) }{\mathrm{d}t}\right| }_{\left( 5\right) } + {\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) - {\gamma }_{0}^{2}{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right) \leq 0. \tag{18}
+$$
+
+If the conditions of Theorem 1 hold, then ${\int }_{{t}_{0}}^{t}{\Gamma }^{\mathrm{T}}\left( \tau \right) {E\Gamma }\left( \tau \right) \mathrm{d}\tau < 0$ is satisfied, and integrating (18) over $\left\lbrack {{t}_{0}, t}\right\rbrack$ yields:
+
+$$
+0 \leq {\epsilon }_{\min }\parallel e\left( t\right) {\parallel }^{2} \leq \mathfrak{V}\left( e\right) \leq \mathfrak{V}\left( {e\left( {t}_{0}\right) }\right) - {\int }_{{t}_{0}}^{t}{\mathcal{Z}}^{\mathrm{T}}\left( \tau \right) \mathcal{Z}\left( \tau \right) \mathrm{d}\tau + {\gamma }_{0}^{2}{\int }_{{t}_{0}}^{t}{\omega }^{\mathrm{T}}\left( \tau \right) \omega \left( \tau \right) \mathrm{d}\tau ,\quad t > {t}_{0}. \tag{19}
+$$
+
+Clearly
+
+$$
+\mathop{\lim }\limits_{{t \rightarrow \infty }}{\int }_{{t}_{0}}^{t}{\Gamma }^{\mathrm{T}}\left( \tau \right) {E\Gamma }\left( \tau \right) \mathrm{d}\tau \leq \mathfrak{V}\left( {e\left( {t}_{0}\right) }\right) . \tag{20}
+$$
+
+We obtain
+
+$$
+\mathop{\lim }\limits_{{t \rightarrow \infty }}\parallel e\left( t\right) \parallel = 0. \tag{21}
+$$
+
+By integrating equation (18) from 0 to $\infty$ , we obtain
+
+$$
+{\int }_{0}^{\infty }{\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) \mathrm{d}t \leq {\gamma }_{0}^{2}{\int }_{0}^{\infty }{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right) \mathrm{d}t + \mathfrak{V}\left( 0\right) . \tag{22}
+$$
+
+## B. Guaranteed Cost Analysis
+
+When the disturbance $\omega \left( t\right)$ is absent, combining (8), (14), and (18) yields:
+
+$$
+{\left. \frac{\mathrm{d}\mathfrak{V}\left( {e\left( t\right) }\right) }{\mathrm{d}t}\right| }_{\left( 5\right) } + {e}^{\mathrm{T}}\left( t\right) {\Omega e}\left( t\right) + {u}^{\mathrm{T}}\left( t\right) {\mathbb{R}}_{q}u\left( t\right) \leq {\Gamma }^{\mathrm{T}}\left( t\right) \left( {E + \operatorname{diag}\left( {\Omega ,0,0}\right) + \frac{1}{4}{O}^{\mathrm{T}}{\mathbb{R}}_{q}O}\right) \Gamma \left( t\right) , \tag{23}
+$$
+
+where
+
+$$
+O = \left\lbrack \begin{array}{lll} - \left( {\mathbb{Y} + {K}_{21}}\right) {B}_{1}^{\mathrm{T}}Q\left( 0\right) & {K}_{22} & - \left( {\mathbb{Y} + {K}_{21}}\right) {B}_{1}^{\mathrm{T}} \end{array}\right\rbrack .
+$$
+
+We have
+
+$$
+\left\lbrack \begin{matrix} E + \operatorname{diag}\left( {\Omega ,0,0}\right) & {O}^{\mathrm{T}} \\ O & - 4{\mathbb{R}}_{q}^{-1} \end{matrix}\right\rbrack < 0
+$$
+
+Hence,
+
+$$
+{\int }_{0}^{\infty }\left\lbrack {{e}^{\mathrm{T}}\left( t\right) {\Omega e}\left( t\right) + {u}^{\mathrm{T}}\left( t\right) {\mathbb{R}}_{q}u\left( t\right) }\right\rbrack \mathrm{d}t \leq {J}^{ * }.
+$$
+
+where ${J}^{ * } = \mathfrak{V}\left( {e\left( 0\right) }\right)$ , with $\mathfrak{V}\left( \cdot \right)$ defined in (12).
+
+## VI. Simulation Example
+
+The proposed control method's effectiveness is demonstrated through a standard floating production vessel model, as referenced in [23]. The matrices $\xi ,\mathcal{C}$ , and $\mathcal{D}$ are specified in [23], and the thruster configuration matrix $\mathcal{G}$ is derived from [24].
+
+The initial condition is given as $\phi \left( s\right) = {\left\lbrack \begin{array}{llllll} 0 & 0 & 0 & 0 & 0 & {0.2} \end{array}\right\rbrack }^{\mathrm{T}}$ , with the reference signal set to ${x}_{\text{ref }} = {\left\lbrack \begin{array}{llllll} {0.01} & -{0.01} & {0.05} & {0.01} & {0.04} & {0.01} \end{array}\right\rbrack }^{\mathrm{T}}$ . The time delay is $d = 1$ , and the ${H}_{\infty }$ performance index is ${\gamma }_{0} = 2$ .
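A simulation of this kind integrates the delayed closed-loop dynamics with a delay buffer. The sketch below uses forward Euler on an illustrative scalar system, not the vessel matrices of [23], purely to show the buffer mechanics:

```python
import numpy as np

# Forward-Euler integration of a delayed error system
# edot = A e(t) + A1 e(t-d) + B1 u(t), u = -K e(t),
# with a constant initial function phi on [-d, 0].
# All coefficients are illustrative scalars, ours.
A, A1, B1, K = 0.5, 0.2, 1.0, 2.0
d, dt, T = 1.0, 1e-3, 30.0
steps, delay_steps = int(T / dt), int(d / dt)
hist = np.full(delay_steps + 1, 0.2)   # buffer of past states, phi = 0.2
traj = np.empty(steps)
e = hist[-1]
for k in range(steps):
    e_delayed = hist[0]                # e(t - d) from the buffer head
    u = -K * e
    e = e + dt * (A * e + A1 * e_delayed + B1 * u)
    hist = np.append(hist[1:], e)      # shift the delay buffer
    traj[k] = e
```

With these numbers the closed loop $\dot{e} = -1.5e(t) + 0.2e(t-d)$ is delay-independently stable, and the trajectory decays monotonically to zero, mirroring the qualitative behavior reported for the vessel in Figures 3-4.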
+
+The controller gain matrix ${K}_{11}$ is obtained by solving the LMI (11) from Lemma 1, as follows:
+
+$$
+{K}_{11} = \left\lbrack \begin{matrix} {3.7401} & -{1.0550} & {1.6703} & {3.8794} & -{0.4071} & {0.6533} \\ {3.5625} & -{0.3782} & {0.8900} & {3.8305} & {0.0888} & {0.2145} \\ -{1.8457} & {7.7381} & -{7.8852} & -{0.4836} & {5.9344} & -{4.2105} \\ -{1.7986} & {7.5585} & -{7.6782} & -{0.4706} & {5.8028} & -{4.0941} \\ -{0.2156} & {1.5274} & -{0.7243} & -{0.0351} & {1.3831} & -{0.1833} \\ -{0.4379} & {2.3744} & -{1.7009} & -{0.0963} & {2.0038} & -{0.7325} \end{matrix}\right\rbrack .
+$$
+
+We set the matrix $\mathcal{Q} = I$ . The $(i, j)$ -th element of the matrix $Q\left( \theta \right)$ , denoted ${Q}_{ij}\left( \theta \right)$ , is determined using the method proposed in [22]. Figures 1-2 show the values of ${Q}_{ij}\left( \theta \right)$ for $\theta \in \left\lbrack {0,1}\right\rbrack$ .
+
+Finally, by solving LMI (15) as described in Theorem 1, the controller gain matrices ${K}_{21}$ and ${K}_{22}$ are computed as:
+
+$$
+{K}_{21} = 1 \times {10}^{4}\left\lbrack \begin{matrix} {0.0284} & {0.0561} & {0.0446} & {0.0381} & -{0.0108} & -{0.0257} \\ -{0.0249} & -{0.0535} & -{0.0615} & -{0.0140} & {0.0119} & {0.0273} \\ -{0.0160} & {0.0215} & {0.0366} & {0.0723} & -{0.0709} & -{0.0315} \\ {0.0187} & -{0.0010} & -{0.0542} & -{0.0511} & {0.0035} & {0.1249} \\ -{0.2113} & {0.2496} & -{0.1101} & -{0.0808} & -{0.9459} & {1.2040} \\ -{0.0871} & {0.0356} & -{0.0328} & {0.1207} & {0.5283} & -{0.6940} \end{matrix}\right\rbrack .
+$$
+
+
+
+Figure 1. Lyapunov matrix ${Q}_{ij}\left( \theta \right)$ , $\left( {i = 1,2,3;\;j = 1,\ldots ,6}\right)$ .
+
+
+
+Figure 2. Lyapunov matrix ${Q}_{ij}\left( \theta \right)$ , $\left( {i = 4,5,6;\;j = 1,\ldots ,6}\right)$ .
+
+$$
+{K}_{22} = \left\lbrack \begin{matrix} -{15.4416} & {8.9036} & {66.6063} & {35.6011} & {15.1773} & -{22.0347} \\ -{18.9989} & -{43.3441} & -{101.0469} & -{70.0417} & -{49.6179} & -{12.4059} \\ {22.5784} & {53.9648} & {21.5477} & -{26.9017} & {10.6883} & {43.2947} \\ -{82.2859} & -{141.7415} & {16.5537} & -{69.8399} & -{34.0587} & -{92.0165} \\ -{118.7051} & -{303.3277} & {414.2256} & -{396.6715} & {76.7366} & -{41.8016} \\ {118.3731} & {331.0541} & -{512.3389} & {433.3628} & -{113.3949} & {30.4866} \end{matrix}\right\rbrack .
+$$
+
+Figures 3-4 illustrate the trajectories of the position error, yaw angle error, and velocity error for UMVs (5). Figure 5 shows the control inputs produced by the controller as defined in (14).
+
+
+
+Figure 3. Response curves of UMVs position and yaw angle error.
+
+
+
+Figure 4. Response curves of UMVs velocity error.
+
+
+
+Figure 5. The comparison of response curves for $u\left( t\right)$ .
+
+In Figure 3, it is clear that the error curves under the proposed control initially exhibit small fluctuations before gradually converging to zero. This demonstrates the effectiveness of the proposed control strategy. Figure 5 illustrates the response curves of the guaranteed cost DP controller $u\left( t\right)$ .
+
+## CONCLUSION
+
+In this paper, we have addressed the guaranteed cost dynamic positioning control problem for UMVs with time delays. First, we proposed a complete-type LKF for UMVs with time delays, which reduces conservatism. Furthermore, a novel approach for designing a guaranteed cost dynamic positioning controller for DP systems was proposed; the specific form of the controller is derived from feasible solutions of LMIs. The proposed method was validated through simulation, demonstrating its effectiveness. Future work will focus on extending the control strategy to systems with time-varying delays, further enhancing the robustness of DP control for UMVs.
+
+## REFERENCES
+
+[1] X. Hu, G. Zhu, Y. Ma, Z. Li, R. Malekian, and M. Á. Sotelo, "Event-triggered adaptive fuzzy setpoint regulation of surface vessels with unmeasured velocities under thruster saturation constraints," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 8, pp. 13463-13472, 2021.
+
+[2] V. Bertram, "Unmanned surface vehicles-a survey," Skibsteknisk Selskab, Copenhagen, Denmark, vol. 1, pp. 1-14, 2008.
+
+[3] L.-Y. Hao, H. Zhang, T.-S. Li, B. Lin, and C. P. Chen, "Fault tolerant control for dynamic positioning of unmanned marine vehicles based on T-S fuzzy model with unknown membership functions," IEEE Transactions on Vehicular Technology, vol. 70, no. 1, pp. 146-157, 2021.
+
+[4] Y.-L. Wang, Q.-L. Han, M.-R. Fei, and C. Peng, "Network-based T-S fuzzy dynamic positioning controller design for unmanned marine vehicles," IEEE Transactions on Cybernetics, vol. 48, no. 9, pp. 2750-2763, 2018.
+
+[5] Z. Ye, D. Zhang, and Z.-G. Wu, "Adaptive event-based tracking control of unmanned marine vehicle systems with DoS attack," Journal of the Franklin Institute, vol. 358, no. 3, pp. 1915-1939, 2021.
+
+[6] L.-Y. Hao, H. Zhang, W. Yue, and H. Li, "Fault-tolerant compensation control based on sliding mode technique of unmanned marine vehicles subject to unknown persistent ocean disturbances," International Journal of Control, Automation and Systems, vol. 18, no. 3, pp. 739-752, 2020.
+
+[7] X. Yang, Y. Wang, and X. Zhang, "Lyapunov matrix-based method to guaranteed cost control for a class of delayed continuous-time nonlinear systems," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 52, no. 1, pp. 554-560, 2020.
+
+[8] X. Wang and G.-H. Yang, "Fault-tolerant consensus tracking control for linear multiagent systems under switching directed network," IEEE Transactions on Cybernetics, vol. 50, no. 5, pp. 1921-1930, 2019.
+
+[9] X. Yang, Y. Wang, X. Yang, and X. Zhang, "Lyapunov matrix-based method to global robust practical exponential r-stability for a class of delayed continuous-time nonlinear systems: Theory and applications," International Journal of Robust and Nonlinear Control, vol. 32, no. 18, pp. 10234-10250, 2022.
+
+[10] L.-Y. Hao, H. Zhang, H. Li, and T.-S. Li, "Sliding mode fault-tolerant control for unmanned marine vehicles with signal quantization and time-delay," Ocean Engineering, vol. 215, p. 107882, 2020.
+
+[11] X. Yang, L.-Y. Hao, T. Li, and Y. Xiao, "Dynamic positioning control for unmanned marine vehicles with thruster faults and time delay: A lyapunov matrix-based method," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2024.
+
+[12] J. Kim, H. Joe, S.-c. Yu, J. S. Lee, and M. Kim, "Time-delay controller design for position control of autonomous underwater vehicle under disturbances," IEEE Transactions on Industrial Electronics, vol. 63, no. 2, pp. 1052-1061, 2015.
+
+[13] J. Yan, J. Gao, X. Yang, X. Luo, and X. Guan, "Position tracking control of remotely operated underwater vehicles with communication delay," IEEE Transactions on Control Systems Technology, vol. 28, no. 6, pp. 2506-2514, 2019.
+
+[14] T. Zhang and G. Liu, "Predictive tracking control of network-based agents with communication delays," IEEE/CAA Journal of Automatica Sinica, vol. 5, no. 6, pp. 1150-1156, 2018.
+
+[15] S. Chang and T. Peng, "Adaptive guaranteed cost control of systems with uncertain parameters," IEEE Transactions on Automatic Control, vol. 17, no. 4, pp. 474-483, 1972.
+
+[16] D. Wang and D. Liu, "Learning and guaranteed cost control with event-based adaptive critic implementation," IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 12, pp. 6004-6014, 2018.
+
+[17] J.-Q. Wang, Z.-J. Zou, and T. Wang, "Path following of a surface ship sailing in restricted waters under wind effect using robust ${H}_{\infty }$ guaranteed cost control," International Journal of Naval Architecture and Ocean Engineering, vol. 11, no. 1, pp. 606-623, 2019.
+
+[18] R. Lu, H. Cheng, and J. Bai, "Fuzzy-model-based quantized guaranteed cost control of nonlinear networked systems," IEEE Transactions on Fuzzy Systems, vol. 23, no. 3, pp. 567-575, 2014.
+
+[19] T. Liu, Y. Xiao, Y. Feng, J. Li, and B. Huang, "Guaranteed cost control for dynamic positioning of marine surface vessels with input saturation," Applied Ocean Research, vol. 116, p. 102868, 2021.
+
+[20] L.-Y. Hao, H. Zhang, G. Guo, and H. Li, "Quantized sliding mode control of unmanned marine vehicles: Various thruster faults tolerated with a unified model," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 51, no. 3, pp. 2012-2026, 2019.
+
+[21] X. Wang and G.-H. Yang, "Cooperative adaptive fault-tolerant tracking control for a class of multi-agent systems with actuator failures and mismatched parameter uncertainties," IET Control Theory & Applications, vol. 9, no. 8, pp. 1274-1284, 2015.
+
+[22] V. Kharitonov, Time-delay systems: Lyapunov functionals and matrices. Springer Science & Business Media, 2012.
+
+[23] M. Breivik and T. I. Fossen, "Guidance laws for autonomous underwater vehicles," Underwater vehicles, vol. 4, pp. 51-76, 2009.
+
+[24] T. I. Fossen, S. I. Sagatun, and A. J. Sørensen, "Identification of dynamically positioned ships," Control Engineering Practice, vol. 4, no. 3, pp. 369-376, 1996.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/HFrWfFXFQo/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/HFrWfFXFQo/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..79cb5c7032ec9a08a2ac6119114ca83d808c7ba7
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/HFrWfFXFQo/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,543 @@
+§ LYAPUNOV MATRIX-BASED GUARANTEED COST DYNAMIC POSITIONING CONTROL FOR UNMANNED MARINE VEHICLES WITH TIME DELAY
+
+${1}^{\text{ st }}$ Xin Yang
+
+College of Navigation
+
+Dalian Maritime University
+
+Dalian, China
+
+yangxin3541@163.com
+
+${2}^{\text{ nd }}$ Li-Ying Hao*
+
+College of Marine Electrical Engineering
+
+Dalian Maritime University
+
+Dalian, China
+
+haoliying_0305@163.com
+
+${3}^{\text{ rd }}$ Tieshan Li*
+
+College of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu, China
+
+tieshanli@126.com
+
+${4}^{\text{ th }}$ Yang Xiao
+
+Department of Computer Science
+
+The University of Alabama
+
+Tuscaloosa, USA
+
+yangxiao@ieee.org
+
+${5}^{\text{ th }}$ Guoyong Liu
+
+College of Marine Electrical Engineering
+
+Dalian Maritime University
+
+Dalian, China
+
+liuguoyong0806@163.com
+
+Abstract-This paper presents a Lyapunov matrix-based guaranteed cost dynamic positioning controller for unmanned marine vehicles (UMVs) with time delays. A novel Lyapunov-Krasovskii functional (LKF) is introduced, which enhances the analysis of time delays and system states. The controller design leverages the linear matrix inequality (LMI) framework alongside Jensen's inequality to determine sufficient criteria for its feasibility, ensuring that the UMVs' state errors gradually reduce to zero and providing an adaptive ${H}_{\infty }$ performance guarantee. Additionally, the cost function is upper-bounded, and the effectiveness of the method is demonstrated through simulation results.
+
+Index Terms-Lyapunov matrix, time delays, guaranteed cost control (GCC), dynamic positioning (DP), unmanned marine vehicles (UMVs)
+
+§ I. INTRODUCTION
+
+Unmanned Marine Vehicles (UMVs) play a pivotal role in enhancing maritime safety and security by performing high-risk operations effectively without compromising human lives, thereby revolutionizing search and rescue missions and coastal surveillance [1]-[3]. Compared to traditional anchor mooring, dynamic positioning (DP) offers a more versatile, precise, and environmentally friendly method for positioning vessels, making it particularly suitable for use in complex or dynamic marine environments [4]. Over the years, numerous control strategies have been proposed to ensure robust DP control in UMVs. For instance, [5] introduces a dynamic output feedback control method, specifically tailored for DP ships to counter denial of service attacks. In [6], the design of an adaptive sliding mode fault-tolerant compensation mechanism is presented, targeting the maintenance of DP control in UMVs despite thruster faults and unknown ocean disturbances. It is crucial to recognize that time delays are typically inevitable [7]-[9]. Consequently, there is an urgent need to develop a strategy to compensate for these time delays.
+
+In DP systems for UMVs, time delays caused by network-mediated transmission of signals and control commands represent a significant challenge that often compromises system stability and performance [10], [11]. This issue has led to the development of various advanced time-delay compensation methods [12]-[14]. Among these, enhanced delay compensation approaches for autonomous underwater vehicles have shown promise [12]. In [13], model-free proportional-derivative controllers are incorporated into the Lyapunov-Krasovskii functional (LKF) framework to counteract the impact of delays. Strategies built on Lyapunov matrix-based LKF methods have proven particularly effective: they exploit comprehensive information about the time delays and system states, yielding control strategies that efficiently accommodate time-delay systems. The primary motivation of this paper is to develop a complete-type LKF based on the Lyapunov matrix to mitigate the effects of time delays on UMVs.
+
+On another research front, guaranteed cost control (GCC) has been extensively studied [15]-[17]. This strategy sets an upper limit on a specified performance index, ensuring that any degradation of system performance remains below a predefined cost threshold. As vessels often navigate complex and varied ocean environments, the impact of wind and wave disturbances becomes significant [17]. In response, [18] investigated a robust ${H}_{\infty }$ guaranteed cost controller aimed at enhancing path-following performance. The GCC method presented in [19] reduces energy consumption for surface vessels in DP, increasing its practical applicability. These results have inspired our research into GCC theory, particularly its application to DP ships. How to design a guaranteed cost controller based on the Lyapunov matrix that achieves effective DP control for UMVs is thus the second research motivation of this paper.
+
+This work was supported by the National Natural Science Foundation of China (Grant Nos. 51939001, 52171292, 61976033) and the Dalian Outstanding Young Talents Program (2022RJ05).
+
+* Corresponding authors. Emails: haoliying_0305@163.com; tieshanli@126.com
+
+The primary objective of this paper is to design a Lyapunov matrix-based guaranteed cost dynamic positioning controller, utilizing the LMI method to ensure stability. The paper's main contributions are evaluated in comparison to recent advancements in the field.
+
+1) We propose a novel time-delay compensation method for UMVs that incorporates more detailed delay and state information by employing a Lyapunov matrix-based complete-type LKF, which reduces conservatism compared to conventional time-delay compensation techniques.
+
+2) A novel guaranteed cost DP control strategy is designed, which ensures the stability of DP systems for UMVs while providing an upper bound on a prespecified cost function.
+
+The remainder of this paper is structured as follows: Section II describes the UMVs model with time delays. Section III reviews basic concepts and preliminary results, which serve as the theoretical basis for the proposed Lyapunov matrix-based LKF method. A complete-type LKF based on the Lyapunov matrix is presented in Section IV. Section V introduces the guaranteed cost dynamic positioning controller. Finally, Section VI presents simulations that illustrate the validity of the theoretical results.
+
+§ II. UMVS MODELING AND PROBLEM DESCRIPTION
+
+§ A. DYNAMIC MODELING FOR UMVS
+
+The UMVs model typically employs a three degrees of freedom motion equation to describe its dynamic behavior in the marine environment. These three degrees of freedom include yaw, surge, and sway. Therefore, the dynamic equations of the UMVs are often simplified and expressed in the following form [20]:
+
+$$
+\xi \dot{v}\left( t\right) + \mathcal{C}v\left( t\right) + \mathcal{D}\lambda \left( t\right) = \mathcal{G}u\left( t\right) , \tag{1}
+$$
+
+$$
+\dot{\lambda }\left( t\right) = \mathcal{S}\left( {\theta \left( t\right) }\right) v\left( t\right) , \tag{2}
+$$
+
+where matrix $\xi$ represents the inertia matrix, and the velocity vector $v\left( t\right) = {\left\lbrack {v}_{1}\left( t\right) ,{v}_{2}\left( t\right) ,{v}_{3}\left( t\right) \right\rbrack }^{\mathrm{T}}$ describes the ship’s motion in different directions, where ${v}_{1}\left( t\right)$ represents the surge velocity, ${v}_{2}\left( t\right)$ indicates the sway velocity, and ${v}_{3}\left( t\right)$ corresponds to the yaw rate. The position vector $\lambda \left( t\right) =$ ${\left\lbrack {x}_{o}\left( t\right) ,{y}_{o}\left( t\right) ,\theta \left( t\right) \right\rbrack }^{\mathrm{T}}$ is used to describe the ship’s position and orientation on the water surface, where ${x}_{o}\left( t\right)$ and ${y}_{o}\left( t\right)$ represent the coordinates of the ship in the horizontal plane, and $\theta \left( t\right)$ denotes the ship’s heading angle. The matrix $\mathcal{C}$ is the damping matrix. The matrix $\mathcal{D}$ represents the mooring moment matrix, which models external disturbances such as wind, waves, and ocean currents acting on the UMVs. The matrix $\mathcal{G}$ is the thrust allocation matrix, responsible for distributing thrust to the ship's propellers. Additionally, the rotation matrix $\mathcal{S}\left( {\theta \left( t\right) }\right)$ is given by:
+
+$$
+\mathcal{S}\left( {\theta \left( t\right) }\right) = \left\lbrack \begin{matrix} \cos \left( {\theta \left( t\right) }\right) & - \sin \left( {\theta \left( t\right) }\right) & 0 \\ \sin \left( {\theta \left( t\right) }\right) & \cos \left( {\theta \left( t\right) }\right) & 0 \\ 0 & 0 & 1 \end{matrix}\right\rbrack ,
+$$
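+
+As a quick numerical sanity check (not part of the paper's analysis), the following Python sketch builds $\mathcal{S}\left( \theta \right)$ and verifies that it is orthogonal and, for a small yaw angle, close to the identity; the angle value is illustrative.
+
+```python
+import numpy as np
+
+def S(theta):
+    """Yaw rotation matrix S(theta) from the kinematics (2)."""
+    c, s = np.cos(theta), np.sin(theta)
+    return np.array([[c, -s, 0.0],
+                     [s,  c, 0.0],
+                     [0.0, 0.0, 1.0]])
+
+R = S(0.05)                               # small yaw angle in radians (illustrative)
+assert np.allclose(R.T @ R, np.eye(3))    # S is orthogonal
+print(np.max(np.abs(R - np.eye(3))))      # about 0.05, so S is close to I
+```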
+
+For DP control of UMVs in which the heading is kept close to north, the yaw angle $\theta \left( t\right)$ is small, so the matrix $\mathcal{S}\left( {\theta \left( t\right) }\right)$ can be approximated by the identity matrix $I$ . We define the matrices ${\mathcal{A}}_{1} = - {\xi }^{-1}\mathcal{C},\mathcal{B} = {\xi }^{-1}\mathcal{G}$ , and $\mathcal{F} = - {\xi }^{-1}\mathcal{D}$ , and let $x\left( t\right) = {\left\lbrack {\lambda }^{\mathrm{T}}\left( t\right) ,{v}^{\mathrm{T}}\left( t\right) \right\rbrack }^{\mathrm{T}}$ . Thus, the dynamic equation of the UMVs can be written as follows:
+
+$$
+\dot{x}\left( t\right) = {Ax}\left( t\right) + {B}_{1}u\left( t\right) + {Fg}\left( {t,v\left( t\right) }\right) + \varpi \left( t\right) , \tag{3}
+$$
+
+where $A = \left\lbrack \begin{matrix} 0 & I \\ 0 & {\mathcal{A}}_{1} \end{matrix}\right\rbrack ,{B}_{1} = \left\lbrack \begin{array}{l} 0 \\ \mathcal{B} \end{array}\right\rbrack ,F = \left\lbrack \begin{matrix} 0 \\ \mathcal{F} \end{matrix}\right\rbrack$ , and $\varpi \left( t\right) \in {L}_{2}\lbrack 0,\infty )$ represents the disturbance. Defining the reference signal ${x}_{\text{ ref }} = \left\lbrack \begin{array}{l} {\lambda }_{\text{ ref }} \\ {v}_{\text{ ref }} \end{array}\right\rbrack$ , the error vector is $e\left( t\right) = x\left( t\right) - {x}_{\text{ ref }}$ . The error dynamics of the UMVs can be expressed as follows:
+
+$$
+\dot{e}\left( t\right) = {Ae}\left( t\right) + {B}_{1}u\left( t\right) + {Fg}\left( {t,e\left( t\right) }\right) + {B}_{2}\omega \left( t\right) . \tag{4}
+$$
+
+Let $e\left( t\right) \in {\mathbb{R}}^{n}$ denote the state vector and $u \in {\mathbb{R}}^{p}$ the control input vector. The term ${B}_{2}\omega \left( t\right)$ is defined as $A{x}_{\text{ ref }} + \varpi \left( t\right)$ , where $\omega \left( t\right) = \left\lbrack \begin{array}{l} {x}_{\text{ ref }} \\ \varpi \left( t\right) \end{array}\right\rbrack$ and ${B}_{2} = \left\lbrack \begin{array}{ll} A & I \end{array}\right\rbrack$ . Considering the unavoidable time delay during signal transmission, it follows from equation (4) that:
+
+$$
+\dot{e}\left( t\right) = {Ae}\left( t\right) + {A}_{1}e\left( {t - d}\right) + {B}_{1}u\left( t\right) + {Fg}\left( {e\left( t\right) ,e\left( {t - d}\right) }\right)
+$$
+
+$$
++ {B}_{2}\omega \left( t\right) \text{ , } \tag{5}
+$$
+
+where $d > 0$ represents the time delay, and $g : {\mathbb{R}}^{n} \times {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{m}$ is assumed to satisfy the following inequality.
+
+Assumption 1: Let matrices $\mathbb{N} > 0$ and $\mathbb{Y} > 0$ , where $\mathbb{N} \in$ ${\mathbb{R}}^{m \times m}$ and $\mathbb{Y} \in {\mathbb{R}}^{{2n} \times {2n}}$ . The nonlinear function $g\left( \cdot \right)$ satisfies the following inequality:
+
+$$
+{g}^{\mathrm{T}}\left( {e\left( t\right) ,e\left( {t - d}\right) }\right) {\mathbb{N}}^{-1}g\left( {e\left( t\right) ,e\left( {t - d}\right) }\right)
+$$
+
+$$
+\leq \left\lbrack \begin{array}{ll} {e}^{\mathrm{T}}\left( t\right) & {e}^{\mathrm{T}}\left( {t - d}\right) \end{array}\right\rbrack \mathbb{Y}{\left\lbrack \begin{array}{ll} {e}^{\mathrm{T}}\left( t\right) & {e}^{\mathrm{T}}\left( {t - d}\right) \end{array}\right\rbrack }^{\mathrm{T}}.
+$$
+
+Remark 1: Assumption 1 ensures that the function $g\left( \cdot \right)$ is bounded. When $e\left( t\right) = 0$ or $e\left( {t - d}\right) = 0$ , Assumption 1 reduces to Assumption 1 of reference [17]; it is therefore a more general form of that assumption.
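+
+To illustrate how the delay term in (5) is handled numerically, here is a minimal forward-Euler simulation of a scalar stand-in for the delayed error dynamics with $u = g = \omega = 0$ ; the coefficients and delay are illustrative, not taken from the paper.
+
+```python
+import numpy as np
+
+# Scalar stand-in for (5): de/dt = a*e(t) + a1*e(t - d), with u = g = omega = 0.
+# a = -2, a1 = 0.5, d = 1 are illustrative; |a1| < -a gives asymptotic stability.
+a, a1, d, dt = -2.0, 0.5, 1.0, 0.001
+buf = np.ones(int(d / dt))       # history phi(s) = 1 on [-d, 0]
+e = 1.0
+for k in range(20000):           # integrate 20 seconds
+    e_del = buf[k % len(buf)]    # e(t - d) read from a circular buffer
+    buf[k % len(buf)] = e        # store e(t) for use at time t + d
+    e += dt * (a * e + a1 * e_del)
+print(e)  # decays toward zero
+```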
+
+To bring both the linear and angular velocities to zero and minimize the impact of external disturbances such as wind, waves, and currents, the output $\mathcal{Z}\left( t\right)$ can be formulated as follows:
+
+$$
+\mathcal{Z}\left( t\right) = {C}_{z}e\left( t\right) \tag{6}
+$$
+
+Definition 1: [21] The system is described by
+
+$$
+\dot{x}\left( t\right) = {A}_{d}x\left( t\right) + {B}_{d}\omega \left( t\right) ,
+$$
+
+$$
+\mathcal{Z}\left( t\right) = {C}_{d}x\left( t\right) ,x\left( 0\right) = 0. \tag{7}
+$$
+
+Given a constant ${\gamma }_{0} > 0,\omega \left( t\right) \in {L}_{2}\lbrack 0,\infty )$ , if for any $\epsilon > 0$ , the following condition
+
+$$
+{\int }_{0}^{\infty }{\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) \mathrm{d}t \leq {\gamma }_{0}^{2}{\int }_{0}^{\infty }{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right) \mathrm{d}t + \epsilon ,
+$$
+
+is satisfied, then the system (7) is said to achieve an adaptive ${H}_{\infty }$ performance index that does not exceed ${\gamma }_{0}$ .
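+
+Definition 1 can be probed numerically. The sketch below uses illustrative values (not from the paper): it evaluates the scalar system $\dot{x} = - x + \omega ,\mathcal{Z} = x,x\left( 0\right) = 0$ with $\omega \left( t\right) = {e}^{-t}$ , whose closed-form solution is $x\left( t\right) = t{e}^{-t}$ , and checks that the output energy stays below ${\gamma }_{0}^{2}$ times the disturbance energy for ${\gamma }_{0} = 1$ .
+
+```python
+import numpy as np
+
+# Scalar instance of (7): dx/dt = -x + w, Z = x, x(0) = 0, w(t) = exp(-t).
+t = np.linspace(0.0, 40.0, 400001)
+w = np.exp(-t)
+x = t * np.exp(-t)               # closed-form response
+
+def energy(f):
+    """Trapezoidal approximation of the integral of f(t)**2."""
+    return float(np.sum(0.5 * (f[1:]**2 + f[:-1]**2) * np.diff(t)))
+
+Ez, Ew = energy(x), energy(w)
+gamma0 = 1.0
+print(Ez, Ew)  # Ez <= gamma0**2 * Ew holds
+```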
+
+Definition 2: The cost function related to system (5) is described as follows:
+
+$$
+J = {\int }_{0}^{\infty }\left\lbrack {{e}^{\mathrm{T}}\left( t\right) {\Omega e}\left( t\right) + {u}^{\mathrm{T}}\left( t\right) {\mathbb{R}}_{q}u\left( t\right) }\right\rbrack \mathrm{d}t. \tag{8}
+$$
+
+where ${\Omega }^{\mathrm{T}} = \Omega \geq 0$ and ${\mathbb{R}}_{q}^{\mathrm{T}} = {\mathbb{R}}_{q} \geq 0$ .
+
+A stabilization controller $u\left( t\right)$ for system (5) is called a guaranteed cost controller if it ensures that $J \leq {J}^{ * }$ , where ${J}^{ * }$ is a positive scalar. The value ${J}^{ * }$ is known as the guaranteed cost.
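+
+For intuition on the guaranteed cost, the following sketch evaluates the cost (8) for an illustrative scalar closed loop (values not from the paper): with $u = - {ke},\Omega = 1,{\mathbb{R}}_{q} = 1$ , and $\dot{e} = - e,e\left( 0\right) = 1$ , the cost is $\left( {1 + {k}^{2}}\right) {\int }_{0}^{\infty }{e}^{-{2t}}\mathrm{\;d}t = \left( {1 + {k}^{2}}\right) /2$ , so any ${J}^{ * } \geq \left( {1 + {k}^{2}}\right) /2$ is a guaranteed cost.
+
+```python
+import numpy as np
+
+# Scalar cost (8) with Omega = Rq = 1, u = -k*e, closed loop e(t) = exp(-t).
+k = 0.5
+t = np.linspace(0.0, 20.0, 200001)
+e = np.exp(-t)
+integrand = e**2 + (k * e)**2    # e' Omega e + u' Rq u
+J = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)))
+print(J)  # close to (1 + k**2) / 2 = 0.625
+```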
+
+§ B. CONTROL OBJECTIVE
+
+For UMVs (5) affected by time delays, this paper proposes a guaranteed cost DP controller based on the Lyapunov matrix. The controller is designed to drive the state error of the UMVs to converge asymptotically to zero, while also satisfying the specified ${H}_{\infty }$ performance criteria and guaranteeing an upper limit on the predefined cost function.
+
+§ III. PRELIMINARIES
+
+We will construct a complete-type LKF for UMVs (5) based on the Lyapunov matrix. We begin by defining the Lyapunov matrix.
+
+§ A. LYAPUNOV MATRIX
+
+We will now present relevant concepts related to linear time-delay systems as follows [22]:
+
+$$
+\dot{e}\left( t\right) = {Ae}\left( t\right) + {A}_{1}e\left( {t - d}\right) ,
+$$
+
+$$
+e\left( \iota \right) = \phi \left( \iota \right) ,\iota \in \left\lbrack {-d,0}\right\rbrack , \tag{9}
+$$
+
+where $e\left( t\right) \in {\mathbb{R}}^{n}$ represents the state vector, $d > 0$ is the time delay. $A,{A}_{1} \in {\mathbb{R}}^{n \times n}$ are system matrices.
+
+Definition 3: [22] Given a matrix $\mathcal{P} > 0$ , if the matrix $Q : \left\lbrack {-d,d}\right\rbrack \rightarrow {\mathbb{R}}^{n \times n}$ meets the following conditions:
+
+$$
+\dot{Q}\left( \pi \right) = Q\left( \pi \right) A + Q\left( {\pi - d}\right) {A}_{1},
+$$
+
+$$
+Q\left( {-\pi }\right) = {Q}^{\mathrm{T}}\left( \pi \right) ,
+$$
+
+$$
+- \mathcal{P} = Q\left( 0\right) A + Q\left( {-d}\right) {A}_{1} + {A}^{\mathrm{T}}Q\left( 0\right) + {A}_{1}^{\mathrm{T}}Q\left( d\right) , \tag{10}
+$$
+
+then $Q\left( \cdot \right)$ is called a Lyapunov matrix of system (9) associated with $\mathcal{P}$ .
+
+Definition 4: [22] If system (9) is asymptotically stable, then there exists a Lyapunov matrix $Q\left( \cdot \right)$ associated with the matrix $\mathcal{P}$ for system (9).
+
+Lemma 1: Suppose there exist matrices $H = {H}^{\mathrm{T}} > 0$ , $L = {L}^{\mathrm{T}} > 0$ , and ${K}_{11} \in {\mathbb{R}}^{p \times n}$ such that the following LMI condition is satisfied:
+
+$$
+\left\lbrack \begin{matrix} {\Lambda }_{2} & {A}_{1}X \\ {\left( {A}_{1}X\right) }^{\mathrm{T}} & - U \end{matrix}\right\rbrack < 0 \tag{11}
+$$
+
+where ${\Lambda }_{2} = {AX} - {B}_{1}{Y}_{1} + {\left( AX - {B}_{1}{Y}_{1}\right) }^{\mathrm{T}} + U$ , $X = {H}^{-1}$ , ${Y}_{1} = {K}_{11}{H}^{-1}$ , and $U = {H}^{-1}L{H}^{-1}$ . Then the controller ${u}_{1}\left( t\right) = - {K}_{11}e\left( t\right)$ guarantees that system (9) is asymptotically stable.
+
+Proof 1: Select the Lyapunov function:
+
+$$
+{V}_{c}\left( {e\left( t\right) }\right) = {e}^{\mathrm{T}}\left( t\right) {He}\left( t\right) + {\int }_{t - d}^{t}{e}^{\mathrm{T}}\left( \theta \right) {Le}\left( \theta \right) \mathrm{d}\theta .
+$$
+
+We can derive:
+
+$$
+{\left. \frac{\mathrm{d}{V}_{c}\left( {e\left( t\right) }\right) }{\mathrm{d}t}\right| }_{\left( 9\right) } = {\Lambda }_{0}^{\mathrm{T}}{\Omega }_{1}{\Lambda }_{0}
+$$
+
+where
+
+$$
+{\Lambda }_{0} = {\left\lbrack {e}^{\mathrm{T}}\left( t\right) ,{e}^{\mathrm{T}}\left( t - d\right) \right\rbrack }^{\mathrm{T}},
+$$
+
+$$
+{\Omega }_{1} = \left\lbrack \begin{matrix} {\Lambda }_{2} & {A}_{1}X \\ {\left( {A}_{1}X\right) }^{\mathrm{T}} & - U \end{matrix}\right\rbrack ,
+$$
+
+$$
+{\Lambda }_{2} = {AX} - {B}_{1}{Y}_{1} + {\left( AX - {B}_{1}{Y}_{1}\right) }^{\mathrm{T}} + U,
+$$
+
+$$
+X = {H}^{-1},{Y}_{1} = {K}_{11}{H}^{-1},U = {H}^{-1}L{H}^{-1}.
+$$
+
+Using Lyapunov stability theory, the controller ${u}_{1}\left( t\right) =$ $- {K}_{11}e\left( t\right)$ guarantees the asymptotic stability of system (9).
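+
+The feasibility condition (11) is easy to check numerically in the scalar case. All numbers below are illustrative, not from the paper: an unstable open loop $A = 1$ with delay coefficient ${A}_{1} = {0.2}$ is stabilized by the gain ${K}_{11} = 3$ , and the resulting block matrix is verified to be negative definite.
+
+```python
+import numpy as np
+
+# Scalar check of LMI (11); all quantities are illustrative numbers.
+A, A1, B1, K11, H, L = 1.0, 0.2, 1.0, 3.0, 1.0, 0.5
+X = 1.0 / H                        # X  = H^{-1}
+Y1 = K11 * X                       # Y1 = K11 H^{-1}
+U = L / H**2                       # U  = H^{-1} L H^{-1}
+Lam2 = 2.0 * (A * X - B1 * Y1) + U # AX - B1*Y1 plus its transpose, plus U
+Omega1 = np.array([[Lam2, A1 * X],
+                   [A1 * X, -U]])
+print(np.linalg.eigvalsh(Omega1).max())  # negative => (11) holds
+```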
+
+§ IV. A COMPLETE-TYPE LKF
+
+We construct an LKF $\mathfrak{V}\left( \cdot \right)$ :
+
+$$
+\mathfrak{V}\left( {e\left( t\right) }\right) = {\mathfrak{V}}_{1}\left( {e\left( t\right) }\right) + {\mathfrak{V}}_{2}\left( {e\left( t\right) }\right) ,e \in {C}_{p}\left( {\left\lbrack {-d,0}\right\rbrack ,{\mathbb{R}}^{n}}\right) \tag{12}
+$$
+
+where
+
+$$
+{\mathfrak{V}}_{1}\left( {e\left( t\right) }\right) = {e}^{\mathrm{T}}\left( t\right) Q\left( 0\right) e\left( t\right) + 2{e}^{\mathrm{T}}\left( t\right) {\Gamma }_{1}\left( {e\left( t\right) }\right)
+$$
+
+$$
++ {\int }_{-d}^{0}{\int }_{-d}^{0}{e}^{\mathrm{T}}\left( {t + {\tau }_{1}}\right) {A}_{1}^{\mathrm{T}}Q\left( {{\tau }_{1} - {\tau }_{2}}\right) {A}_{1}e\left( {t + {\tau }_{2}}\right) \mathrm{d}{\tau }_{1}\mathrm{\;d}{\tau }_{2},
+$$
+
+$$
+{\mathfrak{V}}_{2}\left( {e\left( t\right) }\right) = {\int }_{-d}^{0}{\int }_{\tau }^{0}{e}^{\mathrm{T}}\left( {t + s}\right) {A}_{1}^{\mathrm{T}}{Q}^{\mathrm{T}}\left( {-d - \tau }\right) \mathcal{R}Q\left( {-d - \tau }\right)
+$$
+
+$$
+\times {A}_{1}e\left( {t + s}\right) \mathrm{d}s\mathrm{\;d}\tau + {\int }_{-d}^{0}{e}^{\mathrm{T}}\left( {t + \tau }\right) {\mathcal{Q}}_{1}e\left( {t + \tau }\right) \mathrm{d}\tau ,
+$$
+
+(13)
+
+where ${\Gamma }_{1}\left( {e\left( t\right) }\right) = {\int }_{-d}^{0}Q\left( {-d - \tau }\right) {A}_{1}e\left( {t + \tau }\right) \mathrm{d}\tau$ and the matrices $\mathcal{R},{\mathcal{Q}}_{1}$ satisfy ${\mathcal{R}}^{\mathrm{T}} = \mathcal{R} > 0,{\mathcal{Q}}_{1}^{\mathrm{T}} = {\mathcal{Q}}_{1} > 0$ .
+
+§ V. CONTROLLER DESIGN AND STABILITY ANALYSIS
+
+In this section, we will provide a detailed explanation of the controller design process and conduct a systematic analysis of its stability.
+
+§ A. CONTROLLER DESIGN
+
+We propose the following guaranteed cost DP controller for UMVs in (5):
+
+$$
+u\left( t\right) = {u}_{1}\left( t\right) + {u}_{2}\left( t\right) ,
+$$
+
+$$
+{u}_{1}\left( t\right) = - {K}_{11}e\left( t\right) ,
+$$
+
+$$
+{u}_{2}\left( t\right) = \frac{1}{2}{K}_{21}{B}_{1}^{\mathrm{T}}\left\lbrack {Q\left( 0\right) e\left( t\right) + {\Gamma }_{1}\left( {e\left( t\right) }\right) }\right\rbrack + \frac{1}{2}{K}_{22}e\left( {t - d}\right) ,
+$$
+
+(14)
+
+where ${K}_{11},{K}_{21},{K}_{22}$ are feedback gain matrices. ${K}_{11}$ is already determined in Lemma 1, while ${K}_{21}$ and ${K}_{22}$ will be provided in Theorem 1.
+
+Theorem 1: Consider the UMVs (5) under Assumption 1. The guaranteed cost DP controller is defined by (14). For the given positive definite matrices $\mathbb{N} \in {\mathbb{R}}^{m \times m},\mathbb{Y} \mathrel{\text{ := }}$ $\left\lbrack \begin{array}{ll} {\mathbb{Y}}_{11} & {\mathbb{Y}}_{12} \\ {\mathbb{Y}}_{12}^{\mathrm{T}} & {\mathbb{Y}}_{22} \end{array}\right\rbrack \in {\mathbb{R}}^{{2n} \times {2n}},\mathcal{P} \in {\mathbb{R}}^{n \times n}$ , and a positive constant ${\gamma }_{0}$ , if there exist positive definite matrices $\mathcal{R},{\mathcal{Q}}_{1} \in {\mathbb{R}}^{n \times n}$ , and matrices ${K}_{21} \in {\mathbb{R}}^{p \times p},{K}_{22} \in {\mathbb{R}}^{p \times n}$ such that $\mathcal{P} - {\mathcal{Q}}_{1} - {\mathcal{P}}_{1} > 0$ and the following inequality holds,
+
+$$
+E \mathrel{\text{ := }} \left\lbrack \begin{matrix} \mathcal{P} + {\mathcal{Q}}_{1} + {\mathcal{P}}_{1} - {E}_{1} & {E}_{2} & {E}_{3} \\ {E}_{2}^{\mathrm{T}} & - {\mathcal{Q}}_{1} + {\mathbb{Y}}_{22} & \frac{1}{2}{K}_{22}^{\mathrm{T}}{B}_{1}^{\mathrm{T}} \\ {E}_{3}^{\mathrm{T}} & \frac{1}{2}{B}_{1}{K}_{22} & {E}_{4} \end{matrix}\right\rbrack < 0,
+$$
+
+(15)
+
+where
+
+$$
+{E}_{1} = \frac{1}{2}Q\left( 0\right) {B}_{1}\left( {{K}_{21} + {K}_{21}^{\mathrm{T}}}\right) {B}_{1}^{\mathrm{T}}Q\left( 0\right) - {\mathbb{Y}}_{11} - {C}_{z}^{\mathrm{T}}{C}_{z}
+$$
+
+$$
+- {\gamma }_{0}^{-2}Q\left( 0\right) {B}_{2}{B}_{2}^{\mathrm{T}}Q\left( 0\right) - Q\left( 0\right) F\mathbb{N}{F}^{\mathrm{T}}Q\left( 0\right) ,
+$$
+
+$$
+{E}_{2} = \frac{1}{2}Q\left( 0\right) {B}_{1}{K}_{22} + {\mathbb{Y}}_{12},
+$$
+
+$$
+{E}_{3} = Q\left( 0\right) {B}_{1}{K}_{21}{B}_{1}^{\mathrm{T}} + Q\left( 0\right) F\mathbb{N}{F}^{\mathrm{T}} + {\gamma }_{0}^{-2}Q\left( 0\right) {B}_{2}{B}_{2}^{\mathrm{T}},
+$$
+
+$$
+{E}_{4} = - \frac{\mathcal{R}}{d} + {B}_{1}{K}_{21}{B}_{1}^{\mathrm{T}} + F\mathbb{N}{F}^{\mathrm{T}} + {\gamma }_{0}^{-2}{B}_{2}{B}_{2}^{\mathrm{T}},
+$$
+
+then the state error of the UMVs in system (5) asymptotically converges to zero, while an ${H}_{\infty }$ norm bound of ${\gamma }_{0}$ is maintained.
+
+Proof 2: The time derivative of $\mathfrak{V}\left( {e\left( t\right) }\right)$ along the trajectory of the UMVs (5) can be calculated as follows:
+
+$$
+{\left. \frac{\mathrm{d}\mathfrak{V}\left( {e\left( t\right) }\right) }{\mathrm{d}t}\right| }_{\left( 5\right) } + {\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) - {\gamma }_{0}^{2}{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right)
+$$
+
+$$
+= - {U}_{0}\left( {e\left( t\right) }\right) + {\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) - {\gamma }_{0}^{2}{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right)
+$$
+
+$$
++ 2{g}^{\mathrm{T}}\left( {e\left( t\right) ,e\left( {t - d}\right) }\right) {F}^{\mathrm{T}}\left\lbrack {Q\left( 0\right) e\left( t\right) + {\Gamma }_{1}\left( {e\left( t\right) }\right) }\right\rbrack
+$$
+
+$$
++ 2{\left\lbrack Q\left( 0\right) e\left( t\right) + {\Gamma }_{1}\left( e\left( t\right) \right) \right\rbrack }^{\mathrm{T}}{B}_{2}\omega \left( t\right)
+$$
+
+$$
++ 2{\left\lbrack Q\left( 0\right) e\left( t\right) + {\Gamma }_{1}\left( e\left( t\right) \right) \right\rbrack }^{\mathrm{T}}{B}_{1}u\left( t\right) \tag{16}
+$$
+
+where
+
+$$
+{U}_{0}\left( e\right) = {e}^{\mathrm{T}}\left( t\right) \left( {\mathcal{P} - {\mathcal{Q}}_{1} - {\mathcal{P}}_{1}}\right) e\left( t\right) + {e}^{\mathrm{T}}\left( {t - d}\right) {\mathcal{Q}}_{1}e\left( {t - d}\right)
+$$
+
+$$
++ {\int }_{-d}^{0}{e}^{\mathrm{T}}\left( {t + \tau }\right) {A}_{1}^{\mathrm{T}}{Q}^{\mathrm{T}}\left( {-d - \tau }\right) \mathcal{R}Q\left( {-d - \tau }\right) {A}_{1}e\left( {t + \tau }\right) \mathrm{d}\tau .
+$$
+
+$$
+{\mathcal{P}}_{1} = {\int }_{-d}^{0}{A}_{1}^{\mathrm{T}}{Q}^{\mathrm{T}}\left( {-d - \tau }\right) \mathcal{R}Q\left( {-d - \tau }\right) {A}_{1}\mathrm{\;d}\tau .
+$$
+
+Substituting (14) into (16), we have
+
+$$
+{\left. \frac{\mathrm{d}\mathfrak{V}\left( {e\left( t\right) }\right) }{\mathrm{d}t}\right| }_{\left( 5\right) } + {\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) - {\gamma }_{0}^{2}{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right) \leq {\Gamma }^{\mathrm{T}}\left( t\right) {E\Gamma }\left( t\right)
+$$
+
+(17)
+
+where
+
+$$
+\Gamma \left( t\right) = {\left\lbrack {e}^{\mathrm{T}}\left( t\right) {e}^{\mathrm{T}}\left( t - d\right) {\Gamma }_{1}^{\mathrm{T}}\left( e\left( t\right) \right) \right\rbrack }^{\mathrm{T}},
+$$
+
+and the matrix $E$ , with its blocks ${E}_{1},{E}_{2},{E}_{3},{E}_{4}$ , is defined as in (15).
+
+Since $E < 0$ , it follows that
+
+$$
+{\left. \frac{\mathrm{d}\mathfrak{V}\left( {e\left( t\right) }\right) }{\mathrm{d}t}\right| }_{\left( 5\right) } + {\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) - {\gamma }_{0}^{2}{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right) \leq 0. \tag{18}
+$$
+
+If the conditions of Theorem 1 hold, then ${\int }_{{t}_{0}}^{t}{\Gamma }^{\mathrm{T}}\left( \tau \right) {E\Gamma }\left( \tau \right) \mathrm{d}\tau < 0$ , and hence:
+
+$$
+0 \leq {\epsilon }_{\min }\parallel e\left( t\right) {\parallel }^{2} \leq \mathfrak{V}\left( e\right) \leq \mathfrak{V}\left( {e\left( {t}_{0}\right) }\right) - {\int }_{{t}_{0}}^{t}{\mathcal{Z}}^{\mathrm{T}}\left( \tau \right) \mathcal{Z}\left( \tau \right) \mathrm{d}\tau
+$$
+
+$$
++ {\gamma }_{0}^{2}{\int }_{{t}_{0}}^{t}{\omega }^{\mathrm{T}}\left( \tau \right) \omega \left( \tau \right) \mathrm{d}\tau ,t > {t}_{0}.
+$$
+
+(19)
+
+Clearly
+
+$$
+\mathop{\lim }\limits_{{t \rightarrow \infty }}{\int }_{{t}_{0}}^{t}{\Gamma }^{\mathrm{T}}\left( \tau \right) {E\Gamma }\left( \tau \right) \mathrm{d}\tau \leq \mathfrak{V}\left( {e\left( {t}_{0}\right) }\right) . \tag{20}
+$$
+
+We obtain
+
+$$
+\mathop{\lim }\limits_{{t \rightarrow \infty }}\parallel e\left( t\right) \parallel = 0 \tag{21}
+$$
+
+By integrating inequality (18) from 0 to $\infty$ , we obtain
+
+$$
+{\int }_{0}^{\infty }{\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) \mathrm{d}t \leq {\gamma }_{0}^{2}{\int }_{0}^{\infty }{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right) \mathrm{d}t + \mathfrak{V}\left( 0\right) . \tag{22}
+$$
+
+§ B. GUARANTEED COST ANALYSIS
+
+When the disturbance $\omega \left( t\right)$ is absent, combining (8), (14), and (18) yields:
+
+$$
+{\left. \frac{d\mathfrak{V}\left( {e\left( t\right) }\right) }{dt}\right| }_{\left( 5\right) } + {e}^{\mathrm{T}}\left( t\right) {\Omega e}\left( t\right) + {u}^{\mathrm{T}}\left( t\right) {\mathbb{R}}_{q}u\left( t\right)
+$$
+
+$$
+\leq {\Gamma }^{\mathrm{T}}\left( t\right) \left( {E + \operatorname{diag}\left( {\Omega ,0,0}\right) + \frac{1}{4}{O}^{\mathrm{T}}{\mathbb{R}}_{q}O}\right) \Gamma \left( t\right)
+$$
+
+(23)
+
+where
+
+$$
+O = \left\lbrack \begin{array}{lll} - \left( {\mathbb{Y} + {K}_{21}}\right) {B}_{1}^{\mathrm{T}}Q\left( 0\right) & {K}_{22} & - \left( {\mathbb{Y} + {K}_{21}}\right) {B}_{1}^{\mathrm{T}} \end{array}\right\rbrack .
+$$
+
+We have
+
+$$
+\left\lbrack \begin{matrix} E + \operatorname{diag}\left( {\Omega ,0,0}\right) & {O}^{\mathrm{T}} \\ O & - 4{\mathbb{R}}_{q}^{-1} \end{matrix}\right\rbrack < 0
+$$
+
+Hence,
+
+$$
+{\int }_{0}^{\infty }\left\lbrack {{e}^{\mathrm{T}}\left( t\right) {\Omega e}\left( t\right) + {u}^{\mathrm{T}}\left( t\right) {\mathbb{R}}_{q}u\left( t\right) }\right\rbrack \mathrm{d}t \leq {J}^{ * }.
+$$
+
+where ${J}^{ * } = \mathfrak{V}\left( {e\left( 0\right) }\right)$ , with $\mathfrak{V}\left( \cdot \right)$ defined in (12).
+
+§ VI. SIMULATION EXAMPLE
+
+The proposed control method's effectiveness is demonstrated through a standard floating production vessel model, as referenced in [23]. The matrices $\xi ,\mathcal{C}$ , and $\mathcal{D}$ are specified in [23], and the thruster configuration matrix $\mathcal{G}$ is derived from [24].
+
+The initial condition is given as $\phi \left( s\right) = {\left\lbrack \begin{array}{llllll} 0 & 0 & 0 & 0 & 0 & {0.2} \end{array}\right\rbrack }^{\mathrm{T}}$ , with the reference signal set to ${x}_{\text{ ref }} = {\left\lbrack \begin{array}{llllll} {0.01} & -{0.01} & {0.05} & {0.01} & {0.04} & {0.01} \end{array}\right\rbrack }^{\mathrm{T}}$ . The time delay is $d = 1$ , and the ${H}_{\infty }$ performance index is ${\gamma }_{0} = 2$ .
+
+The controller gain matrix ${K}_{11}$ is obtained by solving the LMI (11) from Lemma 1, as follows:
+
+$$
+{K}_{11} = \left\lbrack \begin{matrix} {3.7401} & -{1.0550} & {1.6703} & {3.8794} & -{0.4071} & {0.6533} \\ {3.5625} & -{0.3782} & {0.8900} & {3.8305} & {0.0888} & {0.2145} \\ -{1.8457} & {7.7381} & -{7.8852} & -{0.4836} & {5.9344} & -{4.2105} \\ -{1.7986} & {7.5585} & -{7.6782} & -{0.4706} & {5.8028} & -{4.0941} \\ -{0.2156} & {1.5274} & -{0.7243} & -{0.0351} & {1.3831} & -{0.1833} \\ -{0.4379} & {2.3744} & -{1.7009} & -{0.0963} & {2.0038} & -{0.7325} \end{matrix}\right\rbrack
+$$
+
+We set the matrix $\mathcal{Q} = I$ . The $\left( {i,j}\right)$ -th element of the matrix $Q\left( \theta \right)$ , denoted ${Q}_{ij}\left( \theta \right)$ , is determined using the method proposed in [22]. Figures 1-2 show the values of ${Q}_{ij}\left( \theta \right)$ for $\theta \in \left\lbrack {0,1}\right\rbrack$ .
+
+Finally, by solving LMI (15) as described in Theorem 1, the controller gain matrices ${K}_{21}$ and ${K}_{22}$ are computed as:
+
+$$
+{K}_{21} = 1 \times {10}^{4}\left\lbrack \begin{matrix} {0.0284} & {0.0561} & {0.0446} & {0.0381} & -{0.0108} & -{0.0257} \\ -{0.0249} & -{0.0535} & -{0.0615} & -{0.0140} & {0.0119} & {0.0273} \\ -{0.0160} & {0.0215} & {0.0366} & {0.0723} & -{0.0709} & -{0.0315} \\ {0.0187} & -{0.0010} & -{0.0542} & -{0.0511} & {0.0035} & {0.1249} \\ -{0.2113} & {0.2496} & -{0.1101} & -{0.0808} & -{0.9459} & {1.2040} \\ -{0.0871} & {0.0356} & -{0.0328} & {0.1207} & {0.5283} & -{0.6940} \end{matrix}\right\rbrack
+$$
+
+Figure 1. Lyapunov matrix ${Q}_{ij}\left( \theta \right)$ , $\left( {i = 1,2,3;j = 1,2,3,4,5,6}\right)$ .
+
+Figure 2. Lyapunov matrix ${Q}_{ij}\left( \theta \right)$ , $\left( {i = 4,5,6;j = 1,2,3,4,5,6}\right)$ .
+
+$$
+{K}_{22} = \left\lbrack \begin{matrix} -{15.4416} & {8.9036} & {66.6063} & {35.6011} & {15.1773} & -{22.0347} \\ -{18.9989} & -{43.3441} & -{101.0469} & -{70.0417} & -{49.6179} & -{12.4059} \\ {22.5784} & {53.9648} & {21.5477} & -{26.9017} & {10.6883} & {43.2947} \\ -{82.2859} & -{141.7415} & {16.5537} & -{69.8399} & -{34.0587} & -{92.0165} \\ -{118.7051} & -{303.3277} & {414.2256} & -{396.6715} & {76.7366} & -{41.8016} \\ {118.3731} & {331.0541} & -{512.3389} & {433.3628} & -{113.3949} & {30.4866} \end{matrix}\right\rbrack .
+$$
+
+Figures 3-4 illustrate the trajectories of the position error, yaw angle error, and velocity error for UMVs (5). Figure 5 shows the control inputs produced by the controller as defined in (14).
+
+
+Figure 3. Response curves of UMVs position and yaw angle error.
+
+
+Figure 4. Response curves of UMVs velocity error.
+
+
Figure 5. Comparison of response curves for $u\left( t\right)$ .
+
Figure 3 shows that the error curves under the proposed control initially exhibit small fluctuations before gradually converging to zero, which demonstrates the effectiveness of the proposed control strategy. Figure 5 illustrates the response curves of the guaranteed cost DP controller $u\left( t\right)$ .
+
+§ CONCLUSION
+
In this paper, we have addressed the guaranteed cost dynamic positioning control problem for UMVs with time delays. First, a complete-type LKF for UMVs with time delays is proposed, which reduces conservativeness. Furthermore, a novel approach for designing a guaranteed cost dynamic positioning controller for DP systems is proposed; the specific form of this controller is derived from feasible solutions of LMIs. The proposed method was validated through simulation, demonstrating its effectiveness. Future work will focus on extending the control strategy to systems with time-varying delays, further enhancing the robustness of DP control for UMVs.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/ImUUzCj4k8/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/ImUUzCj4k8/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..2ed78fd926afc081576d8d4b092556accc4656d1
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/ImUUzCj4k8/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,347 @@
+# UVMS Trajectory Tracking Based on RBFNN and Sliding Mode Control
+
+Huiyi Luo
+
+Fuzhou Institute of Oceanography, Fuzhou University, Fuzhou 350108, China College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China 18278811826@163.com
+
+Weilin Luo
+
Fuzhou Institute of Oceanography, Fuzhou University, Fuzhou 350108, China College of Mechanical Engineering and Automation, Fuzhou University,
+
+Fuzhou 350108, China;
+
+wlluo@fzu.edu.cn
+
+Yuanjing Wang
+
+College of Mechanical Engineering and
+
+Automation, Fuzhou University,
+
+Fuzhou 350108, China
+
+xyjw325@163.com
+
${Abstract}$ - This article addresses the trajectory tracking control problem of an electrically driven UVMS. First, a tracking control strategy for the UVMS is designed based on Radial Basis Function Neural Networks (RBFNN) and the Nonsingular Fast Terminal Sliding Mode (NFTSM) method. Further, to avoid the singularity problem, a saturation-based tracking controller is derived using the methods mentioned above. Lyapunov design is adopted to guarantee the asymptotic stability of the proposed controller. Simulation results show that the tracking performance of NN-NFTSM is better than that of the PD and NN approaches, verifying the validity and advantages of the proposed controller.
+
+Keywords-UVMS, electric drive, trajectory tracking, fast nonsingular terminal sliding mode, RBF neural network
+
+## I. INTRODUCTION
+
The Underwater Vehicle-Manipulator System (UVMS), which can operate an underwater manipulator to complete underwater tasks in place of human beings, is currently an effective means of developing ocean resources. A UVMS usually consists of one or more n-link manipulators mounted on an underwater vehicle such as an ROV (Remotely Operated Vehicle) or AUV (Autonomous Underwater Vehicle). UVMS are significant for underwater operations such as real-time underwater filming, underwater target reconnaissance and surveillance, marine resource exploitation, and marine bioprospecting. The UVMS plays a supporting role in various marine underwater missions and has become a research focus for many scholars.
+
Handling the uncertainties of the underwater environment, such as currents and oceanic internal waves, is the biggest challenge in designing a high-performance UVMS controller, so the effectiveness and robustness of the controller are crucial. Xu et al. adopted fuzzy control techniques for a 6-DOF AUV carrying a 3-DOF on-board manipulator [1]. Wei et al. applied a nonlinear disturbance observer to a UVMS to estimate unpredictable external disturbances in real time, together with an adaptive sliding mode approach for compensation [2]. Mobayen et al. adopted a continuous nonsingular fast terminal sliding mode control with time-delay estimation, which ensures satisfactory tracking performance and sufficient robustness for a UVMS [3]. Wang et al. combined sliding mode control and adaptive fuzzy control into a multi-strategy fusion controller that addresses the motion control problem of a UVMS [4]. Luo et al. applied neural networks to the tracking control of a 3-link UVMS, and the robustness of the controller was verified by comparison with the PD control method [5]. Mofid et al. applied a fuzzy terminal sliding mode control approach with time-delay estimation, which uses fuzzy rules to adaptively shape the terminal sliding surface and reject the unpredictable internal and external disturbances acting on the manipulator [6]. Woolfrey et al. applied model predictive control to the kinematic control of a UVMS affected by wave-induced motion, with results showing excellent predictive performance [7]. Han and Chung proposed an approach that exploits restoring moments for the motion control of a UVMS under external disturbance [8].
+
This article proposes a fast nonsingular terminal sliding mode cascade controller combined with an RBF neural network for the manipulator control problem of a UVMS. The interaction between the vehicle and the manipulator is the main source of external disturbance acting on the UVMS. The Lyapunov approach is applied to verify the stability of the cascade controller, and the effectiveness and robustness of the designed controller are demonstrated by numerical simulation.
+
+## II. Problem formulation
+
When the UVMS moves to the working area, the underwater vehicle body sometimes needs to maintain a stable hover while the manipulator works according to the task requirements. In this case, the body-fixed reference frame attached to the underwater vehicle body can be treated as the earth-fixed inertial reference frame, and the motion of the entire UVMS can be regarded as the motion control of the underwater robotic manipulator subject to disturbance.
+
Since the influence of the underwater vehicle on the robotic manipulator is difficult to express with a mathematical model, it can be regarded as a disturbance acting on the manipulator. The nonlinear dynamics of the underwater robotic manipulator is written as
+
+$$
+M\left( q\right) \ddot{q} + C\left( {q,\dot{q}}\right) \dot{q} + D\left( {q,\dot{q}}\right) \dot{q} + G\left( q\right) + \Delta = {\tau }_{ms} \tag{1}
+$$
+
where $\Delta$ denotes the uncertainty induced by the interaction between the underwater vehicle and the manipulator, $M$ denotes the inertia matrix, $C$ denotes the Coriolis-centripetal matrix, $D$ denotes the hydrodynamic damping matrix, $G$ denotes the equivalent gravity vector, and ${\tau }_{ms}$ denotes the control input.
+
+---
+
+Corresponding Author: W. Luo
+
+This work was supported by the Natural Science Foundation of Fujian Province, China through Grant 2023J011572, and Fuzhou Institute of Oceanography through Grants 2021F11 & 2022F13.
+
+---
+
Fig. 1 shows the three-link UVMS formed by mounting the underwater robotic manipulator on the underwater vehicle, in the manipulator's starting configuration. Each joint at the hinge of the connecting rods is driven by a motor, which realizes the three degrees of freedom required for the manipulator's operation.
+
+
+
+Fig. 1 Three-link manipulator UVMS
+
Since each joint of the underwater robotic manipulator is driven by a DC motor, the motor driving torque can be described as
+
+$$
+{\tau }_{me} = {K}_{me}I \tag{2}
+$$
+
where $I$ denotes the electrical current and ${K}_{me}$ denotes the coefficient matrix mapping electrical current to torque.
+
+The dynamics of the electrical circuit can be described as
+
+$$
+{\tau }_{e} = {L}_{e}\dot{I} + {R}_{e}I + {K}_{e}\dot{q} \tag{3}
+$$
+
where ${\tau }_{e},{L}_{e},{R}_{e}$ denote the motor coil voltage vector, inductance matrix and resistance matrix, respectively, and ${K}_{e}$ denotes the back-EMF constant matrix.
+
Equations (1) and (3) then constitute a cascaded system composed of a mechanical subsystem and an electrical subsystem.
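The cascade of Equations (1)-(3) can be illustrated with a minimal single-joint simulation. The sketch below is a 1-DOF scalar Euler integration with illustrative parameter values (the paper's system is multi-DOF with matrix coefficients); it only shows how the electrical subsystem (3) feeds torque into the mechanical subsystem (1) through Equation (2).

```python
# Illustrative 1-DOF scalar coefficients (hypothetical, not the paper's).
M, C, D, G = 1.0, 0.1, 0.2, 0.0           # mechanical Eq. (1)
L_e, R_e, K_e, K_me = 0.1, 1.0, 0.5, 1.0  # electrical Eq. (3) / torque Eq. (2)

def cascade_step(q, dq, i, tau_e, dt=1e-3, delta=0.0):
    """One Euler step of the cascade: voltage tau_e drives the current
    via Eq. (3); the current produces torque via Eq. (2); the torque
    drives the joint via Eq. (1)."""
    di = (tau_e - R_e * i - K_e * dq) / L_e            # Eq. (3)
    tau_ms = K_me * i                                  # Eq. (2)
    ddq = (tau_ms - C * dq - D * dq - G - delta) / M   # Eq. (1)
    return q + dt * dq, dq + dt * ddq, i + dt * di

q, dq, i = 0.0, 0.0, 0.0
for _ in range(2000):          # 2 s under a constant 1 V input
    q, dq, i = cascade_step(q, dq, i, tau_e=1.0)
```

With these values the joint velocity settles toward the steady state implied by the back-EMF and damping terms, illustrating the two time scales of the cascade.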
+
+## III. CONTROLLER DESIGN
+
### A. NN-Based Controller
+
According to Equation (2), a desired trajectory is specified for the joint angles of the underwater robotic manipulator. Defining the desired joint angle as ${q}_{d}$ and considering Equations (1) and (3), the desired input signal of the electrical current can be described as
+
+$$
+{I}_{d} = {K}_{me}^{-1}\left( {M{\ddot{q}}_{d} + C\dot{q} + D\dot{q} + G + \Delta + {\tau }_{1}}\right) \tag{4}
+$$
+
where ${\tau }_{1}$ denotes the auxiliary controller for the dynamics of the underwater robotic manipulator. Similarly, the auxiliary controller of the electrical system can be designed as
+
+$$
+{\tau }_{e} = {R}_{e}{I}_{d} + {K}_{e}{\dot{q}}_{d} + {\tau }_{2} \tag{5}
+$$
+
+where ${\tau }_{2}$ represents the auxiliary controller for electrical system.
+
+Further, define joint tracking error as
+
+$$
+e = {q}_{d} - q \tag{6}
+$$
+
To guarantee convergence quality, design the fast terminal sliding surface as
+
+$$
+s = \dot{e} + {\alpha }_{1}{\operatorname{sign}}^{{\gamma }_{1}}\left( e\right) + {\alpha }_{2}{\operatorname{sign}}^{{\gamma }_{2}}\left( e\right) \tag{7}
+$$
+
where ${\operatorname{sign}}^{\Delta }\left( \cdot \right) = {\left| \cdot \right| }^{\Delta }\operatorname{sign}\left( \cdot \right)$ , ${\gamma }_{1} \geq 1$ , $0 \leq {\gamma }_{2} \leq 1$ , and ${\alpha }_{1}$ , ${\alpha }_{2}$ are positive gain matrices.
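A minimal scalar implementation of the ${\operatorname{sign}}^{\gamma}$ operator and the surface of Equation (7); the gains and exponents are illustrative values satisfying the stated constraints, not values from the paper.

```python
def sig(x, gamma):
    """sign^gamma(x) = |x|**gamma * sign(x), the operator of Eq. (7)."""
    return abs(x) ** gamma * (1.0 if x > 0 else -1.0 if x < 0 else 0.0)

def ftsm_surface(e, de, a1=2.0, a2=1.0, g1=1.5, g2=0.6):
    """Scalar fast terminal sliding surface of Eq. (7):
    s = de + a1*sig(e, g1) + a2*sig(e, g2)."""
    return de + a1 * sig(e, g1) + a2 * sig(e, g2)
```

The ${\gamma }_{1}$ -term dominates far from the origin and the ${\gamma }_{2}$ -term near it, which is what gives the surface its fast terminal convergence.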
+
The derivative of the fast terminal sliding surface is
+
+$$
+\dot{s} = \ddot{e} + \left( {{\alpha }_{1}{\gamma }_{1}{\left| e\right| }^{{\gamma }_{1} - 1} + {\alpha }_{2}{\gamma }_{2}{\left| e\right| }^{{\gamma }_{2} - 1}}\right) \dot{e} \tag{8}
+$$
+
+To facilitate calculation, auxiliary variables are introduced as
+
+$$
+\left\{ \begin{array}{l} \vartheta = {\alpha }_{1}{\operatorname{sign}}^{{\gamma }_{1}}\left( e\right) + {\alpha }_{2}{\operatorname{sign}}^{{\gamma }_{2}}\left( e\right) \\ \mu = {\alpha }_{1}{\gamma }_{1}{\left| e\right| }^{{\gamma }_{1} - 1} + {\alpha }_{2}{\gamma }_{2}{\left| e\right| }^{{\gamma }_{2} - 1} \end{array}\right. \tag{9}
+$$
+
+Substituting Equation (9) into Equations (7) and (8) yields
+
+$$
+\left\{ \begin{array}{l} s = \dot{e} + \vartheta \\ \dot{s} = \ddot{e} + \mu \dot{e} \end{array}\right. \tag{10}
+$$
+
Defining the electrical current error as $\eta = {I}_{d} - I$ , one has
+
+$$
+M\left( q\right) \dot{s} = {M\mu }\dot{e} + M\ddot{e}
+$$
+
+$$
+= {M\mu }\dot{e} + M\left( {{\ddot{q}}_{d} - \ddot{q}}\right) \tag{11}
+$$
+
+$$
+= {M\mu }\dot{e} + {K}_{me}\eta - C\dot{e} + \Delta - {\tau }_{1}
+$$
+
+and
+
+$$
+L\dot{\eta } = L{\dot{I}}_{d} - L\dot{I} = - {R\eta } - K\left( {s - \vartheta }\right) - {\tau }_{2} + L{\dot{I}}_{d}. \tag{12}
+$$
+
To drive the errors of Equations (11) and (12) to zero, the Lyapunov design method is utilized, and a positive definite Lyapunov function can be written as
+
+$$
+{V}_{1} = \frac{1}{2}\left( {{e}^{\mathrm{T}}e + {s}^{\mathrm{T}}{Ms} + {\eta }^{\mathrm{T}}{L\eta }}\right) \tag{13}
+$$
+
The time derivative of ${V}_{1}$ satisfies
+
+$$
+{\dot{V}}_{1} = {s}^{T}\left( {e + {M\mu }\dot{e} + {C\vartheta } + \Delta - {\tau }_{1}}\right) - {e}^{T}\vartheta \tag{14}
+$$
+
+$$
++ {\eta }^{T}\left\lbrack {-{R}_{e}\eta + {K}_{me}s + K\left( {s - \vartheta }\right) + {L}_{e}{\dot{I}}_{d} - {\tau }_{2}}\right\rbrack
+$$
+
Equation (14) contains nonlinear terms, which affect the control results of the trajectory tracking of the underwater robotic manipulator. For this reason, an RBF neural network is adopted to estimate the nonlinear terms. In detail, let
+
+$$
+\left\{ \begin{array}{l} {f}_{1} = e + {\mu M}\dot{e} + {C\vartheta } + \Delta = {W}_{1}^{\mathrm{T}}{h}_{1}\left( x\right) + {\varepsilon }_{1} \\ {f}_{2} = {K}_{me}s - {R}_{e}\eta + {K}_{e}\left( {s - \vartheta }\right) + {L}_{e}{\dot{I}}_{d} = {W}_{2}^{\mathrm{T}}{h}_{2}\left( x\right) + {\varepsilon }_{2} \end{array}\right. \tag{15}
+$$
+
where ${W}_{i},{h}_{i},{\varepsilon }_{i}$ denote the weights, basis function outputs and approximation errors, respectively.
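A sketch of the approximation in Equation (15): a Gaussian radial basis function hidden layer $h(x)$ and the linear readout ${W}^{\mathrm{T}}h(x)$ . The Gaussian basis, the centers and the width are common RBFNN choices assumed here; the paper does not specify them.

```python
import math

def rbf_hidden(x, centers, width=1.0):
    """Gaussian basis outputs h_j(x) = exp(-||x - c_j||^2 / width^2)."""
    return [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / width ** 2)
            for c in centers]

def rbf_output(W, h):
    """Network output W^T h(x); W holds one weight vector per output."""
    return [sum(wj * hj for wj, hj in zip(row, h)) for row in W]
```

With enough centers, such a network can approximate the continuous nonlinear terms ${f}_{1},{f}_{2}$ to within the residuals ${\varepsilon }_{i}$ .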
+
+The controllers ${\tau }_{1}$ and ${\tau }_{2}$ can be given as
+
+$$
+\left\{ \begin{array}{l} {\tau }_{1} = {W}_{1e}^{\mathrm{T}}{h}_{1}\left( x\right) + {\alpha }_{1}{Ms} \\ {\tau }_{2} = {W}_{2e}^{\mathrm{T}}{h}_{2}\left( x\right) + {\alpha }_{2}{L\eta } \end{array}\right. \tag{16}
+$$
+
+where ${W}_{ie}$ denote updated weight matrices.
+
To achieve good robustness of the neural network controller, the weight update law is designed as
+
+$$
+\left\{ \begin{array}{l} {\dot{W}}_{1e} = {k}_{1}{h}_{1}\left( {X}_{1}\right) {s}^{\mathrm{T}} - {k}_{2}{W}_{1e} \\ {\dot{W}}_{2e} = {k}_{1}{h}_{2}\left( {X}_{2}\right) {\eta }^{\mathrm{T}} - {k}_{2}{W}_{2e} \end{array}\right. \tag{17}
+$$
+
As pointed out in [9], in a conventional fast terminal sliding mode approach, the term ${\alpha }_{2}{\gamma }_{2}{\left| e\right| }^{{\gamma }_{2} - 1}\dot{e}$ in Equation (8) becomes singular as $e \rightarrow 0$ . To deal with this singular phenomenon, one may use the following saturation
+
+$$
+\operatorname{sat}\left( {v}_{z}\right) = \left\{ \begin{matrix} {v}_{z} & \left| {v}_{z}\right| \leq \bar{w} \\ \bar{w}\operatorname{sign}\left( {v}_{z}\right) & \left| {v}_{z}\right| \geq \bar{w} \end{matrix}\right. \tag{18}
+$$
+
where ${v}_{z} = {\alpha }_{2}{\gamma }_{2}{\left| e\right| }^{{\gamma }_{2} - 1}\dot{e}$ and $\bar{w}$ is a positive number.
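The saturation of Equation (18) can be sketched directly; $\bar{w} = 0.5$ follows Table I, while the values of ${\alpha }_{2},{\gamma }_{2}$ and $\dot{e}$ below are illustrative. The point is that ${v}_{z}$ blows up as $e \rightarrow 0$ while $\operatorname{sat}({v}_{z})$ stays bounded.

```python
def sat(v, w_bar=0.5):
    """Saturation of Eq. (18): identity for |v| <= w_bar, clipped
    to +/- w_bar otherwise (w_bar = 0.5 as in Table I)."""
    return v if abs(v) <= w_bar else w_bar * (1.0 if v > 0 else -1.0)

# The term v_z = a2*g2*|e|**(g2 - 1)*de grows without bound as e -> 0
# (since g2 < 1), but its saturated version stays bounded.
a2, g2, de = 1.0, 0.6, 0.1        # illustrative values
v_z = a2 * g2 * abs(1e-9) ** (g2 - 1.0) * de
```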
+
Substituting Equation (18) into Equation (7), and replacing the fast terminal sliding mode (FTSM) surface with the nonsingular fast terminal sliding mode (NFTSM) surface, yields
+
+$$
{\dot{s}}_{2} = \ddot{e} + {\alpha }_{1}{\gamma }_{1}{\left| e\right| }^{{\gamma }_{1} - 1}\dot{e} + {v}_{z} \tag{19}
+$$
+
+Similarly, we can get
+
+$$
+M\left( q\right) {\dot{s}}_{2} = M{\alpha }_{1}{\gamma }_{1}\dot{e}{\left| e\right| }^{{\gamma }_{1} - 1} + {v}_{z} + M\ddot{e}
+$$
+
+$$
+= M{\alpha }_{1}{\gamma }_{1}\dot{e}{\left| e\right| }^{{\gamma }_{1} - 1} + {v}_{z} + M\left( {{\ddot{q}}_{d} - \ddot{q}}\right) \tag{20}
+$$
+
+$$
+= M{\alpha }_{1}{\gamma }_{1}\dot{e}{\left| e\right| }^{{\gamma }_{1} - 1} + {v}_{z} + {K}_{me}\eta - C\dot{e} + \Delta - {\tau }_{1}
+$$
+
+To guarantee the stability, Lyapunov function is defined as
+
+$$
+{V}_{2} = \frac{1}{2}\left( {{e}^{T}e + {s}_{2}{}^{T}M{s}_{2} + {\eta }^{T}{L\eta }}\right) \tag{21}
+$$
+
+Its derivative is
+
+$$
+{\dot{V}}_{2} = {s}_{2}^{T}\left( {e + M{\alpha }_{1}{\gamma }_{1}{\left| e\right| }^{{\gamma }_{1} - 1}\dot{e} + {v}_{z} + {C\vartheta } + \Delta - {\tau }_{1}}\right) - {e}^{T}\vartheta \tag{22}
+$$
+
+$$
++ {\eta }^{T}\left\lbrack {-{R\eta } + {K}_{m}s + k\left( {s - \vartheta }\right) + L{\dot{I}}_{d} - {\tau }_{2}}\right\rbrack
+$$
+
Combined with Equation (15), the nonlinear term in the above expression can be expressed as
+
+$$
+{f}_{3} = e + M{\alpha }_{1}{\gamma }_{1}{\left| e\right| }^{{\gamma }_{1} - 1}\dot{e} + {C\vartheta } + \Delta = {W}_{1N}^{\mathrm{T}}{h}_{1}\left( x\right) + {\varepsilon }_{1} \tag{23}
+$$
+
+The auxiliary controllers ${\bar{\tau }}_{1}$ and ${\tau }_{2}$ can be described as
+
+$$
+\left\{ \begin{array}{l} {\bar{\tau }}_{1} = {W}_{1Ne}^{T}{h}_{1}\left( x\right) + {\alpha }_{1}M{s}_{2} - M{v}_{z} \\ {\tau }_{2} = {W}_{2e}^{\mathrm{T}}{h}_{2}\left( x\right) + {\alpha }_{2}{L\eta } \end{array}\right. \tag{24}
+$$
+
### B. Stability Analysis
+
+A Lyapunov function is designed as
+
+$$
+{V}_{3} = {V}_{2} + \frac{1}{2{k}_{1}}\mathop{\sum }\limits_{{i = 1}}^{2}{\begin{Vmatrix}{\widetilde{W}}_{i}\end{Vmatrix}}_{F}^{2} \tag{25}
+$$
+
+where ${\widetilde{W}}_{i} = {W}_{i} - {W}_{ie}$ represents weight error.
+
+Its derivative is
+
+$$
+{\dot{V}}_{3} \leq - 2{\alpha }_{0}{V}_{3} + {s}^{T}{\varepsilon }_{1} + {\eta }^{T}{\varepsilon }_{2} - a\left( {{\alpha }_{1}{s}^{T}{Ms} + {\alpha }_{2}{\eta }^{T}{L\eta }}\right)
+$$
+
+$$
++ {k}_{2}\left( {\mathop{\sum }\limits_{{i = 1}}^{2}{\left( {\widetilde{W}}_{i},{W}_{i}\right) }_{F} - a\mathop{\sum }\limits_{{i = 1}}^{2}{\begin{Vmatrix}{\widetilde{W}}_{i}\end{Vmatrix}}_{F}^{2}}\right) \tag{26}
+$$
+
+in which $0 \leq a \leq 1,{\alpha }_{0} = \min \left\{ {\left( {1 - a}\right) {\alpha }_{1},\left( {1 - a}\right) {\alpha }_{2},\left( {1 - a}\right) {k}_{2}}\right\}$ .
+
+In accordance with [10], it holds that
+
+$$
+{\dot{V}}_{2} \leq - 2{\alpha }_{0}{V}_{2} + \lambda ,\left( {\lambda > 0}\right) \tag{27}
+$$
+
Further, Equation (27) can be simplified as
+
+$$
+{\dot{V}}_{2} \leq - 2{\alpha }_{0}{V}_{2} \leq 0 \tag{28}
+$$
+
From Equations (27) and (28), it can be concluded that the tracking system is stable, which verifies the effectiveness of the controller for the UVMS underwater robotic manipulator.
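The bound of Equation (27) implies uniform ultimate boundedness of ${V}_{2}$ : by the comparison lemma, ${V}_{2}\left( t\right)$ converges exponentially into the ball of radius $\lambda /2{\alpha }_{0}$ . A minimal numerical sketch with illustrative constants (not values from the paper):

```python
# Worst case of Eq. (27): V_dot = -2*alpha0*V + lam. Euler-integrate and
# check convergence to the ultimate bound lam / (2*alpha0).
alpha0, lam = 0.8, 0.05        # illustrative constants
V, dt = 10.0, 1e-3

for _ in range(10_000):        # integrate for 10 s at dt = 1e-3
    V += dt * (-2.0 * alpha0 * V + lam)

ultimate_bound = lam / (2.0 * alpha0)
```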
+
+## IV. SIMULATION
+
To verify the validity and advantages of the designed tracking controller, i.e., the NN-based nonsingular fast terminal sliding mode (NN-NFTSM) controller, comparisons are conducted with the traditional PD control and neural network control approaches. TABLE I. lists the parameters of the robotic manipulator and the controller.
+
+TABLE I. PARAMETERS OF THE UVMS
+
| Items | Rod1 | Rod2 | Rod3 |
| --- | --- | --- | --- |
| Length (m) | 1 | 1 | 1 |
| Mass (kg) | 1 | 1 | 2 |
| ${L}_{e}$ | 0.1 | 0.1 | 0.1 |
| ${R}_{e}$ | 1 | 1 | 1 |
| ${K}_{e}$ | 0.5 | 0.5 | 0.5 |
| ${K}_{me}$ | 1 | 1 | 1 |
| $\bar{w}$ | 0.5 | ${\alpha }_{1},{\alpha }_{2}$ | 200 |
| ${k}_{p},{k}_{d}$ | 300 | ${k}_{1},{k}_{2}$ | 50, 0.8 |
+
Since the underwater robotic manipulator is mounted on the underwater vehicle to form the UVMS, the first joint of the manipulator interacts directly with the vehicle. In the simulation, this interaction is assumed to be a transient disturbance signal: a force of ${200}\mathrm{\;N}$ is applied to the vehicle at $t = {1.7}\mathrm{\;s}$ .
+
Fig. 2 displays the spatial tracking performance of the UVMS end effector. It can be seen that the proposed NN-NFTSM controller clearly outperforms the traditional PD control and neural network control methods.
+
+
+
+Fig. 2 Spatial tracking effect of UVMS end effector
+
Fig. 3 shows the results of joint angle tracking control. Both the neural-network-based nonsingular fast terminal sliding mode controller and the neural-network-based sliding mode controller achieve higher tracking stability than PD control.
+
+
+
+Fig. 3 Results of joint angle tracking control
+
Fig. 4 and Fig. 5 display the tracking performance of the UVMS end effector in the $x, y, z$ directions. Under neural network control, the tracking in all three directions reaches stability, and the proposed nonsingular fast terminal sliding mode control method combined with the RBF neural network tracks the desired trajectory more quickly and stably.
+
+
+
+Fig. 4 Tracking effect of UVMS end effector in x, y, z directions.
+
+
+
+Fig. 5 UVMS end effector tracking error
+
Fig. 6 - Fig. 8 compare the MAE and RMSE under the three control schemes. NN-NFTSM achieves higher accuracy than the RBF neural network (NN) control and PD control.
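For reference, the two error metrics compared in Fig. 6 - Fig. 8 can be computed as follows; the error sequences themselves come from the simulation and are not reproduced here.

```python
import math

def mae(errors):
    """Mean absolute error of a tracking-error sequence."""
    return sum(abs(e) for e in errors) / len(errors)

def rmse(errors):
    """Root-mean-square error of a tracking-error sequence."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))
```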
+
+
+
+Fig. 8 Error in z direction
+
+## V. CONCLUSION
+
In this article, an RBFNN-based fast nonsingular terminal sliding mode controller is designed for the UVMS. The nonlinear terms of the UVMS system are approximated by the RBF neural network. Lyapunov design is adopted to establish the stability and feasibility of the proposed controller, and it is proved that the tracking errors converge to a small neighborhood of zero within finite time. Finally, the simulation results confirm the excellent performance of the proposed controller on the UVMS system.
+
+## ACKNOWLEDGMENT
+
+The work in the paper is partly supported by the Natural Science Foundation of Fujian Province of China, Grant 2023J011572, and partly supported by Fuzhou Institute of Oceanography, Grants 2021F11 & 2022F13.
+
+## REFERENCES
+
[1] Xu B., Pandian S. R., Sakagami N., Petry F. Neuro-fuzzy control of underwater vehicle-manipulator systems[J]. Journal of the Franklin Institute, 2012, Vol. 349(3): 1125-1138.
+
+[2] Wei Chen, Ming Wei, Yuhang Zhang, Di Lu and Shilin Hu. Research on Adaptive Sliding Mode Control of UVMS Based on Nonlinear Disturbance Observation[J]. Mathematical Problems in Engineering, 2022, Vol.
+
[3] S. Mobayen, O. Mofid, S. U. Din, and A. Bartoszewicz, "Finite time tracking controller design of perturbed robotic manipulator based on adaptive second-order sliding mode control method," IEEE Access, vol. 9, Article ID 71159, 2021.
+
[4] Y. Wang, B. Chen, and H. Wu, "Joint space tracking control of underwater vehicle-manipulator systems using continuous nonsingular fast terminal sliding mode," Proceedings of the Institution of Mechanical Engineers, Part M: Journal of Engineering for the Maritime Environment, vol. 232, no. 4, pp. 448-458, 2018.
+
[5] Luo W. L., Cong H. C. Robust NN Control of the Manipulator in the Underwater Vehicle-Manipulator System[J]. Advances in Neural Networks, Pt II, 2017, Vol. 10262: 75-82.
+
+[6] O. Mofid, S. Mobayen, and A. Fekih, "Adaptive integral-type terminal sliding mode control for unmanned aerial vehicle under model uncertainties and external disturbances," IEEE Access, vol. 9, Article ID 53255, 2021.
+
+[7] Woolfrey, J., Liu, D., Carmichael, M. Kinematic control of an autonomous underwater vehicle-manipulator system (AUVMS) using autoregressive prediction of vehicle motion and model predictive control. In: 2016 IEEE International Conference on Robotics and Automation, pp. 4591-4596. IEEE Press, New York (2016)
+
+[8] Han, J., Chung, W.K., Sakagami, N., Petry, F.: Active use of restoring moments for motion control of an underwater vehicle-manipulator system. IEEE J. Ocean. Eng. 39(1), 100-109 (2014)
+
+[9] Chen Z, Yang X, Liu X. RBFNN-based nonsingular fast terminal sliding mode control for robotic manipulators including actuator dynamics[J]. Neurocomputing, 2019, 362: 72-82.
+
+[10] Luo, WL. A new neural network control method for electrically driven rigid manipulator [D]. Fuzhou University,2002.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/ImUUzCj4k8/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/ImUUzCj4k8/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..cfe64a9333b3754cf39e247812a037acf2e221e7
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/ImUUzCj4k8/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,349 @@
+§ UVMS TRAJECTORY TRACKING BASED ON RBFNN AND SLIDING MODE CONTROL
+
+Huiyi Luo
+
+Fuzhou Institute of Oceanography, Fuzhou University, Fuzhou 350108, China College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China 18278811826@163.com
+
+Weilin Luo
+
Fuzhou Institute of Oceanography, Fuzhou University, Fuzhou 350108, China College of Mechanical Engineering and Automation, Fuzhou University,
+
+Fuzhou 350108, China;
+
+wlluo@fzu.edu.cn
+
+Yuanjing Wang
+
+College of Mechanical Engineering and
+
+Automation, Fuzhou University,
+
+Fuzhou 350108, China
+
+xyjw325@163.com
+
${Abstract}$ - This article addresses the trajectory tracking control problem of an electrically driven UVMS. First, a tracking control strategy for the UVMS is designed based on Radial Basis Function Neural Networks (RBFNN) and the Nonsingular Fast Terminal Sliding Mode (NFTSM) method. Further, to avoid the singularity problem, a saturation-based tracking controller is derived using the methods mentioned above. Lyapunov design is adopted to guarantee the asymptotic stability of the proposed controller. Simulation results show that the tracking performance of NN-NFTSM is better than that of the PD and NN approaches, verifying the validity and advantages of the proposed controller.
+
+Keywords-UVMS, electric drive, trajectory tracking, fast nonsingular terminal sliding mode, RBF neural network
+
+§ I. INTRODUCTION
+
The Underwater Vehicle-Manipulator System (UVMS), which can operate an underwater manipulator to complete underwater tasks in place of human beings, is currently an effective means of developing ocean resources. A UVMS usually consists of one or more n-link manipulators mounted on an underwater vehicle such as an ROV (Remotely Operated Vehicle) or AUV (Autonomous Underwater Vehicle). UVMS are significant for underwater operations such as real-time underwater filming, underwater target reconnaissance and surveillance, marine resource exploitation, and marine bioprospecting. The UVMS plays a supporting role in various marine underwater missions and has become a research focus for many scholars.
+
Handling the uncertainties of the underwater environment, such as currents and oceanic internal waves, is the biggest challenge in designing a high-performance UVMS controller, so the effectiveness and robustness of the controller are crucial. Xu et al. adopted fuzzy control techniques for a 6-DOF AUV carrying a 3-DOF on-board manipulator [1]. Wei et al. applied a nonlinear disturbance observer to a UVMS to estimate unpredictable external disturbances in real time, together with an adaptive sliding mode approach for compensation [2]. Mobayen et al. adopted a continuous nonsingular fast terminal sliding mode control with time-delay estimation, which ensures satisfactory tracking performance and sufficient robustness for a UVMS [3]. Wang et al. combined sliding mode control and adaptive fuzzy control into a multi-strategy fusion controller that addresses the motion control problem of a UVMS [4]. Luo et al. applied neural networks to the tracking control of a 3-link UVMS, and the robustness of the controller was verified by comparison with the PD control method [5]. Mofid et al. applied a fuzzy terminal sliding mode control approach with time-delay estimation, which uses fuzzy rules to adaptively shape the terminal sliding surface and reject the unpredictable internal and external disturbances acting on the manipulator [6]. Woolfrey et al. applied model predictive control to the kinematic control of a UVMS affected by wave-induced motion, with results showing excellent predictive performance [7]. Han and Chung proposed an approach that exploits restoring moments for the motion control of a UVMS under external disturbance [8].
+
This article proposes a fast nonsingular terminal sliding mode cascade controller combined with an RBF neural network for the manipulator control problem of a UVMS. The interaction between the vehicle and the manipulator is the main source of external disturbance acting on the UVMS. The Lyapunov approach is applied to verify the stability of the cascade controller, and the effectiveness and robustness of the designed controller are demonstrated by numerical simulation.
+
+§ II. PROBLEM FORMULATION
+
When the UVMS moves to the working area, the underwater vehicle body sometimes needs to maintain a stable hover while the manipulator works according to the task requirements. In this case, the body-fixed reference frame attached to the underwater vehicle body can be treated as the earth-fixed inertial reference frame, and the motion of the entire UVMS can be regarded as the motion control of the underwater robotic manipulator subject to disturbance.
+
Since the influence of the underwater vehicle on the robotic manipulator is difficult to express with a mathematical model, it can be regarded as a disturbance acting on the manipulator. The nonlinear dynamics of the underwater robotic manipulator is written as
+
+$$
+M\left( q\right) \ddot{q} + C\left( {q,\dot{q}}\right) \dot{q} + D\left( {q,\dot{q}}\right) \dot{q} + G\left( q\right) + \Delta = {\tau }_{ms} \tag{1}
+$$
+
where $\Delta$ denotes the uncertainty induced by the interaction between the underwater vehicle and the manipulator, $M$ denotes the inertia matrix, $C$ denotes the Coriolis-centripetal matrix, $D$ denotes the hydrodynamic damping matrix, $G$ denotes the equivalent gravity vector, and ${\tau }_{ms}$ denotes the control input.
+
+Corresponding Author: W. Luo
+
+This work was supported by the Natural Science Foundation of Fujian Province, China through Grant 2023J011572, and Fuzhou Institute of Oceanography through Grants 2021F11 & 2022F13.
+
Fig. 1 shows the three-link UVMS formed by mounting the underwater robotic manipulator on the underwater vehicle, in the manipulator's starting configuration. Each joint at the hinge of the connecting rods is driven by a motor, which realizes the three degrees of freedom required for the manipulator's operation.
+
+
+Fig. 1 Three-link manipulator UVMS
+
Since each joint of the underwater robotic manipulator is driven by a DC motor, the motor driving torque can be described as
+
+$$
+{\tau }_{me} = {K}_{me}I \tag{2}
+$$
+
where $I$ denotes the electrical current and ${K}_{me}$ denotes the coefficient matrix mapping electrical current to torque.
+
+The dynamics of the electrical circuit can be described as
+
+$$
+{\tau }_{e} = {L}_{e}\dot{I} + {R}_{e}I + {K}_{e}\dot{q} \tag{3}
+$$
+
where ${\tau }_{e},{L}_{e},{R}_{e}$ denote the motor coil voltage vector, inductance matrix and resistance matrix, respectively, and ${K}_{e}$ denotes the back-EMF constant matrix.
+
Equations (1) and (3) then constitute a cascaded system composed of a mechanical subsystem and an electrical subsystem.
+
+§ III. CONTROLLER DESIGN
+
§ A. NN-BASED CONTROLLER
+
According to Equation (2), a desired trajectory is specified for the joint angles of the underwater robotic manipulator. Defining the desired joint angle as ${q}_{d}$ and considering Equations (1) and (3), the desired input signal of the electrical current can be described as
+
+$$
+{I}_{d} = {K}_{me}^{-1}\left( {M{\ddot{q}}_{d} + C\dot{q} + D\dot{q} + G + \Delta + {\tau }_{1}}\right) \tag{4}
+$$
+
where ${\tau }_{1}$ denotes the auxiliary controller for the dynamics of the underwater robotic manipulator. Similarly, the auxiliary controller of the electrical system can be designed as
+
+$$
+{\tau }_{e} = {R}_{e}{I}_{d} + {K}_{e}{\dot{q}}_{d} + {\tau }_{2} \tag{5}
+$$
+
+where ${\tau }_{2}$ represents the auxiliary controller for electrical system.
+
+Further, define joint tracking error as
+
+$$
+e = {q}_{d} - q \tag{6}
+$$
+
To guarantee convergence quality, design the fast terminal sliding surface as
+
+$$
+s = \dot{e} + {\alpha }_{1}{\operatorname{sign}}^{{\gamma }_{1}}\left( e\right) + {\alpha }_{2}{\operatorname{sign}}^{{\gamma }_{2}}\left( e\right) \tag{7}
+$$
+
+where ${\operatorname{sign}}^{\Delta }\left( \cdot \right) = {\left| \cdot \right| }^{\Delta }\operatorname{sign}\left( \cdot \right)$ , ${\gamma }_{1} \geq 1$ , $0 < {\gamma }_{2} < 1$ , and ${\alpha }_{1}$ , ${\alpha }_{2}$ are positive gain matrices.
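A minimal numerical sketch of the surface (7) and of the ${\operatorname{sign}}^{\Delta}(\cdot)$ operator defined above; the gains and exponents below are hypothetical.

```python
import math

# Minimal sketch of the fast terminal sliding surface (7). sig(x, g)
# implements sign^g(x) = |x|^g * sign(x); gains and exponents are
# hypothetical example values.

def sig(x, g):
    return abs(x) ** g * math.copysign(1.0, x) if x != 0 else 0.0

def ftsm_surface(e, e_dot, a1=1.0, a2=1.0, g1=1.5, g2=0.5):
    """s = e_dot + a1 * sig(e, g1) + a2 * sig(e, g2)."""
    return e_dot + a1 * sig(e, g1) + a2 * sig(e, g2)
```

The fractional-power term `sig(e, g2)` dominates near the origin, which is what gives the surface its fast terminal convergence.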
+
+The derivative of the fast terminal sliding surface is
+
+$$
+\dot{s} = \ddot{e} + \left( {{\alpha }_{1}{\gamma }_{1}{\left| e\right| }^{{\gamma }_{1} - 1} + {\alpha }_{2}{\gamma }_{2}{\left| e\right| }^{{\gamma }_{2} - 1}}\right) \dot{e} \tag{8}
+$$
+
+To facilitate calculation, auxiliary variables are introduced as
+
+$$
+\left\{ \begin{array}{l} \vartheta = {\alpha }_{1}{\operatorname{sign}}^{{\gamma }_{1}}\left( e\right) + {\alpha }_{2}{\operatorname{sign}}^{{\gamma }_{2}}\left( e\right) \\ \mu = {\alpha }_{1}{\gamma }_{1}{\left| e\right| }^{{\gamma }_{1} - 1} + {\alpha }_{2}{\gamma }_{2}{\left| e\right| }^{{\gamma }_{2} - 1} \end{array}\right. \tag{9}
+$$
+
+Substituting Equation (9) into Equations (7) and (8) yields
+
+$$
+\left\{ \begin{array}{l} s = \dot{e} + \vartheta \\ \dot{s} = \ddot{e} + \mu \dot{e} \end{array}\right. \tag{10}
+$$
+
+Defining the electrical current error as $\eta = {I}_{d} - I$ , one has
+
+$$
+M\left( q\right) \dot{s} = {M\mu }\dot{e} + M\ddot{e}
+$$
+
+$$
+= {M\mu }\dot{e} + M\left( {{\ddot{q}}_{d} - \ddot{q}}\right) \tag{11}
+$$
+
+$$
+= {M\mu }\dot{e} + {K}_{me}\eta - C\dot{e} + \Delta - {\tau }_{1}
+$$
+
+and
+
+$$
+{L}_{e}\dot{\eta } = {L}_{e}{\dot{I}}_{d} - {L}_{e}\dot{I} = - {R}_{e}\eta - {K}_{e}\left( {s - \vartheta }\right) - {\tau }_{2} + {L}_{e}{\dot{I}}_{d}. \tag{12}
+$$
+
+To drive the errors in Equations (11) and (12) to zero, the Lyapunov design method is utilized, and a positive definite Lyapunov function can be written as
+
+$$
+{V}_{1} = \frac{1}{2}\left( {{e}^{\mathrm{T}}e + {s}^{\mathrm{T}}{Ms} + {\eta }^{\mathrm{T}}{L\eta }}\right) \tag{13}
+$$
+
+The time derivative of ${V}_{1}$ is
+
+$$
+{\dot{V}}_{1} = {s}^{T}\left( {e + {M\mu }\dot{e} + {C\vartheta } + \Delta - {\tau }_{1}}\right) - {e}^{T}\vartheta \tag{14}
+$$
+
+$$
++ {\eta }^{T}\left\lbrack {-{R}_{e}\eta + {K}_{me}s + {K}_{e}\left( {s - \vartheta }\right) + {L}_{e}{\dot{I}}_{d} - {\tau }_{2}}\right\rbrack
+$$
+
+Equation (14) contains nonlinear terms that affect the trajectory tracking control of the underwater robotic manipulator. For this reason, an RBF neural network is adopted to estimate these nonlinear terms. In detail, let
+
+$$
+\left\{ \begin{array}{l} {f}_{1} = e + {\mu M}\dot{e} + {C\vartheta } + \Delta = {W}_{1}^{\mathrm{T}}{h}_{1}\left( x\right) + {\varepsilon }_{1} \\ {f}_{2} = {K}_{me}s - {R}_{e}\eta + {K}_{e}\left( {s - \vartheta }\right) + {L}_{e}{\dot{I}}_{d} = {W}_{2}^{\mathrm{T}}{h}_{2}\left( x\right) + {\varepsilon }_{2} \end{array}\right. \tag{15}
+$$
+
+where ${W}_{i},{h}_{i},{\varepsilon }_{i}$ denote the ideal weights, basis functions and approximation errors, respectively.
+
+The controllers ${\tau }_{1}$ and ${\tau }_{2}$ can be given as
+
+$$
+\left\{ \begin{array}{l} {\tau }_{1} = {W}_{1e}^{\mathrm{T}}{h}_{1}\left( x\right) + {\alpha }_{1}{Ms} \\ {\tau }_{2} = {W}_{2e}^{\mathrm{T}}{h}_{2}\left( x\right) + {\alpha }_{2}{L\eta } \end{array}\right. \tag{16}
+$$
+
+where ${W}_{ie}$ denote updated weight matrices.
+
+To achieve good robustness of the neural network controller, the weight update laws are designed as
+
+$$
+\left\{ \begin{array}{l} {\dot{W}}_{1e} = {k}_{1}{h}_{1}\left( {X}_{1}\right) {s}^{\mathrm{T}} - {k}_{2}{W}_{1e} \\ {\dot{W}}_{2e} = {k}_{1}{h}_{2}\left( {X}_{2}\right) {\eta }^{\mathrm{T}} - {k}_{2}{W}_{2e} \end{array}\right. \tag{17}
+$$
+
+As pointed out in [9], in a conventional sliding mode approach, the term ${\alpha }_{2}{\gamma }_{2}{\left| e\right| }^{{\gamma }_{2} - 1}\dot{e}$ in Equation (8) becomes singular as $e \rightarrow 0$ , since ${\gamma }_{2} - 1 < 0$ . In order to deal with this singular phenomenon, one might use the following saturation function
+
+$$
+\operatorname{sat}\left( {v}_{z}\right) = \left\{ \begin{matrix} {v}_{z} & \left| {v}_{z}\right| \leq \bar{w} \\ \bar{w}\operatorname{sign}\left( {v}_{z}\right) & \left| {v}_{z}\right| \geq \bar{w} \end{matrix}\right. \tag{18}
+$$
+
+where ${v}_{z} = {\alpha }_{2}{\gamma }_{2}{\left| e\right| }^{{\gamma }_{2} - 1}\dot{e},\bar{w}$ is a positive number.
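The saturation function (18) translates directly into code; the threshold values used below are hypothetical.

```python
def sat(v_z, w_bar):
    """Saturation (18): pass v_z through unchanged inside
    [-w_bar, w_bar], otherwise clip to w_bar * sign(v_z)."""
    if abs(v_z) <= w_bar:
        return v_z
    return w_bar if v_z > 0 else -w_bar
```

Clipping the fractional-power term in this way bounds it near $e = 0$, which is what removes the singularity.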
+
+Substituting Equation (18) into Equation (7), and replacing the fast terminal sliding mode (FTSM) surface with the nonsingular fast terminal sliding mode (NFTSM) surface, yields
+
+$$
+{\dot{s}}_{2} = \ddot{e} + {\alpha }_{1}{\gamma }_{1}\dot{e}{\left| e\right| }^{{\gamma }_{1} - 1} + {v}_{z} \tag{19}
+$$
+
+Similarly, we can get
+
+$$
+M\left( q\right) {\dot{s}}_{2} = M{\alpha }_{1}{\gamma }_{1}\dot{e}{\left| e\right| }^{{\gamma }_{1} - 1} + {v}_{z} + M\ddot{e}
+$$
+
+$$
+= M{\alpha }_{1}{\gamma }_{1}\dot{e}{\left| e\right| }^{{\gamma }_{1} - 1} + {v}_{z} + M\left( {{\ddot{q}}_{d} - \ddot{q}}\right) \tag{20}
+$$
+
+$$
+= M{\alpha }_{1}{\gamma }_{1}\dot{e}{\left| e\right| }^{{\gamma }_{1} - 1} + {v}_{z} + {K}_{me}\eta - C\dot{e} + \Delta - {\tau }_{1}
+$$
+
+To guarantee stability, a Lyapunov function is defined as
+
+$$
+{V}_{2} = \frac{1}{2}\left( {{e}^{T}e + {s}_{2}{}^{T}M{s}_{2} + {\eta }^{T}{L\eta }}\right) \tag{21}
+$$
+
+Its derivative is
+
+$$
+{\dot{V}}_{2} = {s}_{2}^{T}\left( {e + M{\alpha }_{1}{\gamma }_{1}{\left| e\right| }^{{\gamma }_{1} - 1}\dot{e} + {v}_{z} + {C\vartheta } + \Delta - {\tau }_{1}}\right) - {e}^{T}\vartheta \tag{22}
+$$
+
+$$
++ {\eta }^{T}\left\lbrack {-{R}_{e}\eta + {K}_{me}s + {K}_{e}\left( {s - \vartheta }\right) + {L}_{e}{\dot{I}}_{d} - {\tau }_{2}}\right\rbrack
+$$
+
+Combined with (15), the nonlinear term in the above expression can be cast as
+
+$$
+{f}_{3} = e + M{\alpha }_{1}{\gamma }_{1}{\left| e\right| }^{{\gamma }_{1} - 1}\dot{e} + {C\vartheta } + \Delta = {W}_{1N}^{\mathrm{T}}{h}_{1}\left( x\right) + {\varepsilon }_{1} \tag{23}
+$$
+
+The auxiliary controllers ${\bar{\tau }}_{1}$ and ${\tau }_{2}$ can be described as
+
+$$
+\left\{ \begin{array}{l} {\bar{\tau }}_{1} = {W}_{1Ne}^{T}{h}_{1}\left( x\right) + {\alpha }_{1}M{s}_{2} - M{v}_{z} \\ {\tau }_{2} = {W}_{2e}^{\mathrm{T}}{h}_{2}\left( x\right) + {\alpha }_{2}{L\eta } \end{array}\right. \tag{24}
+$$
+
+## B. STABILITY ANALYSIS
+
+A Lyapunov function is designed as
+
+$$
+{V}_{3} = {V}_{2} + \frac{1}{2{k}_{1}}\mathop{\sum }\limits_{{i = 1}}^{2}{\begin{Vmatrix}{\widetilde{W}}_{i}\end{Vmatrix}}_{F}^{2} \tag{25}
+$$
+
+where ${\widetilde{W}}_{i} = {W}_{i} - {W}_{ie}$ represents weight error.
+
+Its derivative is
+
+$$
+{\dot{V}}_{3} \leq - 2{\alpha }_{0}{V}_{3} + {s}^{T}{\varepsilon }_{1} + {\eta }^{T}{\varepsilon }_{2} - a\left( {{\alpha }_{1}{s}^{T}{Ms} + {\alpha }_{2}{\eta }^{T}{L\eta }}\right)
+$$
+
+$$
++ {k}_{2}\left( {\mathop{\sum }\limits_{{i = 1}}^{2}{\left( {\widetilde{W}}_{i},{W}_{i}\right) }_{F} - a\mathop{\sum }\limits_{{i = 1}}^{2}{\begin{Vmatrix}{\widetilde{W}}_{i}\end{Vmatrix}}_{F}^{2}}\right) \tag{26}
+$$
+
+in which $0 \leq a \leq 1,{\alpha }_{0} = \min \left\{ {\left( {1 - a}\right) {\alpha }_{1},\left( {1 - a}\right) {\alpha }_{2},\left( {1 - a}\right) {k}_{2}}\right\}$ .
+
+In accordance with [10], it holds that
+
+$$
+{\dot{V}}_{3} \leq - 2{\alpha }_{0}{V}_{3} + \lambda ,\;\lambda > 0 \tag{27}
+$$
+
+Further, whenever ${V}_{3} > \lambda /\left( {2{\alpha }_{0}}\right)$ , Equation (27) implies
+
+$$
+{\dot{V}}_{3} \leq - 2{\alpha }_{0}{V}_{3} + \lambda < 0 \tag{28}
+$$
+
+From Equations (27) and (28), it can be concluded that ${V}_{3}$ , and hence the tracking errors, are uniformly ultimately bounded, so the tracking system is stable. Thus, the effectiveness of the controller for the UVMS underwater robotic manipulator is verified.
+
+## IV. SIMULATION
+
+To verify the validity and advantages of the designed tracking controller, i.e., the neural-network-based nonsingular fast terminal sliding mode (NN-NFTSM) controller, comparisons are conducted with traditional PD control and neural network control approaches. TABLE I displays the parameters of the robotic manipulator and controller.
+
+TABLE I. PARAMETERS OF THE UVMS
+
+| Items | Rod1 | Rod2 | Rod3 |
+| --- | --- | --- | --- |
+| Length (m) | 1 | 1 | 1 |
+| Mass (kg) | 1 | 1 | 2 |
+| ${L}_{e}$ | 0.1 | 0.1 | 0.1 |
+| ${R}_{e}$ | 1 | 1 | 1 |
+| ${K}_{e}$ | 0.5 | 0.5 | 0.5 |
+| ${K}_{me}$ | 1 | 1 | 1 |
+
+Controller parameters: $\bar{w} = {0.5}$ , ${\alpha }_{1} = {\alpha }_{2} = {200}$ , ${k}_{p} = {k}_{d} = {300}$ , ${k}_{1} = {50}$ , ${k}_{2} = {0.8}$ .
+
+Since the underwater robotic manipulator is mounted on the underwater vehicle to form the UVMS, the first joint of the manipulator interacts directly with the vehicle. In the simulation, this interference is assumed to be a transient disturbance signal: a force of ${200}\mathrm{\;N}$ is applied to the vehicle at $t = {1.7}\mathrm{\;s}$ .
+
+Fig. 2 displays the spatial tracking performance of the UVMS end effector. It can be seen that the proposed NN-NFTSM controller clearly outperforms the traditional PD control and neural network control methods.
+
+
+Fig. 2 Spatial tracking effect of UVMS end effector
+
+Fig. 3 shows the results of joint angle tracking control. It is evident that both neural-network-based controllers, i.e., the NN-NFTSM controller and the neural network controller, achieve higher tracking stability than PD control.
+
+
+Fig. 3 Results of joint angle tracking control
+
+Figs. 4 and 5 display the tracking performance of the UVMS end effector in the $x, y, z$ directions. With neural network control, the tracking in all three directions reaches stability, and the proposed nonsingular fast terminal sliding mode control method combined with the RBF neural network tracks the desired trajectory more quickly and stably.
+
+
+Fig. 4 Tracking effect of UVMS end effector in x, y, z directions.
+
+
+Fig. 5 UVMS end effector tracking error
+
+Fig. 6 - Fig. 8 compare the MAE and RMSE under the three control schemes. The NN-NFTSM controller achieves higher accuracy than both the RBF neural network (NN) control and PD control.
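For reference, the two error metrics compared in Fig. 6 - Fig. 8 can be computed from a reference trajectory and an actual trajectory of equal length as follows.

```python
import math

# Mean absolute error and root-mean-square error between a reference
# trajectory and an actual trajectory, sampled at the same instants.

def mae(ref, actual):
    """Mean absolute error."""
    return sum(abs(r - a) for r, a in zip(ref, actual)) / len(ref)

def rmse(ref, actual):
    """Root-mean-square error."""
    return math.sqrt(sum((r - a) ** 2 for r, a in zip(ref, actual)) / len(ref))
```

RMSE weights large deviations more heavily than MAE, so the two metrics together separate average accuracy from peak error.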
+
+
+Fig. 8 Error in z direction
+
+## V. CONCLUSION
+
+In this article, an RBFNN-based nonsingular fast terminal sliding mode controller is designed for the UVMS. The nonlinear terms of the UVMS system are approximated by RBF neural networks, while the singularity of the terminal sliding surface is avoided by a saturation function. Lyapunov analysis is used to establish the stability and feasibility of the proposed controller, and it is proved that the tracking errors converge to a small neighborhood of zero within finite time. Finally, the simulation results confirm the excellent performance of the proposed controller on the UVMS system.
+
+## ACKNOWLEDGMENT
+
+The work in this paper was supported in part by the Natural Science Foundation of Fujian Province of China under Grant 2023J011572, and in part by the Fuzhou Institute of Oceanography under Grants 2021F11 and 2022F13.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/IuP6BhQcDi/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/IuP6BhQcDi/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..30c2e68f7dc271b1e0ea0f8f505910906b149625
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/IuP6BhQcDi/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,527 @@
+# Performance-Based Human-in-the-Loop Optimal Bipartite Consensus Control for Multi-Agent Systems via Reinforcement Learning
+
+Zongsheng Huang
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China Chengdu 611731, China
+
+zs_Huang@163.com
+
+Tieshan Li
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China Chengdu 611731, China
+
+tieshanli@126.com
+
+Yue Long
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu 611731, China
+
+longyue@uestc.edu.cn
+
+Hanqing Yang
+
+School of Automation Engineering University of Electronic Science and Technology of China Chengdu 611731, China
+
+hqyang5517@uestc.edu.cn
+
+Abstract—This paper investigates the performance-based human-in-the-loop (HiTL) optimal bipartite consensus control problem for nonlinear multi-agent systems (MASs) under signed topology. First, to respond to emergencies and guarantee the safety of the MASs, the MASs are monitored by a human operator who sends command signals to the non-autonomous leader. Then, under a joint design architecture of a prescribed-time performance function and an error transformation, a novel performance index function involving the transformed error and the control input is developed to achieve optimal bipartite consensus within a prescribed time. Subsequently, the reinforcement learning (RL) method is utilized to learn the solution of the Hamilton-Jacobi-Bellman (HJB) equation, in which fuzzy logic systems (FLSs) are employed to implement the method. Finally, the simulation results depict the effectiveness of the constructed control scheme.
+
+Index Terms-Human-in-the-loop control, prescribed-time control, reinforcement learning, nonlinear multi-agent systems.
+
+## I. INTRODUCTION
+
+In recent years, with the rapid development of multiple unmanned aerial vehicles (UAVs) [1], multiple unmanned ground vehicles (UGVs) [2] and other fields, multi-agent systems (MASs) have been paid more and more attention by scholars. As one of the hot issues in control problems of MASs, consensus control problems have been widely studied. As a branch of consensus control, bipartite consensus was first introduced in [3] taking both competition and cooperation relationships between agents into consideration. For bipartite consensus, the agents eventually converge to two states of opposite sign but equal size. In [4]-[6], the various control strategies of bipartite consensus have been designed broadly.
+
+Notably, the MASs mentioned above are fully autonomous. However, incidents with Boeing 737 jetliners and Tesla's autonomous driving systems have raised serious concerns and highlighted the challenges that fully autonomous MASs face in making judgments in uncertain and complex environments. Therefore, it is urgent to develop monitoring schemes to complete tasks when MASs encounter unexpected situations [7]. Fortunately, the human-in-the-loop (HiTL) control approach was introduced into MASs to supervise the entire system and respond to sudden changes by sending commands to the leader agent [8]. Later, many studies on HiTL control for MASs emerged in [9]-[15]. In [9], a HiTL formation tracking control scheme together with an edge-based event-driven mechanism was constructed for MASs. Considering stochastic actuation attacks, in [13], the prescribed-time and prescribed-accuracy HiTL cluster consensus control problem was solved. In view of its ability to deal with emergencies, the HiTL control approach has also been favored in multi-UAV systems [14], [15].
+
+Optimal control, a widely used control method, has garnered significant attention. For nonlinear systems, the optimal solution is derived from the Hamilton-Jacobi-Bellman (HJB) equation. However, obtaining an analytical solution of the HJB equation is infeasible. To overcome this challenge, reinforcement learning (RL), motivated by animal learning behaviors, was proposed as a powerful tool [16]. The core idea of RL is to approximate the solution of the HJB equation using a function approximation structure. The value iteration algorithm, one of the valuable algorithms in RL, was developed by Murray et al. in [17], in which the convergence analysis was also detailed. In [18], the policy iteration algorithm, another equally important algorithm, was designed to obtain the optimal saturated controller for nonlinear systems. Building on this work, the RL method has been used to solve optimal control problems for MASs. In [19], an optimal control protocol based on RL was designed to achieve containment control without prior knowledge of the system dynamics. For unknown discrete-time MASs, in [20], the optimal bipartite consensus control problem was solved. Nevertheless, the above results only guarantee that the closed-loop system under the optimal controller is asymptotically stable. It is important to note that achieving a specified accuracy within a given time is crucial in many fields.
+
+---
+
+This work was supported in part by the National Natural Science Foundation of China under Grant 51939001, Grant 62273072, and Grant 62203088, in part by the Natural Science Foundation of Sichuan Province under Grant 2022NSFSC0903.(Corresponding author: Tieshan Li)
+
+---
+
+Fortunately, prescribed-time control (PTC) was first proposed by Song et al. [21]. PTC differs from finite-time and fixed-time control in that the preset settling time is not related to the initial values of the system. Building on [21], in [22], the convergence rate can be predetermined as needed, and a general method for constructing the time-varying rate function was provided. In [23], a novel time-varying constraint function was devised to guarantee that the system remains operational beyond the prescribed time, leading to a global result. In particular, a PTC-based HiTL control scheme was developed in [13] to realize cluster consensus within a given time. However, to the best of the authors' knowledge, a bipartite consensus control scheme considering both optimal performance and prescribed-time performance under the HiTL framework has not been fully explored, which motivates our research.
+
+Driven by these observations, this paper focuses on investigating the performance-based HiTL optimal bipartite consensus control problem. The main contributions are summarized below.
+
+(1) Unlike the autonomous leader described in [4]-[6] which lacked intelligent decision-making, this paper aims to improve the security, stability, and emergency response capabilities of the system by designing the leader of the MASs to be non-autonomous, where the time-varying control input is governed by a human operator.
+
+(2) Compared with the existing optimal results for MASs in [19], [20], to realize both optimal performance and prescribed-time performance, a unified design framework combining PTC and the RL method is proposed, where the settling time and accuracy can be preset independently of the initial values.
+
+The structure of this paper is given below. In Section II, the considered system and some assumptions are given. In Section III, the main results including the PTC performance function and optimal controller are designed. In Section IV, the convergence analysis is provided. The simulation results are given in Section V. Finally, the conclusion is presented in Section VI.
+
+## II. Problem Formulation and Preliminaries
+
+## A. Signed Communication Topologies
+
+The structurally balanced bipartition communication topology containing $N$ followers is represented by a directed graph $\mathcal{G} = \{ \mathcal{V},\varepsilon ,\mathcal{A}\}$ , where $\mathcal{V} = \left\{ {{\mathcal{V}}_{1},{\mathcal{V}}_{2},\cdots ,{\mathcal{V}}_{N}}\right\}$ represents the vertex set, which is divided into the cooperative set ${\mathcal{V}}_{\alpha }$ and competitive set ${\mathcal{V}}_{\beta }$ such that ${\mathcal{V}}_{\alpha } \cap {\mathcal{V}}_{\beta } = \varnothing$ and ${\mathcal{V}}_{\alpha } \cup {\mathcal{V}}_{\beta } = \mathcal{V}$ . $\varepsilon \subseteq \mathcal{V} \times \mathcal{V}$ represents the edge set of $N$ followers. Let $\mathcal{A} = \left\lbrack {a}_{ij}\right\rbrack \in {\mathbb{R}}^{N \times N}$ be the signed weight matrix, where ${a}_{ij} > 0$ if ${\mathcal{V}}_{i},{\mathcal{V}}_{j} \in {\mathcal{V}}_{m}, m \in \{ \alpha ,\beta \}$ and ${a}_{ij} < 0$ if ${\mathcal{V}}_{i} \in {\mathcal{V}}_{m},{\mathcal{V}}_{j} \in {\mathcal{V}}_{n}, m \neq n, m, n \in \{ \alpha ,\beta \}$ . The neighbor set of the $i$ th follower is defined as ${\mathcal{N}}_{i} = \left\{ {j \in \mathcal{V} : {a}_{ij} \neq 0}\right\}$ . Define $\mathcal{L} = \mathcal{D} - \mathcal{A} \in {\mathbb{R}}^{N \times N}$ as the Laplacian matrix of $\mathcal{G}$ , where $\mathcal{D} = \operatorname{diag}\left( {{d}_{1},{d}_{2},\cdots ,{d}_{N}}\right) \in {\mathbb{R}}^{N \times N}$ denotes the degree matrix with ${d}_{i} = \mathop{\sum }\limits_{{j = 1}}^{N}\left| {a}_{ij}\right|$ .
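A small sketch of the signed-graph quantities defined above: the degree matrix $\mathcal{D}$ with ${d}_{i} = \sum_{j} |{a}_{ij}|$ and the Laplacian $\mathcal{L} = \mathcal{D} - \mathcal{A}$. The 3-follower signed adjacency matrix in the usage example is hypothetical.

```python
# Degree matrix D with d_i = sum_j |a_ij| and signed Laplacian
# L = D - A for a signed weight matrix A (negative entries encode
# competitive links, positive entries cooperative links).

def signed_laplacian(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        d_i = sum(abs(a) for a in A[i])   # degree uses |a_ij|
        for j in range(n):
            L[i][j] = (d_i if i == j else 0.0) - A[i][j]
    return L
```

For example, with followers 1 and 2 cooperative and follower 3 competitive with follower 1, `A = [[0, 1, -1], [1, 0, 0], [-1, 0, 0]]` gives degrees $(2, 1, 1)$.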
+
+The augmented graph consisting of one leader and $N$ followers is denoted as $\widetilde{\mathcal{G}} = \{ \widetilde{\mathcal{V}},\widetilde{\varepsilon }\}$ , in which $\widetilde{\mathcal{V}} = \left\{ {{\mathcal{V}}_{0},{\mathcal{V}}_{1},{\mathcal{V}}_{2},\cdots ,{\mathcal{V}}_{N}}\right\}$ and $\widetilde{\varepsilon } \subseteq \widetilde{\mathcal{V}} \times \widetilde{\mathcal{V}}$ . Let $\mathcal{B} = \operatorname{diag}\left\{ {\left| {b}_{1}\right| ,\left| {b}_{2}\right| ,\cdots ,\left| {b}_{N}\right| }\right\} \in {\mathbb{R}}^{N \times N}$ , where ${b}_{i} \neq 0$ indicates that the information of the leader is available to the $i$ th node, with ${b}_{i} > 0$ representing a cooperative relation and ${b}_{i} < 0$ a competitive relation.
+
+## B. Problem Formulation
+
+Assume that the nonlinear MAS is composed of $N\left( { \geq 2}\right)$ followers and one leader. The dynamics model of $i$ th follower is provided as
+
+$$
+{\dot{x}}_{i} = {f}_{i}\left( {x}_{i}\right) + {g}_{i}\left( {x}_{i}\right) {u}_{i}, i = 1,2,\cdots , N \tag{1}
+$$
+
+where ${x}_{i}\left( t\right) \in {\mathbb{R}}^{n}$ denotes state, ${u}_{i}\left( t\right) \in {\mathbb{R}}^{m}$ is control input, ${f}_{i}\left( {x}_{i}\right) \in {\mathbb{R}}^{n}$ is internal dynamics and ${g}_{i}\left( {x}_{i}\right) \in {\mathbb{R}}^{n \times m}$ is input dynamics.
+
+Next, the dynamics of the human-manipulated leader is given as
+
+$$
+{\dot{x}}_{0}^{h} = {f}_{0}^{h}\left( {x}_{0}^{h}\right) + {u}_{0}^{h}, \tag{2}
+$$
+
+where ${x}_{0}^{h}\left( t\right) \in {\mathbb{R}}^{n}$ denotes the state, ${u}_{0}^{h}\left( t\right) \in {\mathbb{R}}^{m}$ is the nonzero control input sent by the human operator to the leader, and ${f}_{0}^{h}\left( {x}_{0}^{h}\right) \in {\mathbb{R}}^{n}$ represents the internal dynamics.
+
+The following assumptions and lemma are imposed.
+
+Assumption 1. [19] The signed graph $\mathcal{G}$ has a directed spanning tree.
+
+Assumption 2. [24] The input of human operator always makes the leader (2) stable.
+
+Lemma 1. [25]: The FLS can estimate a nonlinear continuous function $f\left( \mathfrak{x}\right) \in \mathbb{R}$ on a compact set ${\Omega }_{f} \in {\mathbb{R}}^{n}$ as
+
+$$
+\mathop{\sup }\limits_{{\mathfrak{x} \in {\Omega }_{f}}}\left| {f\left( \mathfrak{x}\right) - {\Theta }^{T}\phi \left( \mathfrak{x}\right) }\right| \leq b \tag{3}
+$$
+
+with $b > 0$ .
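As a minimal sketch of Lemma 1, the snippet below builds a single-input FLS with normalized Gaussian membership functions and uses it to approximate $f(x) = \sin(x)$ on a compact set. The centers, width, and the weight choice ${\Theta}_{k} = f({c}_{k})$ are hypothetical design choices, not taken from [25].

```python
import math

# Single-input FLS with normalized Gaussian membership functions.
# Output is Theta^T phi(x) with phi a normalized basis, as in Lemma 1.

def fls(x, centers, weights, width=0.2):
    mem = [math.exp(-(((x - c) / width) ** 2)) for c in centers]
    total = sum(mem)
    return sum(w * m for w, m in zip(weights, mem)) / total

centers = [0.1 * k for k in range(32)]       # grid over [0, 3.1]
weights = [math.sin(c) for c in centers]     # Theta_k chosen as f(c_k)
```

On the interior of the grid the approximation error stays small, consistent with the uniform bound (3).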
+
+## III. Main Results
+
+## A. Prescribed-Time Function and Error Transformation
+
+To achieve prescribed-time (PT) performance for MASs, the PT performance function $\vartheta \left( t\right)$ is given as
+
+$$
+\vartheta \left( t\right) = \left\{ \begin{array}{ll} \iota {e}^{-\beta {\left( \frac{{T}_{r}}{{T}_{r} - t}\right) }^{h}} + {\vartheta }_{{T}_{r}}, & 0 \leq t < {T}_{r} \\ {\vartheta }_{{T}_{r}}, & t \geq {T}_{r} \end{array}\right. \tag{4}
+$$
+
+where $h > 0,\iota > 0$ and $\beta > 0$ are design parameters, and ${T}_{r}$ with $0 < {T}_{r} < \infty$ and ${\vartheta }_{{T}_{r}}$ with $0 < {\vartheta }_{{T}_{r}} < \infty$ represent the user-defined settling time and steady-state tracking accuracy, respectively.
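The envelope (4) can be sketched as follows, reading the $T$ in its exponent as the user-defined settling time ${T}_{r}$; all parameter values below are hypothetical.

```python
import math

# Sketch of the prescribed-time performance function (4). The T in
# the exponent is read as the settling time T_r; parameter values
# are hypothetical.

def theta(t, iota=1.0, beta=1.0, h=2.0, T_r=2.0, theta_Tr=0.05):
    """Decreasing envelope that reaches theta_Tr at the preset T_r."""
    if t >= T_r:
        return theta_Tr
    return iota * math.exp(-beta * (T_r / (T_r - t)) ** h) + theta_Tr
```

The envelope decays toward the steady-state accuracy and stays exactly at ${\vartheta}_{{T}_{r}}$ for all $t \geq {T}_{r}$, independent of the initial conditions.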
+
+Construct the bipartite consensus error as ${e}_{i} =$ $\mathop{\sum }\limits_{{j = 1}}^{N}\left| {a}_{ij}\right| \left( {{x}_{i} - \operatorname{sign}\left( {a}_{ij}\right) {x}_{j}}\right) + \left| {b}_{i}\right| \left( {{x}_{i} - \operatorname{sign}\left( {b}_{i}\right) {x}_{0}^{h}}\right) ,{e}_{i} =$ ${\left\lbrack {e}_{i,1},\cdots ,{e}_{i, n}\right\rbrack }^{T} \in {\mathbb{R}}^{n}$ and adopt the error transformation function as
+
+$$
+{\varrho }_{i,\imath } = \tan \left( {\frac{\pi }{2}\frac{{e}_{i,\imath }}{\vartheta }}\right) ,\imath = 1,\cdots , n, \tag{5}
+$$
+
+where $\left| {{e}_{i,\iota }\left( 0\right) }\right| < \vartheta \left( 0\right)$ .
+
+Based on (5), it yields
+
+$$
+{e}_{i,\imath } = \frac{2\vartheta }{\pi }\arctan \left( {\varrho }_{i,\imath }\right) ,\imath = 1,\cdots , n, i = 1,\cdots , N. \tag{6}
+$$
+
+Remark 1. From (5), the inequality $- \vartheta \leq {e}_{i,\imath } \leq \vartheta ,\forall t \geq 0$ holds. Combined with the definition in (4), it further follows that $- {\vartheta }_{{T}_{r}} \leq {e}_{i,\imath } \leq {\vartheta }_{{T}_{r}},\forall t \geq {T}_{r}$ provided that ${\varrho }_{i,\imath }$ is bounded, which means the PT performance of ${e}_{i}$ can be ensured.
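A minimal sketch of the error transformation (5) and its inverse (6); the tan map is only defined while $|e| < \vartheta(t)$, matching the initial condition stated below (5).

```python
import math

# Error transformation (5) and its inverse (6) for one scalar
# component, with theta_t the current value of the envelope.

def transform(e, theta_t):
    """rho = tan(pi/2 * e / theta_t), Equation (5)."""
    return math.tan(0.5 * math.pi * e / theta_t)

def inverse_transform(rho, theta_t):
    """e = (2*theta_t/pi) * arctan(rho), Equation (6)."""
    return (2.0 * theta_t / math.pi) * math.atan(rho)
```

Because arctan is bounded, any bounded transformed error keeps $|e|$ strictly inside the envelope, which is exactly the mechanism invoked in Remark 1.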
+
+## B. Optimal Control Scheme Design
+
+Define the performance index function as
+
+$$
+{J}_{i} = {\int }_{t}^{\infty }\left( {{e}_{i}^{T}{\mathcal{Q}}_{i}{e}_{i} + {u}_{i}^{T}{\mathcal{R}}_{i}{u}_{i}}\right) {d\tau } \tag{7}
+$$
+
+$$
+= {\int }_{t}^{\infty }\left( {{\left( \frac{2\vartheta }{\pi }{\mathcal{A}}_{i}\right) }^{T}{\mathcal{Q}}_{i}\left( {\frac{2\vartheta }{\pi }{\mathcal{A}}_{i}}\right) + {u}_{i}^{T}{\mathcal{R}}_{i}{u}_{i}}\right) {d\tau },
+$$
+
+where ${\mathcal{Q}}_{i}$ and ${\mathcal{R}}_{i}$ are symmetric positive definite matrices with suitable dimensions, ${\mathcal{A}}_{i} = {\left\lbrack {\mathcal{A}}_{i,1},\cdots ,{\mathcal{A}}_{i, n}\right\rbrack }^{T} =$ ${\left\lbrack \arctan \left( {\varrho }_{i,1}\right) ,\cdots ,\arctan \left( {\varrho }_{i, n}\right) \right\rbrack }^{T}$ .
+
+Taking the time derivative of ${\mathcal{A}}_{i,\imath }$ , one has
+
+$$
+{\dot{\mathcal{A}}}_{i,\iota } = \frac{1}{1 + {\varrho }_{i,\iota }^{2}}{\chi }_{i,\iota }\left( {{\dot{e}}_{i,\iota } - {\nu }_{i,\iota }}\right) , \tag{8}
+$$
+
+where ${\chi }_{i,\imath } = \frac{\pi }{{2\vartheta }{\cos }^{2}\left( {\frac{\pi }{2}\frac{{e}_{i,\imath }}{\vartheta }}\right) },{\nu }_{i,\imath } = \frac{{e}_{i,\imath }\dot{\vartheta }}{\vartheta },{\dot{e}}_{i} = {\Gamma }_{i}\left( {{f}_{i} + {g}_{i}{u}_{i}}\right) -$ $\mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\dot{x}}_{j} - {b}_{i}{\dot{x}}_{0}^{h}$ and ${\Gamma }_{i} = {d}_{i} + \left| {b}_{i}\right|$ .
+
+Then, define the Hamiltonian function as
+
+$$
+{H}_{i}\left( {{\mathcal{A}}_{i},\vartheta ,{u}_{i},\frac{\partial {J}_{i}}{\partial {\mathcal{A}}_{i}},\frac{\partial {J}_{i}}{\partial \vartheta }}\right) = {\left( \frac{2\vartheta }{\pi }{\mathcal{A}}_{i}\right) }^{T}{\mathcal{Q}}_{i}\left( {\frac{2\vartheta }{\pi }{\mathcal{A}}_{i}}\right)
+$$
+
+$$
++ {u}_{i}^{T}{\mathcal{R}}_{i}{u}_{i} + \frac{\partial {J}_{i}}{\partial {\mathcal{A}}_{i}}\left\lbrack {{\bar{\chi }}_{i}\left( {{\dot{e}}_{i} - {\nu }_{i}}\right) }\right\rbrack + \frac{\partial {J}_{i}}{\partial \vartheta }\frac{\partial \vartheta }{\partial t} \tag{9}
+$$
+
+$$
+= {\left( \frac{2\vartheta }{\pi }{\mathcal{A}}_{i}\right) }^{T}{\mathcal{Q}}_{i}\left( {\frac{2\vartheta }{\pi }{\mathcal{A}}_{i}}\right) + {u}_{i}^{T}{\mathcal{R}}_{i}{u}_{i} + \frac{\partial {J}_{i}}{\partial {\varrho }_{i}}\left\lbrack {{\chi }_{i}\left( {{\dot{e}}_{i} - {\nu }_{i}}\right) }\right\rbrack
+$$
+
+$$
++ \frac{\partial {J}_{i}}{\partial \vartheta }\frac{\partial \vartheta }{\partial t},
+$$
+
+where ${\bar{\chi }}_{i} = \operatorname{diag}\left\{ {\frac{{\chi }_{i,1}}{1 + {\varrho }_{i,1}^{2}},\cdots ,\frac{{\chi }_{i, n}}{1 + {\varrho }_{i, n}^{2}}}\right\} ,{\nu }_{i} = \left\lbrack {{\nu }_{i,1},\cdots ,{\nu }_{i, n}}\right\rbrack$ and ${\chi }_{i} = \operatorname{diag}\left\{ {{\chi }_{i,1},\cdots ,{\chi }_{i, n}}\right\}$ .
+
+The corresponding HJB equation is given as
+
+$$
+\mathop{\min }\limits_{{u}_{i}}{H}_{i}\left( {{\mathcal{A}}_{i},\vartheta ,{u}_{i}^{ * },\frac{\partial {J}_{i}^{ * }}{\partial {\mathcal{A}}_{i}},\frac{\partial {J}_{i}^{ * }}{\partial \vartheta }}\right) = 0. \tag{10}
+$$
+
+Differentiating (10) with respect to ${u}_{i}$ , one has
+
+$$
+{u}_{i}^{ * } = - \frac{{\Gamma }_{i}}{2}{\mathcal{R}}_{i}^{-1}{g}_{i}^{T}{\chi }_{i}^{T}\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}}. \tag{11}
+$$
+
+Substituting (11) into (10) yields
+
+$$
+{\left( \frac{2\vartheta }{\pi }{\mathcal{A}}_{i}\right) }^{T}{\mathcal{Q}}_{i}\left( {\frac{2\vartheta }{\pi }{\mathcal{A}}_{i}}\right) + \frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}}\left\lbrack {{\chi }_{i}\left( {{\Gamma }_{i}{f}_{i} - \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\dot{x}}_{i} - {b}_{i}{\dot{x}}_{0}^{h}}\right. }\right.
+$$
+
+$$
+\left. \left. {-{\nu }_{i}}\right) \right\rbrack + \frac{\partial {J}_{i}^{ * }}{\partial \vartheta }\frac{\partial \vartheta }{\partial t} - \frac{{\Gamma }_{i}^{2}}{4}\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}^{T}}{g}_{i}{\chi }_{i}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{T}{g}_{i}^{T}\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}} = 0.
+$$
+
+Inspired by [26], $\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}}$ can be decomposed as
+
+$$
+\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}} = \frac{2{k}_{i}}{{\Gamma }_{i}}{\chi }_{i}^{-2}{\varrho }_{i} + \frac{2}{{\Gamma }_{i}}{\chi }_{i}^{-2}{\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right) + \frac{1}{{\Gamma }_{i}}{\chi }_{i}^{-2}{\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right) , \tag{12}
+$$
+
+where ${k}_{i} > 0,{\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right) = {\mathcal{R}}_{i}{\chi }_{i}\left( {{f}_{i}\left( {x}_{i}\right) - {\dot{x}}_{0}^{h} - {o}^{-1}{\nu }_{i}}\right)$ with $o = {\lambda }_{\max }\left( {\mathcal{L} + \mathcal{B}}\right) ,{\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right) = - 2{k}_{i}{\varrho }_{i}^{2} - 2{\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right) + {k}_{i}{\chi }_{i}^{2}\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}}.$
+
+Substituting (12) into (11), one has
+
+$$
+{u}_{i}^{ * } = - {k}_{i}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}{\varrho }_{i} - {\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}{\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right)
+$$
+
+$$
+- \frac{1}{2}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}{\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right) . \tag{13}
+$$
+
+## C. PI Algorithm and FLSs-Based Implementation
+
+Obviously, the solution of the HJB equation cannot be obtained analytically. Therefore, the policy iteration (PI) approach is given in Algorithm 1 to find the optimal result.
+
+Algorithm 1: PI Algorithm for Solving the PT Optimal Consensus Control Policy
+
+1. Step 1: Initialization. Give initial control protocols ${u}_{i}^{\left( 0\right) },\forall i$ , and set $l = 0$ .
+2. Step 2: Policy evaluation. Solve the cost function ${J}_{i}^{\left( l\right) }$ from ${H}_{i}\left( {{\mathcal{A}}_{i},\vartheta ,{u}_{i}^{\left( l\right) },\frac{\partial {J}_{i}^{\left( l\right) }}{\partial {\mathcal{A}}_{i}},\frac{\partial {J}_{i}^{\left( l\right) }}{\partial \vartheta }}\right) = 0$ .
+3. Step 3: Policy improvement. Update the control input ${u}_{i}^{\left( l + 1\right) }$ as in Eq. (13).
+4. Step 4: If $\begin{Vmatrix}{{J}_{i}^{\left( l + 1\right) } - {J}_{i}^{\left( l\right) }}\end{Vmatrix} \leq \aleph$ with the predefined parameter $\aleph > 0$ , stop; otherwise, set $l = l + 1$ and return to Step 2.
+
+The convergence and optimality of Algorithm 1 have been proved in [27] and are omitted here.
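As a toy illustration of the evaluate-then-improve pattern in Algorithm 1, consider the scalar linear system $\dot{x} = ax + bu$ with cost $\int (q{x}^{2} + r{u}^{2})\,dt$, where each step has a closed form: policy evaluation solves a scalar Lyapunov equation and policy improvement sets $u = -(bP/r)x$, so the iterates can be checked against the Riccati solution. The numbers below are hypothetical; the paper's setting is nonlinear and approximates these steps with FLSs instead.

```python
# Policy iteration for the scalar LQR problem dx/dt = a*x + b*u,
# cost = integral of (q*x^2 + r*u^2) dt, under policy u = -K*x.

def policy_iteration(a=1.0, b=1.0, q=1.0, r=1.0, K0=2.0, tol=1e-10):
    K = K0                           # initial stabilizing gain (b*K0 > a)
    P_prev = float("inf")
    while True:
        # Policy evaluation: solve 2*(a - b*K)*P + q + r*K**2 = 0
        P = (q + r * K * K) / (2.0 * (b * K - a))
        if abs(P - P_prev) <= tol:   # Step 4 stopping criterion
            return P
        P_prev = P
        K = b * P / r                # policy improvement
```

For these default values the iterates converge to the algebraic Riccati solution $P^{*} = 1 + \sqrt{2}$ within a handful of iterations.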
+
+In view of the unknown terms ${\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right)$ and ${\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right)$ in (13), FLSs are used to approximate these terms as
+
+$$
+{\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right) = {\omega }_{{\mathcal{F}}_{i}}^{T}{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) + {\epsilon }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) , \tag{14}
+$$
+
+$$
+{\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right) = {\omega }_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + {\epsilon }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) , \tag{15}
+$$
+
+where ${\omega }_{{\mathcal{F}}_{i}} \in {\mathbb{R}}^{{h}_{c1} \times n}$ and ${\omega }_{{\mathcal{J}}_{i}} \in {\mathbb{R}}^{{h}_{c2} \times n}$ represent the ideal weight matrices, with ${h}_{c1}$ and ${h}_{c2}$ being the numbers of fuzzy rules; ${\phi }_{{\mathcal{F}}_{i}} \in {\mathbb{R}}^{{h}_{c1}}$ and ${\phi }_{{\mathcal{J}}_{i}} \in {\mathbb{R}}^{{h}_{c2}}$ are fuzzy basis functions; ${\epsilon }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right)$ and ${\epsilon }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right)$ denote bounded approximation errors.
+
+Thus, (13) becomes
+
+$$
+{u}_{i}^{ * } = - {k}_{i}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}{\varrho }_{i} - {\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}\left( {{\omega }_{{\mathcal{F}}_{i}}^{T}{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) + {\epsilon }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right)
+$$
+
+$$
+- \frac{1}{2}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}\left( {{\omega }_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + {\epsilon }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right) .
+$$
+
+However, since ${\omega }_{{\mathcal{F}}_{i}}$ and ${\omega }_{{\mathcal{J}}_{i}}$ are unknown, the estimated forms of (14) and (15) are
+
+$$
+{\widehat{\mathcal{F}}}_{i}\left( {\mathcal{X}}_{i}\right) = {\widehat{\omega }}_{{\mathcal{F}}_{i}}^{T}{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) , \tag{16}
+$$
+
+$$
+{\widehat{\mathcal{J}}}_{i}\left( {\mathcal{X}}_{i}\right) = {\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) , \tag{17}
+$$
+
+where ${\widehat{\omega }}_{{\mathcal{F}}_{i}} \in {\mathbb{R}}^{{h}_{c1} \times n}$ and ${\widehat{\omega }}_{{\mathcal{J}}_{i}} \in {\mathbb{R}}^{{h}_{c2} \times n}$ represent estimated weight matrices.
+
+According to (16) and (17), one has
+
+$$
+{\widehat{u}}_{i}^{ * } = - {k}_{i}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}{\varrho }_{i} - {\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}\left( {{\widehat{\omega }}_{{\mathcal{F}}_{i}}^{T}{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right)
+$$
+
+$$
+- \frac{1}{2}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}\left( {{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right) . \tag{18}
+$$
+
+The updating laws are constructed as
+
+$$
+{\dot{\widehat{\omega }}}_{{\mathcal{F}}_{i}} = {\mathcal{C}}_{i}\left( {o{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) {\varrho }_{i}^{T}{\mathcal{R}}_{i}^{-1} - {r}_{{\mathcal{F}}_{i}}{\widehat{\omega }}_{{\mathcal{F}}_{i}}}\right) , \tag{19}
+$$
+
+$$
+{\dot{\widehat{\omega }}}_{{\mathcal{J}}_{i}} = - {r}_{{\mathcal{J}}_{i}}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}}, \tag{20}
+$$
+
+where ${\mathcal{C}}_{i} \in {\mathbb{R}}^{{h}_{c1} \times {h}_{c1}}$ is a positive-definite matrix, and ${r}_{{\mathcal{F}}_{i}} > 0$ , ${r}_{{\mathcal{J}}_{i}} > 0$ , $r > 0$ are design parameters.
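A minimal one-dimensional sketch of a sigma-modification adaptive law in the spirit of (19) is given below; the gains, the signals, and the scalar setting are illustrative assumptions. The leakage term $-{r}_{{\mathcal{F}}_{i}}{\widehat{\omega }}_{{\mathcal{F}}_{i}}$ trades a small steady-state bias for guaranteed boundedness of the weight estimate.

```python
import math

# Euler integration of a scalar sigma-modified adaptive law,
#   w_hat' = c * (o * phi * e - r_F * w_hat),  e = (w_true - w_hat) * phi,
# where w_true * phi(t) stands in for the unknown term. All numbers are
# illustrative assumptions, not the paper's values.
w_true, c, r_F, o = 2.0, 5.0, 0.05, 1.0
dt, T = 1e-3, 20.0

w_hat, t = 0.0, 0.0
while t < T:
    phi_t = 1.0 / (1.0 + math.exp(-math.sin(t)))  # bounded basis signal
    e = (w_true - w_hat) * phi_t                   # approximation error
    w_hat += dt * c * (o * phi_t * e - r_F * w_hat)
    t += dt
# The leakage keeps w_hat bounded; it settles near, but not exactly at, w_true.
```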
+
+## IV. STABILITY ANALYSIS
+
+Theorem 1. Consider the MAS consisting of the followers (1) and the leader (2) under Assumptions 1-3. By choosing ${k}_{i} > \frac{3}{4}$ and adopting the optimal control input (18) together with the adaptive laws (19) and (20), the consensus error converges to the prescribed accuracy within the prescribed time.
+
+Proof. Consider the Lyapunov function candidate
+
+$$
+V = \frac{1}{2}{\varrho }^{T}\varrho + \frac{1}{2}\mathop{\sum }\limits_{{i = 1}}^{N}\left( {{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\mathcal{C}}_{i}^{-1}{\widetilde{\omega }}_{{\mathcal{F}}_{i}} + {\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}}\right) \tag{21}
+$$
+
+where $\varrho = {\left\lbrack {\varrho }_{1}^{T},\cdots ,{\varrho }_{N}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{Nn}$ , and the estimation errors are ${\widetilde{\omega }}_{{\mathcal{F}}_{i}} = {\omega }_{{\mathcal{F}}_{i}} - {\widehat{\omega }}_{{\mathcal{F}}_{i}}$ and ${\widetilde{\omega }}_{{\mathcal{J}}_{i}} = {\omega }_{{\mathcal{J}}_{i}} - {\widehat{\omega }}_{{\mathcal{J}}_{i}}$ . Invoking (5), (19) and (20) yields
+
+$$
+\dot{V} = {\varrho }^{T}\left\lbrack {\chi \left( {\mathcal{L} + \mathcal{B}}\right) \dot{e} - {\chi \nu }}\right\rbrack - \mathop{\sum }\limits_{{i = 1}}^{N}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}\left( {o{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) {\varrho }_{i}^{T}{\mathcal{R}}_{i}^{-1} - {r}_{{\mathcal{F}}_{i}}{\widehat{\omega }}_{{\mathcal{F}}_{i}}}\right) + \mathop{\sum }\limits_{{i = 1}}^{N}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}{r}_{{\mathcal{J}}_{i}}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}}
+$$
+
+$$
+\leq \mathop{\sum }\limits_{{i = 1}}^{N}{\varrho }_{i}^{T}o\left( {-{k}_{i}{\mathcal{R}}_{i}^{-1}{\varrho }_{i} - {\mathcal{R}}_{i}^{-1}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) + {\mathcal{R}}_{i}^{-1}{\epsilon }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) - \frac{1}{2}{\mathcal{R}}_{i}^{-1}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right) - \mathop{\sum }\limits_{{i = 1}}^{N}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}\left( {o{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) {\varrho }_{i}^{T}{\mathcal{R}}_{i}^{-1} - {r}_{{\mathcal{F}}_{i}}{\widehat{\omega }}_{{\mathcal{F}}_{i}}}\right) + \mathop{\sum }\limits_{{i = 1}}^{N}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}{r}_{{\mathcal{J}}_{i}}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}}
+$$
+
+$$
+\leq \mathop{\sum }\limits_{{i = 1}}^{N}{\varrho }_{i}^{T}o\left( {-{k}_{i}{\mathcal{R}}_{i}^{-1}{\varrho }_{i} + {\mathcal{R}}_{i}^{-1}{\epsilon }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) - \frac{{\mathcal{R}}_{i}^{-1}}{2}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right) + \mathop{\sum }\limits_{{i = 1}}^{N}{r}_{{\mathcal{F}}_{i}}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\widehat{\omega }}_{{\mathcal{F}}_{i}} + \mathop{\sum }\limits_{{i = 1}}^{N}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}{r}_{{\mathcal{J}}_{i}}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}}. \tag{22}
+$$
+
+Using Young's inequality, we have
+
+$$
+o{\varrho }_{i}^{T}{\mathcal{R}}_{i}^{-1}{\epsilon }_{{\mathcal{F}}_{i}} \leq \frac{o}{2}{\mathcal{R}}_{i}^{-1}{\begin{Vmatrix}{\varrho }_{i}\end{Vmatrix}}^{2} + \frac{o}{2}{\mathcal{R}}_{i}^{-1}{\begin{Vmatrix}{\epsilon }_{{\mathcal{F}}_{i}}\end{Vmatrix}}^{2}, \tag{23}
+$$
+
+$$
+- \frac{o{\mathcal{R}}_{i}^{-1}}{2}{\varrho }_{i}^{T}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) \leq \frac{o{\mathcal{R}}_{i}^{-1}}{4}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}} + \frac{o{\mathcal{R}}_{i}^{-1}}{4}{\begin{Vmatrix}{\varrho }_{i}\end{Vmatrix}}^{2}, \tag{24}
+$$
+
+$$
+{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\widehat{\omega }}_{{\mathcal{F}}_{i}} \leq - \frac{1}{2}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\widetilde{\omega }}_{{\mathcal{F}}_{i}} + \frac{1}{2}{\omega }_{{\mathcal{F}}_{i}}^{T}{\omega }_{{\mathcal{F}}_{i}}, \tag{25}
+$$
+
+$$
+{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}} \leq - \frac{1}{2}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widetilde{\omega }}_{{\mathcal{J}}_{i}} + \frac{1}{2}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}}. \tag{26}
+$$
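Bounds of this completion-of-squares type can be spot-checked numerically. The fragment below verifies (25) on random vectors; after substituting $\widehat{\omega } = \omega - \widetilde{\omega }$, the gap between the two sides is exactly $\frac{1}{2}{\begin{Vmatrix}\widehat{\omega }\end{Vmatrix}}^{2} \geq 0$.

```python
import numpy as np

# Spot-check of (25): with w_tilde = w - w_hat,
#   w_tilde^T w_hat <= -1/2 * w_tilde^T w_tilde + 1/2 * w^T w.
rng = np.random.default_rng(0)
violations = 0
for _ in range(1000):
    w = rng.normal(size=6)
    w_hat = rng.normal(size=6)
    w_tilde = w - w_hat
    lhs = w_tilde @ w_hat
    rhs = -0.5 * (w_tilde @ w_tilde) + 0.5 * (w @ w)
    if lhs > rhs + 1e-12:        # should never happen
        violations += 1
```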
+
+Calculating (22) by bringing (23)-(26), one has
+
+$$
+\dot{V} \leq - \mathop{\sum }\limits_{{i = 1}}^{N}o{\mathcal{R}}_{i}^{-1}\left( {{k}_{i} - \frac{3}{4}}\right) {\begin{Vmatrix}{\varrho }_{i}\end{Vmatrix}}^{2} - \mathop{\sum }\limits_{{i = 1}}^{N}\frac{{r}_{{\mathcal{F}}_{i}}}{2}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\widetilde{\omega }}_{{\mathcal{F}}_{i}} - \mathop{\sum }\limits_{{i = 1}}^{N}\frac{1}{2}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widetilde{\omega }}_{{\mathcal{J}}_{i}} + \Lambda
+$$
+
+$$
+\leq - \frac{{\kappa }_{1}}{2}\mathop{\sum }\limits_{{i = 1}}^{N}{\begin{Vmatrix}{\varrho }_{i}\end{Vmatrix}}^{2} - \frac{{\kappa }_{2}}{2}\mathop{\sum }\limits_{{i = 1}}^{N}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\mathcal{C}}_{i}^{-1}{\widetilde{\omega }}_{{\mathcal{F}}_{i}} - \frac{{\kappa }_{3}}{2}\mathop{\sum }\limits_{{i = 1}}^{N}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}{\widetilde{\omega }}_{{\mathcal{J}}_{i}} + \Lambda \leq - {\kappa V} + \Lambda , \tag{27}
+$$
+
+where $\Lambda = \mathop{\sum }\limits_{{i = 1}}^{N}\frac{o}{2}{\mathcal{R}}_{i}^{-1}{\begin{Vmatrix}{\epsilon }_{{\mathcal{F}}_{i}}\end{Vmatrix}}^{2} + \mathop{\sum }\limits_{{i = 1}}^{N}\frac{o{\mathcal{R}}_{i}^{-1}}{4}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}} + \mathop{\sum }\limits_{{i = 1}}^{N}\frac{o{\mathcal{R}}_{i}^{-1}}{4}{\begin{Vmatrix}{\varrho }_{i}\end{Vmatrix}}^{2} + \mathop{\sum }\limits_{{i = 1}}^{N}\frac{{r}_{{\mathcal{F}}_{i}}}{2}{\omega }_{{\mathcal{F}}_{i}}^{T}{\omega }_{{\mathcal{F}}_{i}} + \mathop{\sum }\limits_{{i = 1}}^{N}\frac{1}{2}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}}$ , ${\kappa }_{1} = \mathop{\min }\limits_{{i = 1,\cdots , N}}\left\{ {2o}{\mathcal{R}}_{i}^{-1}\left( {{k}_{i} - \frac{3}{4}}\right) \right\}$ , ${\kappa }_{2} = \mathop{\min }\limits_{{i = 1,\cdots , N}}\left\{ \frac{{r}_{{\mathcal{F}}_{i}}}{{\lambda }_{\max }\left( {\mathcal{C}}_{i}^{-1}\right) }\right\}$ , ${\kappa }_{3} = \mathop{\min }\limits_{{i = 1,\cdots , N}}\left\{ {{r}_{{\mathcal{J}}_{i}}{\lambda }_{\min }\left( {\phi }_{i}\right) }\right\}$ , $\kappa = \min \left\{ {{\kappa }_{1},{\kappa }_{2},{\kappa }_{3}}\right\}$ , and ${\lambda }_{\min }\left( {\phi }_{i}\right)$ is the minimal eigenvalue of ${\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right)$ .
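The final bound $\dot{V} \leq - {\kappa V} + \Lambda$ implies, by the comparison lemma, $V\left( t\right) \leq V\left( 0\right) {e}^{-{\kappa t}} + \left( {\Lambda /\kappa }\right) \left( {1 - {e}^{-{\kappa t}}}\right)$ , i.e., $V$ converges to the residual set $\{ V \leq \Lambda /\kappa \}$ . The fragment below integrates the worst case (equality) with illustrative constants to make this visible.

```python
import math

# Worst case of (27): V' = -kappa*V + Lambda, integrated by Euler's method.
# kappa, Lambda, and V(0) are illustrative assumptions.
kappa, Lam, V0 = 2.0, 0.4, 10.0
dt, T = 1e-4, 6.0

V, t = V0, 0.0
while t < T:
    V += dt * (-kappa * V + Lam)
    t += dt

# Closed-form comparison bound and ultimate bound Lambda/kappa.
bound = V0 * math.exp(-kappa * T) + (Lam / kappa) * (1 - math.exp(-kappa * T))
residual = Lam / kappa
```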
+
+## V. SIMULATION
+
+A nonlinear MAS composed of four single-link robot arms (three followers and one human-controlled leader) is used to verify the effectiveness of the proposed control scheme. The model of each follower is given as [12]
+
+$$
+{J}_{i}{\ddot{q}}_{i} + {D}_{i}{\dot{q}}_{i} + {M}_{i}g{d}_{i}\sin \left( {q}_{i}\right) = {u}_{i},\; i = 1,2,3,
+$$
+
+where the physical parameters $g,{M}_{i},{D}_{i},{J}_{i}$ and ${d}_{i}$ can be found in [12]. The human command ${u}_{0}^{h}$ is set as
+
+$$
+{u}_{0}^{h} = \left\{ \begin{array}{ll} {0.3}{\sin }^{2}\left( t\right) , & 0 \leq t < {15} \\  0, & {15} \leq t < {30} \\  \sin \left( t\right) \cos \left( t\right) , & {30} \leq t \leq {50}. \end{array}\right.
+$$
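The Section V setup can be sketched in code: the piecewise human command ${u}_{0}^{h}$ above, plus one follower arm integrated with Euler's method under a simple PD law with gravity compensation. The arm parameters and the gains are illustrative assumptions (the paper takes the physical parameters from [12]), and the PD law merely stands in for the learned optimal controller.

```python
import math

def u0_h(t):
    """Piecewise human command signal from Section V."""
    if 0.0 <= t < 15.0:
        return 0.3 * math.sin(t) ** 2
    if 15.0 <= t < 30.0:
        return 0.0
    if 30.0 <= t <= 50.0:
        return math.sin(t) * math.cos(t)
    raise ValueError("t outside the simulated horizon [0, 50]")

# One follower arm J*q'' + D*q' + M*g*d*sin(q) = u under an assumed PD law.
J, D, M, g, d = 1.0, 2.0, 1.0, 9.8, 1.0      # assumed arm parameters
kp, kd, q_ref = 25.0, 10.0, 0.5              # assumed gains / setpoint

q, dq, t, dt = 0.0, 0.0, 0.0, 1e-3
while t < 10.0:
    grav = M * g * d * math.sin(q)
    u = -kp * (q - q_ref) - kd * dq + grav   # PD + gravity compensation
    ddq = (u - D * dq - grav) / J
    q, dq = q + dt * dq, dq + dt * ddq
    t += dt
# q settles at q_ref; in the paper the followers instead track the leader
# driven by u0_h.
```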
+
+The communication graph is shown below
+
+
+
+Fig. 1: Communication graph.
+
+From Fig. 1, the corresponding matrices are obtained as
+
+$$
+\mathcal{A} = \left\lbrack \begin{matrix} 0 & - 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{matrix}\right\rbrack ,\mathcal{L} = \left\lbrack \begin{matrix} 2 & 1 & - 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{matrix}\right\rbrack ,
+$$
+
+$\mathcal{B} = \operatorname{diag}\{ 1,0,0\} .$
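These matrices follow mechanically from the Section II definitions, with degree ${d}_{i} = \mathop{\sum }\limits_{j}\left| {a}_{ij}\right|$ and signed Laplacian $\mathcal{L} = \mathcal{D} - \mathcal{A}$ , which can be checked directly:

```python
import numpy as np

# Reconstructing the matrices of Fig. 1: follower 1 has a competitive edge
# (weight -1) and a cooperative edge (weight 1); only follower 1 sees the
# leader.
A = np.array([[0.0, -1.0, 1.0],
              [0.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])
Dm = np.diag(np.abs(A).sum(axis=1))   # degree matrix D, d_i = sum_j |a_ij|
L = Dm - A                            # signed Laplacian
B = np.diag([1.0, 0.0, 0.0])          # leader pinning matrix
```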
+
+For the PT performance function, select ${\vartheta }_{{T}_{r}} = {0.06}$ and ${T}_{r} = 3$ s. The initial state values of the followers and the leader are presented in Table I.
+
+TABLE I: Initial state values of followers and leader.
+
+| State | $i = 0$ | $i = 1$ | $i = 2$ | $i = 3$ |
+| --- | --- | --- | --- | --- |
+| ${x}_{i,1}\left( 0\right)$ | 1 | 0.8 | 0.5 | 0.8 |
+| ${x}_{i,2}\left( 0\right)$ | -1 | 0.8 | -0.5 | -0.8 |
+
+For the unknown term ${\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right)$ , the input ${\mathcal{X}}_{i} = {\left\lbrack {x}_{i},{x}_{0}^{h},{\dot{x}}_{0}^{h},\vartheta ,\dot{\vartheta }\right\rbrack }^{T}$ is defined over $\left\lbrack {-6,6}\right\rbrack$ . The center of the $\ell$ th fuzzy rule is chosen as ${\mathcal{X}}_{i}^{\ell } = {\left\lbrack \underset{5}{\underbrace{{\left\lbrack -6 - \ell , - 6 + \ell \right\rbrack }^{T},\cdots ,{\left\lbrack -6 - \ell , - 6 + \ell \right\rbrack }^{T}}}\right\rbrack }^{T}$ , and the Gaussian fuzzy basis functions are ${\phi }_{{\mathcal{F}}_{i}}^{\ell }\left( {\mathcal{X}}_{i}\right) = \exp \left( {-\frac{{\left( {\mathcal{X}}_{i} - {\mathcal{X}}_{i}^{\ell }\right) }^{T}\left( {{\mathcal{X}}_{i} - {\mathcal{X}}_{i}^{\ell }}\right) }{2}}\right)$ .
+
+For the unknown term ${\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right)$ , the input ${\mathcal{X}}_{i} = {\left\lbrack {x}_{i},{\varrho }_{i},{x}_{0}^{h},{\dot{x}}_{0}^{h},\vartheta ,\dot{\vartheta }\right\rbrack }^{T}$ is defined over $\left\lbrack {-6,6}\right\rbrack$ . The center of the $\ell$ th fuzzy rule is chosen as ${\mathcal{X}}_{i}^{\ell } = {\left\lbrack \underset{6}{\underbrace{{\left\lbrack -6 - \ell , - 6 + \ell \right\rbrack }^{T},\cdots ,{\left\lbrack -6 - \ell , - 6 + \ell \right\rbrack }^{T}}}\right\rbrack }^{T}$ , and ${\phi }_{{\mathcal{J}}_{i}}^{\ell }\left( {\mathcal{X}}_{i}\right) = \exp \left( {-\frac{{\left( {\mathcal{X}}_{i} - {\mathcal{X}}_{i}^{\ell }\right) }^{T}\left( {{\mathcal{X}}_{i} - {\mathcal{X}}_{i}^{\ell }}\right) }{2}}\right)$ .
+
+For the updating laws (19) and (20), ${\widehat{\omega }}_{{\mathcal{F}}_{1}}\left( 0\right) = {\widehat{\omega }}_{{\mathcal{F}}_{2}}\left( 0\right) = {\widehat{\omega }}_{{\mathcal{F}}_{3}}\left( 0\right) = {\left\lbrack {0.1}\right\rbrack }_{{12} \times 2}$ , ${\widehat{\omega }}_{{\mathcal{J}}_{1}}\left( 0\right) = {\widehat{\omega }}_{{\mathcal{J}}_{2}}\left( 0\right) = {\widehat{\omega }}_{{\mathcal{J}}_{3}}\left( 0\right) = {\left\lbrack {0.92}\right\rbrack }_{{12} \times 2}$ , ${\mathcal{C}}_{1} = \operatorname{diag}\underset{12}{\underbrace{\{ {0.5},\cdots ,{0.5}\} }}$ , ${\mathcal{C}}_{2} = \operatorname{diag}\underset{12}{\underbrace{\{ {0.7},\cdots ,{0.7}\} }}$ , ${\mathcal{C}}_{3} = \operatorname{diag}\underset{12}{\underbrace{\{ {0.3},\cdots ,{0.3}\} }}$ , ${\mathcal{R}}_{i} = \operatorname{diag}\{ {0.8},{0.8}\}$ , ${r}_{{\mathcal{F}}_{i}} = 2$ , ${k}_{i} = {45}$ , and ${r}_{{\mathcal{J}}_{i}} = 1$ .
+
+
+
+Fig. 2: Curves of ${\widetilde{x}}_{i,1},{x}_{0,1}^{h}$ and $- {x}_{0,1}^{h}$ .
+
+
+
+Fig. 3: Curves of ${\widetilde{x}}_{i,2},{x}_{0,2}^{h}$ and $- {x}_{0,2}^{h}$ .
+
+
+
+Fig. 4: Curves of errors and performance bounds.
+
+
+
+Fig. 5: Curves of optimal control input.
+
+
+
+Fig. 6: Curves of $\begin{Vmatrix}{\omega }_{{\mathcal{F}}_{i}}\end{Vmatrix}$ .
+
+From Fig. 2 and Fig. 3, bipartite consensus is achieved: the leader and followers 1 and 2 belong to one group, while follower 3 converges to the other group with opposite sign. Fig. 4 shows the bipartite consensus errors together with the PT performance bounds; the consensus error reaches the given accuracy 0.06 within the prescribed time of 3 s. The optimal control input of each agent is depicted in Fig. 5, in which ${u}_{i}$ rapidly converges to a small region around zero. The norms of the updated weights of the unknown terms ${\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right)$ are given in Fig. 6.
+
+## VI. CONCLUSION
+
+In this article, the problem of performance-based HiTL optimal bipartite consensus control for nonlinear MASs has been studied. First, the MASs are monitored by a human operator who sends command signals to the non-autonomous leader to respond to emergencies and guarantee the safety of the MASs. Then, under the joint design architecture of a prescribed-time performance function and an error transformation, a novel performance index function has been developed to achieve optimal bipartite consensus within the prescribed time. Subsequently, RL has been utilized to learn the solution of the HJB equation, with FLSs employed to implement the algorithm. The validity of the designed control scheme has been confirmed by simulation.
+
+## REFERENCES
+
+[1] M. Qian, Z. Wu, and B. Jiang, "Cerebellar model articulation neural network-based distributed fault tolerant tracking control with obstacle avoidance for fixed-wing UAVs," IEEE Transactions on Aerospace and Electronic Systems, vol. 59, no. 5, pp. 6841-6852, 2023.
+
+[2] S. Liu, B. Jiang, Z. Mao, and Y. Zhang, "Decentralized adaptive event-triggered fault-tolerant synchronization tracking control of multiple UAVs and UGVs with prescribed performance," IEEE Transactions on Vehicular Technology, vol. 73, no. 7, pp. 9656-9665, 2024.
+
+[3] C. Altafini, "Consensus problems on networks with antagonistic interactions," IEEE Transactions on Automatic Control, vol. 58, no. 4, pp. 935-946, 2013.
+
+[4] B. Ning, Q. Han, and Z. Zuo, "Bipartite consensus tracking for second-order multiagent systems: A time-varying function-based preset-time approach," IEEE Transactions on Automatic Control, vol. 66, no. 6, pp. 2739-2745, 2021.
+
+[5] S. Miao and H. Su, "Bipartite consensus for second-order multiagent systems with matrix-weighted signed network," IEEE Transactions on Cybernetics, vol. 52, no. 12, pp. 13038-13047, 2022.
+
+[6] Y. Zhou, Y. Liu, Y. Zhao, M. Cao, and G. Chen, "Fully distributed prescribed-time bipartite synchronization of general linear systems: An adaptive gain scheduling strategy," Automatica, vol. 161, p. 111459, 2024.
+
+[7] L. Feng, C. Wiltsche, L. Humphrey, and U. Topcu, "Synthesis of human-in-the-loop control protocols for autonomous systems," IEEE Transactions on Automation Science and Engineering, vol. 13, no. 2, pp. 450-462, 2016.
+
+[8] B. Kiumarsi and T. Basar, "Human-in-the-loop control of distributed multi-agent systems: A relative input-output approach," in 2018 IEEE Conference on Decision and Control (CDC), 2018, pp. 3343-3348.
+
+[9] L. Ma and F. Zhu, "Human-in-the-loop formation control for multi-agent systems with asynchronous edge-based event-triggered communications," Automatica, doi: 10.1016/j.automatica.2024.111744.
+
+[10] G. Lin, H. Li, H. Ma, D. Yao, and R. Lu, "Human-in-the-loop consensus control for nonlinear multi-agent systems with actuator faults," IEEE/CAA Journal of Automatica Sinica, vol. 9, no. 1, pp. 111-122, 2022.
+
+[11] J. Chen, J. Xie, J. Li, and W. Chen, "Human-in-the-loop fuzzy iterative
+
+learning control of consensus for unknown mixed-order nonlinear multi-agent systems," IEEE Transactions on Fuzzy Systems, vol. 32, no. 1, pp. 255-265, 2023.
+
+[12] G. Lin, H. Li, H. Ma, and Q. Zhou, "Distributed containment control for human-in-the-loop MASs with unknown time-varying parameters," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 69, no. 12, pp. 5300-5311, 2022.
+
+[13] P.-M. Liu, X.-G. Guo, J.-L. Wang, D. Coutinho, and Z.-G. Wu, "Preset-time and preset-accuracy human-in-the-loop cluster consensus control for MASs under stochastic actuation attacks," IEEE Transactions on Automatic Control, vol. 69, no. 3, pp. 1675-1688, 2024.
+
+[14] H. Guo, M. Chen, Y. Jiang, and M. Lungu, "Distributed adaptive human-in-the-loop event-triggered formation control for QUAVs with quantized communication," IEEE Transactions on Industrial Informatics, vol. 19, no. 6, pp. 7572-7582, 2023.
+
+[15] L. Chen, H. Liang, Y. Pan, and T. Li, "Human-in-the-loop consensus tracking control for UAV systems via an improved prescribed performance approach," IEEE Transactions on Aerospace and Electronic Systems, vol. 59, no. 6, pp. 8380-8391, 2023.
+
+[16] P. J. Werbos, "Reinforcement learning and approximate dynamic programming (RLADP)-foundations, common misconceptions, and the challenges ahead," Reinforcement Learning and Approximate Dynamic Programming for Feedback Control, pp. 1-30, 2012.
+
+[17] J. J. Murray, C. J. Cox, G. G. Lendaris, and R. Saeks, "Adaptive dynamic programming," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 32, no. 2, pp. 140-153, 2002.
+
+[18] M. Abu-Khalaf and F. L. Lewis, "Nearly optimal control laws for nonlinear systems with saturating actuators using a neural network HJB approach," Automatica, vol. 41, no. 5, pp. 779-791, 2005.
+
+[19] T. Li, W. Bai, Q. Liu, Y. Long, and C. L. P. Chen, "Distributed fault-tolerant containment control protocols for the discrete-time multiagent systems via reinforcement learning method," IEEE Transactions on Neural Networks and Learning Systems, vol. 34, no. 8, pp. 3979-3991, 2023.
+
+[20] Q. Liu, H. Yan, M. Wang, Z. Li, and S. Liu, "Data-driven optimal bipartite consensus control for second-order multiagent systems via policy gradient reinforcement learning," IEEE Transactions on Cybernetics, vol. 54, no. 6, pp. 3468-3478, 2024.
+
+[21] Y. Song, Y. Wang, J. Holloway, and M. Krstic, "Time-varying feedback for regulation of normal-form nonlinear systems in prescribed finite time," Automatica, vol. 83, pp. 243-251, 2017.
+
+[22] Y. Wang and Y. Song, "A general approach to precise tracking of nonlinear systems subject to non-vanishing uncertainties," Automatica, vol. 106, pp. 306-314, 2019.
+
+[23] Y. Cao, J. Cao, and Y. Song, "Practical prescribed time tracking control over infinite time interval involving mismatched uncertainties and nonvanishing disturbances," Automatica, vol. 136, p. 110050, 2022.
+
+[24] G. Lin, H. Li, C. K. Ahn, and D. Yao, "Event-based finite-time neural control for Human-in-the-Loop UAV attitude systems," IEEE Transactions on Neural Networks and Learning Systems, vol. 34, no. 12, pp. 10387-10397, 2023.
+
+[25] L.-X. Wang, "Stable adaptive fuzzy control of nonlinear systems," IEEE Transactions on Fuzzy Systems, vol. 1, no. 2, pp. 146-155, 1993.
+
+[26] Y. Zhang, M. Chadli, and Z. Xiang, "Prescribed-time formation control for a class of multiagent systems via fuzzy reinforcement learning," IEEE Transactions on Fuzzy Systems, vol. 31, no. 12, pp. 4195-4204, 2023.
+
+[27] M. Abu-Khalaf and F. L. Lewis, "Nearly optimal control laws for nonlinear systems with saturating actuators using a neural network HJB approach," Automatica, vol. 41, no. 5, pp. 779-791, 2005.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/IuP6BhQcDi/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/IuP6BhQcDi/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..0c42b6139d1b079abdaaf8a8d98d7efbff26dad3
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/IuP6BhQcDi/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,471 @@
+§ PERFORMANCE-BASED HUMAN-IN-THE-LOOP OPTIMAL BIPARTITE CONSENSUS CONTROL FOR MULTI-AGENT SYSTEMS VIA REINFORCEMENT LEARNING
+
+Zongsheng Huang
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China Chengdu 611731, China
+
+zs_Huang@163.com
+
+Tieshan Li
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China Chengdu 611731, China
+
+tieshanli@126.com
+
+Yue Long
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu 611731, China
+
+longyue@uestc.edu.cn
+
+Hanqing Yang
+
+School of Automation Engineering University of Electronic Science and Technology of China Chengdu 611731, China
+
+hqyang5517@uestc.edu.cn
+
+*Abstract*-This paper investigates the performance-based human-in-the-loop (HiTL) optimal bipartite consensus control problem for nonlinear multi-agent systems (MASs) under signed topology. First, to respond to emergencies and guarantee the safety of the MASs, the MASs are monitored by a human operator who sends command signals to the non-autonomous leader. Then, under the joint design architecture of a prescribed-time performance function and an error transformation, a novel performance index function involving the transformed error and the control input is developed to achieve optimal bipartite consensus within the prescribed time. Subsequently, the reinforcement learning (RL) method is utilized to learn the solution of the Hamilton-Jacobi-Bellman (HJB) equation, in which fuzzy logic systems (FLSs) are employed to implement the method. Finally, the simulation results depict the effectiveness of the constructed control scheme.
+
+Index Terms-Human-in-the-loop control, prescribed-time control, reinforcement learning, nonlinear multi-agent systems.
+
+§ I. INTRODUCTION
+
+In recent years, with the rapid development of multiple unmanned aerial vehicles (UAVs) [1], multiple unmanned ground vehicles (UGVs) [2], and related fields, multi-agent systems (MASs) have attracted increasing attention from scholars. As one of the central topics in the control of MASs, consensus control has been widely studied. Bipartite consensus, a branch of consensus control, was first introduced in [3] and takes both competitive and cooperative relationships between agents into consideration. Under bipartite consensus, the agents eventually converge to two states of opposite sign but equal magnitude. Various control strategies for bipartite consensus have been designed in [4]-[6].
+
+Notably, the MASs mentioned above are fully autonomous. However, incidents involving Boeing 737 jetliners and Tesla's autonomous driving systems have raised serious concerns and highlighted the challenges that fully autonomous MASs face in making judgments in uncertain and complex environments. Therefore, it is urgent to develop monitoring schemes that allow tasks to be completed when MASs encounter unexpected situations [7]. Fortunately, the human-in-the-loop (HiTL) control approach was introduced into MASs so that a human can supervise the entire system and respond to sudden changes by sending commands to the leader agent [8]. Since then, many studies on HiTL control for MASs have emerged [9]-[15]. In [9], a HiTL formation tracking control scheme with an edge-based event-triggered mechanism was constructed for MASs. Considering stochastic actuation attacks, the prescribed-time and prescribed-accuracy HiTL cluster consensus control problem was solved in [13]. Owing to its ability to handle emergencies, the HiTL control approach has also been adopted for multi-UAV systems [14], [15].
+
+Optimal control, a widely used control methodology, has garnered significant attention. For nonlinear systems, the optimal solution is derived from the Hamilton-Jacobi-Bellman (HJB) equation. However, obtaining an analytical solution of the HJB equation is generally infeasible. To overcome this challenge, reinforcement learning (RL), motivated by animal behavior, was proposed as a powerful tool [16]. The core idea of RL is to approximate the solution of the HJB equation using a function approximation structure. The value iteration algorithm, one of the key algorithms in RL, was developed by Murray et al. in [17], where its convergence analysis was also detailed. In [18], the policy iteration algorithm, another equally important algorithm, was designed to obtain the optimal saturated controller for nonlinear systems. Building on this work, the RL method has been used to solve optimal control problems for MASs. In [19], an RL-based optimal control protocol was designed to achieve containment control without prior knowledge of the system dynamics. For unknown discrete-time MASs, the optimal bipartite consensus control problem was solved in [20]. Nevertheless, the above results only conclude that the optimal controller is globally asymptotically stable. It is important to note that achieving a specified accuracy within a given time is crucial in many fields.
+
+This work was supported in part by the National Natural Science Foundation of China under Grant 51939001, Grant 62273072, and Grant 62203088, in part by the Natural Science Foundation of Sichuan Province under Grant 2022NSFSC0903.(Corresponding author: Tieshan Li)
+
+Fortunately, prescribed-time control (PTC) was first proposed by Song et al. [21]. PTC differs from finite-time and fixed-time control in that the preset settling time does not depend on the initial values of the system. Building on [21], the convergence rate in [22] can be predetermined as needed, and a general method for constructing the time-varying rate function was provided. In [23], a novel time-varying constraint function was devised to guarantee that the system remains operational beyond the prescribed time, leading to a global result. In particular, a PTC-based HiTL control scheme was developed in [13] to realize cluster consensus within a given time. However, to the best of the authors' knowledge, a bipartite consensus control scheme considering both optimal performance and prescribed-time performance under the HiTL framework has not been fully explored, which motivates our research.
+
+Driven by these observations, this paper focuses on investigating the performance-based HiTL optimal bipartite consensus control problem. The main contributions are summarized below.
+
+(1) Unlike the autonomous leaders described in [4]-[6], which lack intelligent decision-making, this paper improves the security, stability, and emergency response capability of the system by designing the leader of the MASs to be non-autonomous, with its time-varying control input governed by a human operator.
+
+(2) Compared with the existing optimal results for MASs in [19], [20], a unified design framework combining PTC and the RL method is proposed to realize both optimal performance and prescribed-time performance, where the settling time and accuracy can be preset independently of the initial values.
+
+The structure of this paper is as follows. Section II presents the considered system and some assumptions. Section III designs the main results, including the PTC performance function and the optimal controller. Section IV provides the convergence analysis. Section V gives the simulation results, and Section VI concludes the paper.
+
+§ II. PROBLEM FORMULATION AND PRELIMINARIES
+
+§ A. SIGNED COMMUNICATION TOPOLOGIES
+
+The structurally balanced bipartite communication topology containing $N$ followers is represented by a directed graph $\mathcal{G} = \{ \mathcal{V},\varepsilon ,\mathcal{A}\}$ , where $\mathcal{V} = \left\{ {{\mathcal{V}}_{1},{\mathcal{V}}_{2},\cdots ,{\mathcal{V}}_{N}}\right\}$ represents the vertex set, which is divided into the cooperative set ${\mathcal{V}}_{\alpha }$ and the competitive set ${\mathcal{V}}_{\beta }$ such that ${\mathcal{V}}_{\alpha } \cap {\mathcal{V}}_{\beta } = \varnothing$ and ${\mathcal{V}}_{\alpha } \cup {\mathcal{V}}_{\beta } = \mathcal{V}$ . $\varepsilon \subseteq \mathcal{V} \times \mathcal{V}$ represents the edge set of the $N$ followers. Let $\mathcal{A} = \left\lbrack {a}_{ij}\right\rbrack \in {\mathbb{R}}^{N \times N}$ be the signed weight matrix, where ${a}_{ij} > 0$ if ${\mathcal{V}}_{i},{\mathcal{V}}_{j} \in {\mathcal{V}}_{m}, m \in \{ \alpha ,\beta \}$ , and ${a}_{ij} < 0$ if ${\mathcal{V}}_{i} \in {\mathcal{V}}_{m},{\mathcal{V}}_{j} \in {\mathcal{V}}_{n}, m \neq n, m, n \in \{ \alpha ,\beta \}$ . The neighbor set of the $i$ th follower is defined as ${\mathcal{N}}_{i} = \left\{ {j \in \mathcal{V} : {a}_{ij} \neq 0}\right\}$ . Define $\mathcal{L} = \mathcal{D} - \mathcal{A} \in {\mathbb{R}}^{N \times N}$ as the Laplacian matrix of $\mathcal{G}$ , where $\mathcal{D} = \operatorname{diag}\left( {{d}_{1},{d}_{2},\cdots ,{d}_{N}}\right) \in {\mathbb{R}}^{N \times N}$ denotes the degree matrix with ${d}_{i} = \mathop{\sum }\limits_{{j = 1}}^{N}\left| {a}_{ij}\right|$ .
+
+The augmented graph consisting of one leader and $N$ followers is denoted as $\widetilde{\mathcal{G}} = \{ \widetilde{\mathcal{V}},\widetilde{\varepsilon }\}$ , in which $\widetilde{\mathcal{V}} = \left\{ {{\mathcal{V}}_{0},{\mathcal{V}}_{1},{\mathcal{V}}_{2},\cdots ,{\mathcal{V}}_{N}}\right\}$ and $\widetilde{\varepsilon } \subseteq \widetilde{\mathcal{V}} \times \widetilde{\mathcal{V}}$ . Let $\mathcal{B} = \operatorname{diag}\left\{ {\left| {b}_{1}\right| ,\left| {b}_{2}\right| ,\cdots ,\left| {b}_{N}\right| }\right\} \in {\mathbb{R}}^{N \times N}$ , where ${b}_{i} \neq 0$ indicates that the information of the leader is available to the $i$ th node; ${b}_{i} > 0$ represents a cooperative relation and ${b}_{i} < 0$ a competitive relation.
+
+§ B. PROBLEM FORMULATION
+
+Assume that the nonlinear MAS is composed of $N\left( { \geq 2}\right)$ followers and one leader. The dynamics of the $i$ th follower are given by
+
+$$
+{\dot{x}}_{i} = {f}_{i}\left( {x}_{i}\right) + {g}_{i}\left( {x}_{i}\right) {u}_{i},i = 1,2,\cdots ,N \tag{1}
+$$
+
+where ${x}_{i}\left( t\right) \in {\mathbb{R}}^{n}$ denotes the state, ${u}_{i}\left( t\right) \in {\mathbb{R}}^{m}$ is the control input, ${f}_{i}\left( {x}_{i}\right) \in {\mathbb{R}}^{n}$ is the internal dynamics, and ${g}_{i}\left( {x}_{i}\right) \in {\mathbb{R}}^{n \times m}$ is the input dynamics.
+
+Next, the dynamics of the human-manipulated leader is given as
+
+$$
+{\dot{x}}_{0}^{h} = {f}_{0}^{h}\left( {x}_{0}^{h}\right) + {u}_{0}^{h}, \tag{2}
+$$
+
+where ${x}_{0}^{h}\left( t\right) \in {\mathbb{R}}^{n}$ denotes the state, ${u}_{0}^{h}\left( t\right) \in {\mathbb{R}}^{m}$ is the nonzero control input sent by the human operator to the leader, and ${f}_{0}^{h}\left( {x}_{0}^{h}\right) \in {\mathbb{R}}^{n}$ represents the internal dynamics.
+
+The following assumptions and lemma are imposed.
+
+Assumption 1. [19] The signed graph $\mathcal{G}$ has a directed spanning tree.
+
+Assumption 2. [24] The input of the human operator always keeps the leader (2) stable.
+
+Lemma 1. [25]: A fuzzy logic system (FLS) can approximate a nonlinear continuous function $f\left( \mathfrak{x}\right) \in \mathbb{R}$ on a compact set ${\Omega }_{f} \subset {\mathbb{R}}^{n}$ such that
+
+$$
+\mathop{\sup }\limits_{{\mathfrak{x} \in {\Omega }_{f}}}\left| {f\left( \mathfrak{x}\right) - {\Theta }^{T}\phi \left( \mathfrak{x}\right) }\right| \leq b \tag{3}
+$$
+
+with $b > 0$ .
+
+§ III. MAIN RESULTS
+
+§ A. PRESCRIBED-TIME FUNCTION AND ERROR TRANSFORMATION
+
+To achieve prescribed-time (PT) performance for MASs, the PT performance function $\vartheta \left( t\right)$ is given as
+
+$$
+\vartheta \left( t\right) = \left\{ \begin{array}{ll} \iota {e}^{-\beta {\left( \frac{{T}_{r}}{{T}_{r} - t}\right) }^{h}} + {\vartheta }_{{T}_{r}}, & 0 \leq t < {T}_{r} \\ {\vartheta }_{{T}_{r}}, & t \geq {T}_{r} \end{array}\right. \tag{4}
+$$
+
+where $h > 0,\iota > 0$ and $\beta > 0$ are design parameters, and $0 < {T}_{r} < \infty$ and $0 < {\vartheta }_{{T}_{r}} < \infty$ represent the user-defined settling time and steady-state tracking accuracy, respectively.
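
To make the shape of (4) concrete, here is a minimal numerical sketch (not from the paper; the exponent's $T$ is read as ${T}_{r}$, and the parameter values are arbitrary assumptions):

```python
import math

def pt_performance(t, iota=1.0, beta=1.0, h=1.0, T_r=3.0, theta_Tr=0.06):
    """Prescribed-time performance function (4): decays from
    iota*exp(-beta) + theta_Tr at t = 0 to theta_Tr at t = T_r."""
    if t >= T_r:
        return theta_Tr
    # the exponent grows without bound as t -> T_r, killing the first term
    return iota * math.exp(-beta * (T_r / (T_r - t)) ** h) + theta_Tr

assert pt_performance(3.0) == 0.06                # steady-state accuracy
assert pt_performance(0.0) > pt_performance(2.9)  # decay on [0, T_r)
```

The key property is that the bound collapses to ${\vartheta }_{{T}_{r}}$ exactly at $t = {T}_{r}$, regardless of where the trajectory starts.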
+
+Construct the bipartite consensus error as ${e}_{i} =$ $\mathop{\sum }\limits_{{j = 1}}^{N}\left| {a}_{ij}\right| \left( {{x}_{i} - \operatorname{sign}\left( {a}_{ij}\right) {x}_{j}}\right) + \left| {b}_{i}\right| \left( {{x}_{i} - \operatorname{sign}\left( {b}_{i}\right) {x}_{0}^{h}}\right) ,{e}_{i} =$ ${\left\lbrack {e}_{i,1},\cdots ,{e}_{i,n}\right\rbrack }^{T} \in {\mathbb{R}}^{n}$ and adopt the error transformation function as
+
+$$
+{\varrho }_{i,\iota } = \tan \left( {\frac{\pi }{2}\frac{{e}_{i,\iota }}{\vartheta }}\right) ,\iota = 1,\cdots ,n, \tag{5}
+$$
+
+where $\left| {{e}_{i,\iota }\left( 0\right) }\right| < \vartheta \left( 0\right)$ .
+
+Based on (5), it yields
+
+$$
+{e}_{i,\iota } = \frac{2\vartheta }{\pi }\arctan \left( {\varrho }_{i,\iota }\right) ,\iota = 1,\cdots ,n,i = 1,\cdots ,N. \tag{6}
+$$
+
+Remark 1. From (5), the inequality $- \vartheta \leq {e}_{i,\iota } \leq \vartheta ,\forall t \geq 0$ holds. Combined with the definition in (4), it further follows that $- {\vartheta }_{{T}_{r}} \leq {e}_{i,\iota } \leq {\vartheta }_{{T}_{r}},\forall t \geq {T}_{r}$ if ${\varrho }_{i,\iota }$ is bounded, which means that the PT performance of ${e}_{i}$ can be ensured.
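
A quick numerical check of the transformation pair (5)-(6) (an illustrative sketch with arbitrary values, not the paper's code) confirms that they are mutual inverses while $\left| e\right| < \vartheta$ :

```python
import math

def to_rho(e, theta):
    """Error transformation (5); finite only while |e| < theta."""
    return math.tan(0.5 * math.pi * e / theta)

def to_e(rho, theta):
    """Inverse transformation (6)."""
    return (2.0 * theta / math.pi) * math.atan(rho)

theta = 0.5
for e in (-0.4, 0.0, 0.3):
    assert abs(to_e(to_rho(e, theta), theta) - e) < 1e-12
```

Because $\arctan$ is bounded, keeping $\varrho$ bounded pins $e$ inside the shrinking tube $\pm \vartheta \left( t\right)$, which is exactly the mechanism behind Remark 1.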
+
+§ B. OPTIMAL CONTROL SCHEME DESIGN
+
+Define the performance index function as
+
+$$
+{J}_{i} = {\int }_{t}^{\infty }\left( {{e}_{i}^{T}{\mathcal{Q}}_{i}{e}_{i} + {u}_{i}^{T}{\mathcal{R}}_{i}{u}_{i}}\right) {d\tau } \tag{7}
+$$
+
+$$
+= {\int }_{t}^{\infty }\left( {{\left( \frac{2\vartheta }{\pi }{\mathcal{A}}_{i}\right) }^{T}{\mathcal{Q}}_{i}\left( {\frac{2\vartheta }{\pi }{\mathcal{A}}_{i}}\right) + {u}_{i}^{T}{\mathcal{R}}_{i}{u}_{i}}\right) {d\tau },
+$$
+
+where ${\mathcal{Q}}_{i}$ and ${\mathcal{R}}_{i}$ are symmetric positive definite matrices with suitable dimensions, ${\mathcal{A}}_{i} = {\left\lbrack {\mathcal{A}}_{i,1},\cdots ,{\mathcal{A}}_{i,n}\right\rbrack }^{T} =$ ${\left\lbrack \arctan \left( {\varrho }_{i,1}\right) ,\cdots ,\arctan \left( {\varrho }_{i,n}\right) \right\rbrack }^{T}$ .
+
+Taking the time derivative of ${\mathcal{A}}_{i,\iota }$ , one has
+
+$$
+{\dot{\mathcal{A}}}_{i,\iota } = \frac{1}{1 + {\varrho }_{i,\iota }^{2}}{\chi }_{i,\iota }\left( {{\dot{e}}_{i,\iota } - {\nu }_{i,\iota }}\right) , \tag{8}
+$$
+
+where ${\chi }_{i,\iota } = \frac{\pi }{{2\vartheta }{\cos }^{2}\left( {\frac{\pi }{2}\frac{{e}_{i,\iota }}{\vartheta }}\right) },{\nu }_{i,\iota } = \frac{{e}_{i,\iota }\dot{\vartheta }}{\vartheta },{\dot{e}}_{i} = {\Gamma }_{i}\left( {{f}_{i} + {g}_{i}{u}_{i}}\right) -$ $\mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\dot{x}}_{j} - {b}_{i}{\dot{x}}_{0}^{h}$ and ${\Gamma }_{i} = {d}_{i} + \left| {b}_{i}\right|$ .
+
+Then, define the Hamiltonian function as
+
+$$
+{H}_{i}\left( {{\mathcal{A}}_{i},\vartheta ,{u}_{i},\frac{\partial {J}_{i}}{\partial {\mathcal{A}}_{i}},\frac{\partial {J}_{i}}{\partial \vartheta }}\right) = {\left( \frac{2\vartheta }{\pi }{\mathcal{A}}_{i}\right) }^{T}{\mathcal{Q}}_{i}\left( {\frac{2\vartheta }{\pi }{\mathcal{A}}_{i}}\right)
+$$
+
+$$
++ {u}_{i}^{T}{\mathcal{R}}_{i}{u}_{i} + \frac{\partial {J}_{i}}{\partial {\mathcal{A}}_{i}}\left\lbrack {{\bar{\chi }}_{i}\left( {{\dot{e}}_{i} - {\nu }_{i}}\right) }\right\rbrack + \frac{\partial {J}_{i}}{\partial \vartheta }\frac{\partial \vartheta }{\partial t} \tag{9}
+$$
+
+$$
+= {\left( \frac{2\vartheta }{\pi }{\mathcal{A}}_{i}\right) }^{T}{\mathcal{Q}}_{i}\left( {\frac{2\vartheta }{\pi }{\mathcal{A}}_{i}}\right) + {u}_{i}^{T}{\mathcal{R}}_{i}{u}_{i} + \frac{\partial {J}_{i}}{\partial {\varrho }_{i}}\left\lbrack {{\chi }_{i}\left( {{\dot{e}}_{i} - {\nu }_{i}}\right) }\right\rbrack
+$$
+
+$$
++ \frac{\partial {J}_{i}}{\partial \vartheta }\frac{\partial \vartheta }{\partial t},
+$$
+
+where ${\bar{\chi }}_{i} = \operatorname{diag}\left\{ {\frac{{\chi }_{i,1}}{1 + {\varrho }_{i,1}^{2}},\cdots ,\frac{{\chi }_{i,n}}{1 + {\varrho }_{i,n}^{2}}}\right\} ,{\nu }_{i} = \left\lbrack {{\nu }_{i,1},\cdots ,{\nu }_{i,n}}\right\rbrack$ and ${\chi }_{i} = \operatorname{diag}\left\{ {{\chi }_{i,1},\cdots ,{\chi }_{i,n}}\right\}$ .
+
+The corresponding HJB equation is given as
+
+$$
+\mathop{\min }\limits_{{u}_{i}}{H}_{i}\left( {{\mathcal{A}}_{i},\vartheta ,{u}_{i}^{ * },\frac{\partial {J}_{i}^{ * }}{\partial {\mathcal{A}}_{i}},\frac{\partial {J}_{i}^{ * }}{\partial \vartheta }}\right) = 0. \tag{10}
+$$
+
+Differentiating (10) with respect to ${u}_{i}$ , one has
+
+$$
+{u}_{i}^{ * } = - \frac{{\Gamma }_{i}}{2}{\mathcal{R}}_{i}^{-1}{g}_{i}^{T}{\chi }_{i}^{T}\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}}. \tag{11}
+$$
+
+Substituting (11) into (10) yields
+
+$$
+{\left( \frac{2\vartheta }{\pi }{\mathcal{A}}_{i}\right) }^{T}{\mathcal{Q}}_{i}\left( {\frac{2\vartheta }{\pi }{\mathcal{A}}_{i}}\right) + \frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}}\left\lbrack {{\chi }_{i}\left( {{\Gamma }_{i}{f}_{i} - \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\dot{x}}_{j} - {b}_{i}{\dot{x}}_{0}^{h}}\right. }\right.
+$$
+
+$$
+\left. \left. {-{\nu }_{i}}\right) \right\rbrack + \frac{\partial {J}_{i}^{ * }}{\partial \vartheta }\frac{\partial \vartheta }{\partial t} - \frac{{\Gamma }_{i}^{2}}{4}\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}^{T}}{g}_{i}{\chi }_{i}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{T}{g}_{i}^{T}\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}} = 0.
+$$
+
+Inspired by [26], $\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}}$ can be decomposed as
+
+$$
+\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}} = \frac{2{k}_{i}}{{\Gamma }_{i}}{\chi }_{i}^{-2}{\varrho }_{i} + \frac{2}{{\Gamma }_{i}}{\chi }_{i}^{-2}{\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right) + \frac{1}{{\Gamma }_{i}}{\chi }_{i}^{-2}{\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right) , \tag{12}
+$$
+
+where ${k}_{i} > 0,{\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right) = {\mathcal{R}}_{i}{\chi }_{i}\left( {{f}_{i}\left( {x}_{i}\right) - {\dot{x}}_{0}^{h} - {o}^{-1}{\nu }_{i}}\right)$ with $o = {\lambda }_{\max }\left( {\mathcal{L} + \mathcal{B}}\right)$ , and ${\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right) = - 2{k}_{i}{\varrho }_{i} - 2{\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right) + {\Gamma }_{i}{\chi }_{i}^{2}\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}}$ , so that (12) holds identically.
+
+Substituting (12) into (11), one has
+
+$$
+{u}_{i}^{ * } = - {k}_{i}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}{\varrho }_{i} - {\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}{\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right)
+$$
+
+$$
+- \frac{1}{2}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}{\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right) . \tag{13}
+$$
+
+§ C. PI ALGORITHM AND FLSS-BASED IMPLEMENTATION
+
+In general, the HJB equation (10) cannot be solved analytically. Therefore, the policy iteration (PI) approach in Algorithm 1 is adopted to find the optimal solution.
+
+Algorithm 1: PI Algorithm for Solving the PT Optimal Consensus Control Policy
+
+Step 1: Initialization. Give an initial control protocol ${u}_{i}^{\left( 0\right) },\forall i$ , and set $l = 0$ .
+
+Step 2: Policy evaluation. Solve the cost function ${J}_{i}^{\left( l\right) }$ from ${H}_{i}\left( {{\mathcal{A}}_{i},\vartheta ,{u}_{i}^{\left( l\right) },\frac{\partial {J}_{i}^{\left( l\right) }}{\partial {\mathcal{A}}_{i}},\frac{\partial {J}_{i}^{\left( l\right) }}{\partial \vartheta }}\right) = 0$ .
+
+Step 3: Policy improvement. Update the control input ${u}_{i}^{\left( l + 1\right) }$ according to (13).
+
+Step 4: If $\begin{Vmatrix}{{J}_{i}^{\left( l + 1\right) } - {J}_{i}^{\left( l\right) }}\end{Vmatrix} \leq \aleph$ with the predefined parameter $\aleph > 0$ , stop; otherwise, set $l = l + 1$ and return to Step 2.
+
+The convergence and optimality of Algorithm 1 have been proved in [27] and are omitted here.
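
The evaluate/improve/stop loop of Algorithm 1 can be illustrated on a toy problem. The sketch below is a stand-in under stated assumptions, not the paper's MAS: a scalar linear system where policy evaluation has a closed form, replacing the FLS machinery.

```python
def policy_iteration(a=1.0, b=1.0, q=1.0, r=1.0, K0=2.0, aleph=1e-10):
    """Algorithm-1-style PI for dx/dt = a*x + b*u with cost
    integral(q*x^2 + r*u^2); the value is J = P*x^2 under u = -K*x."""
    K, P_prev = K0, float("inf")
    while True:
        # Step 2 (policy evaluation): solve 2(a - b*K)P + q + r*K^2 = 0
        P = (q + r * K * K) / (2.0 * (b * K - a))  # requires a - b*K < 0
        if abs(P - P_prev) <= aleph:               # Step 4 (stopping rule)
            return K, P
        P_prev = P
        K = b * P / r                              # Step 3 (policy improvement)

K_opt, P_opt = policy_iteration()
print(round(P_opt, 6))  # converges to the Riccati solution 1 + sqrt(2)
```

Starting from any stabilizing gain, the successive cost functions ${P}^{\left( l\right)}$ converge to the Riccati solution, which is why the stopping rule on $\begin{Vmatrix}{{J}_{i}^{\left( l + 1\right) } - {J}_{i}^{\left( l\right) }}\end{Vmatrix}$ is sound.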
+
+In view of the unknown terms ${\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right)$ and ${\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right)$ in (13), FLSs are used to approximate them as
+
+$$
+{\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right) = {\omega }_{{\mathcal{F}}_{i}}^{T}{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) + {\epsilon }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) , \tag{14}
+$$
+
+$$
+{\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right) = {\omega }_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + {\epsilon }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) , \tag{15}
+$$
+
+where ${\omega }_{{\mathcal{F}}_{i}} \in {\mathbb{R}}^{{h}_{c1} \times n}$ and ${\omega }_{{\mathcal{J}}_{i}} \in {\mathbb{R}}^{{h}_{c2} \times n}$ represent the ideal weight matrices, with ${h}_{c1}$ and ${h}_{c2}$ being the numbers of fuzzy rules; ${\phi }_{{\mathcal{F}}_{i}} \in {\mathbb{R}}^{{h}_{c1}}$ and ${\phi }_{{\mathcal{J}}_{i}} \in {\mathbb{R}}^{{h}_{c2}}$ are fuzzy basis functions; ${\epsilon }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right)$ and ${\epsilon }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right)$ denote bounded approximation errors.
+
+Thus, (13) becomes
+
+$$
+{u}_{i}^{ * } = - {k}_{i}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}{\varrho }_{i} - {\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}\left( {{\omega }_{{\mathcal{F}}_{i}}^{T}{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) + {\epsilon }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right)
+$$
+
+$$
+- \frac{1}{2}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}\left( {{\omega }_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + {\epsilon }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right) .
+$$
+
+However, since ${\omega }_{{\mathcal{F}}_{i}}$ and ${\omega }_{{\mathcal{J}}_{i}}$ are unknown, the estimated forms of (14) and (15) are
+
+$$
+{\widehat{\mathcal{F}}}_{i}\left( {\mathcal{X}}_{i}\right) = {\widehat{\omega }}_{{\mathcal{F}}_{i}}^{T}{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) , \tag{16}
+$$
+
+$$
+{\widehat{\mathcal{J}}}_{i}\left( {\mathcal{X}}_{i}\right) = {\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) , \tag{17}
+$$
+
+where ${\widehat{\omega }}_{{\mathcal{F}}_{i}} \in {\mathbb{R}}^{{h}_{c1} \times n}$ and ${\widehat{\omega }}_{{\mathcal{J}}_{i}} \in {\mathbb{R}}^{{h}_{c2} \times n}$ represent estimated weight matrices.
+
+According to (16) and (17), one has
+
+$$
+{\widehat{u}}_{i}^{ * } = - {k}_{i}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}{\varrho }_{i} - {\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}\left( {{\widehat{\omega }}_{{\mathcal{F}}_{i}}^{T}{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right)
+$$
+
+$$
+- \frac{1}{2}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}\left( {{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right) . \tag{18}
+$$
+
+The updating laws are constructed as
+
+$$
+{\dot{\widehat{\omega }}}_{{\mathcal{F}}_{i}} = {\mathcal{C}}_{i}\left( {o{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) {\varrho }_{i}^{T}{\mathcal{R}}_{i}^{-1} - {r}_{{\mathcal{F}}_{i}}{\widehat{\omega }}_{{\mathcal{F}}_{i}}}\right) , \tag{19}
+$$
+
+$$
+{\dot{\widehat{\omega }}}_{{\mathcal{J}}_{i}} = - {r}_{{\mathcal{J}}_{i}}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}}, \tag{20}
+$$
+
+where ${\mathcal{C}}_{i} \in {\mathbb{R}}^{{h}_{c1} \times {h}_{c1}}$ is a positive-definite matrix, ${r}_{{\mathcal{F}}_{i}} >$ $0,{r}_{{\mathcal{J}}_{i}} > 0,r > 0$ are design parameters.
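
An Euler-discretized sketch of the updating law (19) with Gaussian fuzzy basis functions is given below (illustrative only: the centers, step size, and input signals are arbitrary assumptions; the matrix values mirror the simulation section's ${\mathcal{C}}_{1}$ , ${\mathcal{R}}_{i}$ , ${r}_{{\mathcal{F}}_{i}}$ and ${\widehat{\omega }}_{{\mathcal{F}}_{i}}\left( 0\right)$ ):

```python
import numpy as np

rng = np.random.default_rng(0)
h_c1, n = 12, 2                       # fuzzy rules, state dimension (assumed)
centers = rng.uniform(-6.0, 6.0, size=(h_c1, n))

def phi(x):
    """Gaussian fuzzy basis functions, one per rule center."""
    d = centers - x                              # (h_c1, n)
    return np.exp(-0.5 * np.sum(d * d, axis=1))  # (h_c1,)

def step_wF(w_F, x, rho, C, R_inv, o=1.0, r_F=2.0, dt=1e-3):
    """One Euler step of (19): dw_F/dt = C*(o*phi*rho^T*R^{-1} - r_F*w_F)."""
    rhs = o * np.outer(phi(x), rho) @ R_inv - r_F * w_F
    return w_F + dt * (C @ rhs)

C = 0.5 * np.eye(h_c1)                  # C_1 = diag{0.5, ..., 0.5}
R_inv = np.linalg.inv(0.8 * np.eye(n))  # R_i = diag{0.8, 0.8}
w_F = 0.1 * np.ones((h_c1, n))          # w_F(0) = [0.1]_{12x2}
w_F = step_wF(w_F, x=np.zeros(n), rho=np.array([0.2, -0.1]), C=C, R_inv=R_inv)
```

The $- {r}_{{\mathcal{F}}_{i}}{\widehat{\omega }}_{{\mathcal{F}}_{i}}$ term acts as sigma-modification leakage, keeping the weight estimates bounded even under poor excitation.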
+
+§ IV. STABILITY ANALYSIS
+
+Theorem 1. Consider the MAS consisting of the followers (1) and the leader (2) under Assumptions 1 and 2. By choosing ${k}_{i} > \frac{3}{4}$ and adopting the optimal control input (18) and the adaptive laws (19) and (20), the consensus error converges to the prescribed accuracy within the prescribed time.
+
+Proof. Construct the Lyapunov function as
+
+$$
+V = \frac{1}{2}{\varrho }^{T}\varrho + \frac{1}{2}\mathop{\sum }\limits_{{i = 1}}^{N}\left( {{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\mathcal{C}}_{i}^{-1}{\widetilde{\omega }}_{{\mathcal{F}}_{i}} + {\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}}\right) \tag{21}
+$$
+
+where $\varrho = {\left\lbrack {\varrho }_{1}^{T},\cdots ,{\varrho }_{N}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{Nn}$ , and the estimation errors are ${\widetilde{\omega }}_{{\mathcal{F}}_{i}} = {\omega }_{{\mathcal{F}}_{i}} - {\widehat{\omega }}_{{\mathcal{F}}_{i}}$ and ${\widetilde{\omega }}_{{\mathcal{J}}_{i}} = {\omega }_{{\mathcal{J}}_{i}} - {\widehat{\omega }}_{{\mathcal{J}}_{i}}$ . Invoking (5), (19) and (20), it yields
+
+$$
+\dot{V} = {\varrho }^{T}\left\lbrack {\chi \left( {\mathcal{L} + \mathcal{B}}\right) \dot{e} - {\chi \nu }}\right\rbrack - \mathop{\sum }\limits_{{i = 1}}^{N}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}\left( {o{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) {\varrho }_{i}^{T}{\mathcal{R}}_{i}^{-1} - {r}_{{\mathcal{F}}_{i}}{\widehat{\omega }}_{{\mathcal{F}}_{i}}}\right)
+$$
+
+$$
++ \mathop{\sum }\limits_{{i = 1}}^{N}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}{r}_{{\mathcal{J}}_{i}}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r}\right) {\mathcal{I}}_{{h}_{c2}}{\widehat{\omega }}_{{\mathcal{J}}_{i}}
+$$
+
+$$
+\leq \mathop{\sum }\limits_{{i = 1}}^{N}{\varrho }_{i}^{T}o\left( { - {k}_{i}{\mathcal{R}}_{i}^{-1}{\varrho }_{i} - {\mathcal{R}}_{i}^{-1}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) + {\mathcal{R}}_{i}^{-1}{\epsilon }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) - \frac{1}{2}{\mathcal{R}}_{i}^{-1}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right)
+$$
+
+$$
+- \mathop{\sum }\limits_{{i = 1}}^{N}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}\left( {o{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) {\varrho }_{i}^{T}{\mathcal{R}}_{i}^{-1} - {r}_{{\mathcal{F}}_{i}}{\widehat{\omega }}_{{\mathcal{F}}_{i}}}\right) + \mathop{\sum }\limits_{{i = 1}}^{N}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}{r}_{{\mathcal{J}}_{i}}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r}\right) {\mathcal{I}}_{{h}_{c2}}{\widehat{\omega }}_{{\mathcal{J}}_{i}}
+$$
+
+$$
+\leq \mathop{\sum }\limits_{{i = 1}}^{N}{\varrho }_{i}^{T}o\left( { - {k}_{i}{\mathcal{R}}_{i}^{-1}{\varrho }_{i} + {\mathcal{R}}_{i}^{-1}{\epsilon }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) - \frac{{\mathcal{R}}_{i}^{-1}}{2}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right) + \mathop{\sum }\limits_{{i = 1}}^{N}{r}_{{\mathcal{F}}_{i}}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\widehat{\omega }}_{{\mathcal{F}}_{i}}
+$$
+
+$$
++ \mathop{\sum }\limits_{{i = 1}}^{N}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}{r}_{{\mathcal{J}}_{i}}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r}\right) {\mathcal{I}}_{{h}_{c2}}{\widehat{\omega }}_{{\mathcal{J}}_{i}}. \tag{22}
+$$
+
+Using Young's inequality, we have
+
+$$
+o{\varrho }_{i}^{T}{\mathcal{R}}_{i}^{-1}{\epsilon }_{{\mathcal{F}}_{i}} \leq \frac{o}{2}{\mathcal{R}}_{i}^{-1}{\begin{Vmatrix}{\varrho }_{i}\end{Vmatrix}}^{2} + \frac{o}{2}{\mathcal{R}}_{i}^{-1}{\begin{Vmatrix}{\epsilon }_{{\mathcal{F}}_{i}}\end{Vmatrix}}^{2}, \tag{23}
+$$
+
+$$
+- \frac{o{\mathcal{R}}_{i}^{-1}}{2}{\varrho }_{i}^{T}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) \leq \frac{o{\mathcal{R}}_{i}^{-1}}{4}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}} + \frac{o{\mathcal{R}}_{i}^{-1}}{4}{\begin{Vmatrix}{\varrho }_{i}\end{Vmatrix}}^{2}, \tag{24}
+$$
+
+$$
+{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\widehat{\omega }}_{{\mathcal{F}}_{i}} \leq - \frac{1}{2}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\widetilde{\omega }}_{{\mathcal{F}}_{i}} + \frac{1}{2}{\omega }_{{\mathcal{F}}_{i}}^{T}{\omega }_{{\mathcal{F}}_{i}}, \tag{25}
+$$
+
+$$
+{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}} \leq \frac{-{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}}{2}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right.
+$$
+
+$$
+\left. {+r{\mathcal{I}}_{{h}_{c2}}}\right) {\widetilde{\omega }}_{{\mathcal{J}}_{i}} + \frac{{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}}{2}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}}.
+$$
+
+(26)
+
+Calculating (22) by bringing (23)-(26), one has
+
+$$
+\dot{V} \leq - \mathop{\sum }\limits_{{i = 1}}^{N}o{\mathcal{R}}_{i}^{-1}\left( {{k}_{i} - \frac{3}{4}}\right) {\begin{Vmatrix}{\varrho }_{i}\end{Vmatrix}}^{2} - \mathop{\sum }\limits_{{i = 1}}^{N}\frac{{r}_{{\mathcal{F}}_{i}}}{2}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}
+$$
+
+$$
+- \mathop{\sum }\limits_{{i = 1}}^{N}\frac{{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}}{2}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widetilde{\omega }}_{{\mathcal{J}}_{i}} + \Lambda
+$$
+
+$$
+\leq - \frac{{\kappa }_{1}}{2}\mathop{\sum }\limits_{{i = 1}}^{N}{\begin{Vmatrix}{\varrho }_{i}\end{Vmatrix}}^{2} - \frac{{\kappa }_{2}}{2}\mathop{\sum }\limits_{{i = 1}}^{N}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\mathcal{C}}_{i}^{-1}{\widetilde{\omega }}_{{\mathcal{F}}_{i}} - \frac{{\kappa }_{3}}{2}\mathop{\sum }\limits_{{i = 1}}^{N}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}{\widetilde{\omega }}_{{\mathcal{J}}_{i}} + \Lambda
+$$
+
+$$
+\leq - {\kappa V} + \Lambda , \tag{27}
+$$
+
+where $\Lambda = \mathop{\sum }\limits_{{i = 1}}^{N}\frac{o}{2}{\mathcal{R}}_{i}^{-1}{\begin{Vmatrix}{\epsilon }_{{\mathcal{F}}_{i}}\end{Vmatrix}}^{2} + \mathop{\sum }\limits_{{i = 1}}^{N}\frac{o{\mathcal{R}}_{i}^{-1}}{4}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}} + \mathop{\sum }\limits_{{i = 1}}^{N}\frac{o{\mathcal{R}}_{i}^{-1}}{4}{\begin{Vmatrix}{\varrho }_{i}\end{Vmatrix}}^{2} + \mathop{\sum }\limits_{{i = 1}}^{N}\frac{{r}_{{\mathcal{F}}_{i}}}{2}{\omega }_{{\mathcal{F}}_{i}}^{T}{\omega }_{{\mathcal{F}}_{i}} + \mathop{\sum }\limits_{{i = 1}}^{N}\frac{{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}}{2}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}}$ , ${\kappa }_{1} = \mathop{\min }\limits_{{i = 1,\cdots ,N}}\left\{ {{2o}{\mathcal{R}}_{i}^{-1}\left( {{k}_{i} - \frac{3}{4}}\right) }\right\}$ , ${\kappa }_{2} = \mathop{\min }\limits_{{i = 1,\cdots ,N}}\left\{ \frac{{r}_{{\mathcal{F}}_{i}}}{{\lambda }_{\max }\left( {\mathcal{C}}_{i}^{-1}\right) }\right\}$ , ${\kappa }_{3} = \mathop{\min }\limits_{{i = 1,\cdots ,N}}\left\{ {{r}_{{\mathcal{J}}_{i}}{\lambda }_{\min }\left( {\phi }_{i}\right) }\right\}$ , $\kappa = \min \left\{ {{\kappa }_{1},{\kappa }_{2},{\kappa }_{3}}\right\}$ , and ${\lambda }_{\min }\left( {\phi }_{i}\right)$ is the minimal eigenvalue of ${\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right)$ . By the comparison lemma, (27) implies $V\left( t\right) \leq {e}^{-{\kappa t}}V\left( 0\right) + \Lambda /\kappa$ , so ${\varrho }_{i}$ and the weight estimation errors are uniformly ultimately bounded; by Remark 1, the consensus error then satisfies the prescribed-time performance bound, which completes the proof.
+
+§ V. SIMULATION
+
+A nonlinear MAS composed of four single-link robot arms (three followers and one human-controlled leader) is used to verify the effectiveness of the proposed control scheme. The model of each follower is given as [12]
+
+$$
+{J}_{i}{\ddot{q}}_{i} + {D}_{i}{\dot{q}}_{i} + {M}_{i}g{d}_{i}\sin \left( {q}_{i}\right) = {u}_{i},i = 1,\cdots ,3,
+$$
+
+where the physical parameters $g,{M}_{i},{D}_{i},{J}_{i}$ and ${d}_{i}$ can be found in [12]. ${u}_{0}^{h}$ is set as
+
+$$
+{u}_{0}^{h} = \left\{ \begin{array}{ll} {0.3}{\sin }^{2}\left( t\right) , & 0 \leq t < {15} \\ 0, & {15} \leq t < {30} \\ \sin \left( t\right) \cos \left( t\right) , & {30} \leq t \leq {50}. \end{array}\right.
+$$
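
For reproducibility, the operator input above transcribes directly to:

```python
import math

def u0_h(t):
    """Human-operator input u_0^h(t) over the simulated horizon [0, 50]."""
    if 0 <= t < 15:
        return 0.3 * math.sin(t) * math.sin(t)
    if 15 <= t < 30:
        return 0.0
    if 30 <= t <= 50:
        return math.sin(t) * math.cos(t)
    raise ValueError("t outside [0, 50]")

assert u0_h(20.0) == 0.0  # the operator releases control on [15, 30)
```

The middle interval, where the operator sends no input, exercises Assumption 2's requirement that the leader remain stable under the operator's commands.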
+
+The communication graph is shown below
+
+
+Fig. 1: Communication graph.
+
+As shown in Fig. 1, it can be obtained that
+
+$$
+\mathcal{A} = \left\lbrack \begin{matrix} 0 & - 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{matrix}\right\rbrack ,\mathcal{L} = \left\lbrack \begin{matrix} 2 & 1 & - 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{matrix}\right\rbrack ,
+$$
+
+$\mathcal{B} = \operatorname{diag}\{ 1,0,0\} .$
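
The matrices above follow from $\mathcal{L} = \mathcal{D} - \mathcal{A}$ with ${d}_{i} = \mathop{\sum }\limits_{j}\left| {a}_{ij}\right|$ , which can be checked numerically:

```python
import numpy as np

A = np.array([[0, -1, 1],
              [0,  0, 0],
              [0,  0, 0]])            # signed adjacency matrix from Fig. 1
D = np.diag(np.abs(A).sum(axis=1))    # degree matrix, d_i = sum_j |a_ij|
L = D - A                             # signed Laplacian

assert (L == np.array([[2, 1, -1],
                       [0, 0,  0],
                       [0, 0,  0]])).all()
```

Note that the signed Laplacian uses absolute values in the degree matrix, so the negative edge $a_{12} = -1$ contributes $+1$ to $d_1$ but appears as $+1$ in the off-diagonal entry $\mathcal{L}_{12}$.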
+
+For the PT performance function, select ${\vartheta }_{{T}_{r}} = {0.06}$ and ${T}_{r} = 3\mathrm{\;s}$ . The initial states of the followers and the leader are presented in Table I.
+
+TABLE I: Initial state values of followers and leader.
+
+| State | $i = 0$ | $i = 1$ | $i = 2$ | $i = 3$ |
+| --- | --- | --- | --- | --- |
+| ${x}_{i,1}\left( 0\right)$ | 1 | 0.8 | 0.5 | 0.8 |
+| ${x}_{i,2}\left( 0\right)$ | -1 | 0.8 | -0.5 | -0.8 |
+For the unknown term ${\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right)$ , the FLS input is ${\mathcal{X}}_{i} = {\left\lbrack {x}_{i},{x}_{0}^{h},{\dot{x}}_{0}^{h},\vartheta ,\dot{\vartheta }\right\rbrack }^{T}$ , defined over $\left\lbrack {-6,6}\right\rbrack$ in each dimension. The centers ${\mathcal{X}}_{i}^{0}$ are distributed over this interval, and the Gaussian fuzzy basis functions are chosen as ${\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) = \exp \left( {-\frac{{\left( {\mathcal{X}}_{i} - {\mathcal{X}}_{i}^{0}\right) }^{T}\left( {{\mathcal{X}}_{i} - {\mathcal{X}}_{i}^{0}}\right) }{2}}\right)$ .
+
+For the unknown term ${\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right)$ , the FLS input is ${\mathcal{X}}_{i} = {\left\lbrack {x}_{i},{\varrho }_{i},{x}_{0}^{h},{\dot{x}}_{0}^{h},\vartheta ,\dot{\vartheta }\right\rbrack }^{T}$ , defined over $\left\lbrack {-6,6}\right\rbrack$ in each dimension. The centers ${\mathcal{X}}_{i}^{0}$ are distributed over this interval, and the fuzzy basis functions are chosen as ${\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) = \exp \left( {-\frac{{\left( {\mathcal{X}}_{i} - {\mathcal{X}}_{i}^{0}\right) }^{T}\left( {{\mathcal{X}}_{i} - {\mathcal{X}}_{i}^{0}}\right) }{2}}\right)$ .
+
+For the updating laws (19) and (20), ${\widehat{\omega }}_{{\mathcal{F}}_{1}}\left( 0\right) = {\widehat{\omega }}_{{\mathcal{F}}_{2}}\left( 0\right) = {\widehat{\omega }}_{{\mathcal{F}}_{3}}\left( 0\right) = {\left\lbrack {0.1}\right\rbrack }_{{12} \times 2}$ , ${\widehat{\omega }}_{{\mathcal{J}}_{1}}\left( 0\right) = {\widehat{\omega }}_{{\mathcal{J}}_{2}}\left( 0\right) = {\widehat{\omega }}_{{\mathcal{J}}_{3}}\left( 0\right) = {\left\lbrack {0.92}\right\rbrack }_{{12} \times 2}$ , ${\mathcal{C}}_{1} = \operatorname{diag}\{ {0.5},\cdots ,{0.5}\} \in {\mathbb{R}}^{{12} \times {12}}$ , ${\mathcal{C}}_{2} = \operatorname{diag}\{ {0.7},\cdots ,{0.7}\} \in {\mathbb{R}}^{{12} \times {12}}$ , ${\mathcal{C}}_{3} = \operatorname{diag}\{ {0.3},\cdots ,{0.3}\} \in {\mathbb{R}}^{{12} \times {12}}$ , and
+
+${\mathcal{R}}_{i} = \operatorname{diag}\{ {0.8},{0.8}\} ,{r}_{{\mathcal{F}}_{i}} = 2,{k}_{i} = {45},{r}_{{\mathcal{J}}_{i}} = 1.$
+
+
+Fig. 2: Curves of ${\widetilde{x}}_{i,1},{x}_{0,1}^{h}$ and $- {x}_{0,1}^{h}$ .
+
+
+Fig. 3: Curves of ${\widetilde{x}}_{i,2},{x}_{0,2}^{h}$ and $- {x}_{0,2}^{h}$ .
+
+
+Fig. 4: Curves of errors and performance bounds.
+
+
+Fig. 5: Curves of optimal control input.
+
+
+Fig. 6: Curves of $\begin{Vmatrix}{\omega }_{{\mathcal{F}}_{i}}\end{Vmatrix}$ .
+
+From Fig. 2 and Fig. 3, bipartite consensus is achieved: the leader and followers 1 and 2 belong to one group, while follower 3 converges to the other group with the opposite sign. Fig. 4 shows the bipartite consensus errors together with the PT performance bounds; the consensus error reaches the given accuracy of 0.06 within the prescribed time of ${3s}$ . The optimal control input of each agent is depicted in Fig. 5, in which ${u}_{i}$ rapidly converges to a small region around zero. The norms of the updated weights of the unknown terms ${\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right)$ are given in Fig. 6.
+
+§ VI. CONCLUSION
+
+In this article, the problem of performance-based HiTL optimal bipartite consensus control for nonlinear MASs has been studied. First, the MASs are monitored by a human operator who sends command signals to the non-autonomous leader to respond to emergencies and guarantee the safety of the MASs. Then, under the joint design of the prescribed-time performance function and the error transformation, a novel performance index function has been developed to achieve optimal bipartite consensus within a prescribed time. Subsequently, RL has been utilized to learn the solution to the HJB equation, with FLSs employed to implement the algorithm. The validity of the designed control scheme has been confirmed by simulation.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/Jz7lDfrb1j/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/Jz7lDfrb1j/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..8c333979fef3e325f01c9c5e9e6153dbcc496977
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/Jz7lDfrb1j/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,228 @@
+# Work in Progress: Enhancing Human-Robot Interaction through a Speech and Command Recognition System for a Service Robot Using ROS Melodic
+
+
+Luis Emiliano Rodríguez Raygoza
+
+Tecnologico de Monterrey
+
+School of Engineering and Sciences Monterrey, México
+
+a01252086@tec.mx
+
+Jorge De-J. Lozoya-Santos
+
+Tecnologico de Monterrey School of Engineering and Sciences Monterrey, México jorge.lozoya@tec.mx
+
+Luis C. Félix-Herrán
+
+Tecnologico de Monterrey
+
+School of Engineering and Sciences
+
+Monterrey, México
+
+lcfelix@tec.mx
+
+Juan C. Tudon-Martinez
+
+Tecnologico de Monterrey
+
+School of Engineering and Sciences
+
+Monterrey, México
+
+jc.tudon@tec.mx
+
+Abstract—This paper presents the development and evaluation of a Speech and Command Recognition system integrated into PiBot, an autonomous service robot developed at Tecnológico de Monterrey. The system runs on the Robot Operating System (ROS) Melodic framework on a Jetson TX2 embedded computer to enable natural language interaction through Automated Speech Recognition (ASR). The study focuses on the challenges and opportunities of implementing speech recognition in real-world environments, particularly on constrained hardware platforms. The system achieved a 25% Word Error Rate (WER) and a 73% Command Accuracy, with performance varying across different testing environments. Difficulties were noted in recognizing uncommon or non-Spanish words. A comparison with state-of-the-art models indicates room for improvement. Future work will focus on fine-tuning the model using datasets with ground-truth transcriptions to enhance reliability in complex, noise-prone settings.
+
+Index Terms-Automated Speech Recognition (ASR), Human-Robot Interaction (HRI), Service Robots, Command Detection, Embedded Systems
+
+## I. INTRODUCTION
+
+In recent years, robotics has made notable progress, with service robots becoming prominent solutions designed to communicate, interact, and assist customers [8]. As society moves toward greater automation, effective human-robot interaction is increasingly important. Among the key elements facilitating this interaction, speech algorithms are essential tools and widely used approaches in Human-Robot Interaction (HRI) [7]. Speech functions both as an input and an output in dialogue systems. As an input, it allows robots to recognize spoken language through Speech-to-Text (STT) or Automated Speech Recognition (ASR). As an output, speech synthesis converts textual responses into spoken language, enabling natural language interaction [1].
+
+This paper presents preliminary work focusing on the development and evaluation of these systems to identify areas for future improvement. The system is integrated into PiBot, an autonomous service robot developed at Tecnológico de Monterrey, with its design and development previously described [5]. PiBot's algorithms run within the Robot Operating System (ROS) Melodic framework on Ubuntu 18.04, utilizing the processing capabilities of a Jetson TX2 embedded computer. While this combination of software and hardware is functional, it presents limitations due to the constraints of embedded computer architecture, reliance on battery power, and limited availability of GPU-accelerated library versions. Additionally, the Jetson TX2, an older model, poses specific challenges impacting the system's performance and flexibility.
+
+This paper examines the challenges and opportunities of integrating speech recognition algorithms within a service robot, emphasizing the practical implementation and evaluation of these systems in real-world settings. Through experimentation and analysis, we aim to identify the strengths and limitations of current speech recognition technologies when deployed on constrained hardware platforms like the Jetson TX2, providing insights that may inform future enhancements in human-robot interaction in complex, noise-prone environments.
+
+The paper is organized as follows: Section II describes the system integration and technological configuration. Section III outlines the configuration and operation of the developed processing nodes, detailing their roles in the processing pipeline. Section IV discusses the testing and validation methodology used to evaluate the system. The validation results are presented in Section V, and conclusions are drawn in Section VI.
+
+## II. PIBOT's SPEECH RECOGNITION INTEGRATION
+
+The Speech and Command Recognition System significantly improves PiBot's interactive capabilities. This system, detailed in this section, enables verbal communication between humans and PiBot, extending interactions beyond the existing terminal and web interface, and it sets the stage for future voice-activated motion tasks and their respective algorithms. Integration began with the ReSpeaker Mic Array, a USB device that provides raw audio data and the relative direction of sound; the latter will be used to activate motion tasks through speech.
+
+The implementation of the Speech and Command Recognition System for PiBot is designed to enable it to respond to vocal instructions, a common interface method for HRI. The process begins by connecting the ReSpeaker Mic Array and setting up an inference node. This node is dedicated to pre-processing the audio signal, performing inference on the audio data, and post-processing the results to obtain interpreted speech. The system then uses score criteria to determine whether a command is present in the inferred text. If a command is detected, it is forwarded to a state machine to execute the appropriate task on PiBot.
+
+## A. Technological Framework and Adaptations
+
+This section describes the hardware and software architecture that integrates the Speech and Command Recognition system into PiBot. The integration is supported by a Jetson TX2 embedded computer running Ubuntu 18.04 with the JetPack 4.6.5 SDK and a ReSpeaker 2.0 Mic Array connected via USB for capturing audio input. The Jetson TX2 serves as the central processing unit, handling all computational tasks including audio inference, navigation, and sensor fusion. The JetPack SDK includes essential libraries, such as CUDA 10.2 and cuDNN 8.2.1, providing GPU acceleration to handle the demanding deep learning inference tasks required for real-time operation [4].
+
+The ROS Melodic framework provides a robust environment for developing modular nodes that handle specific tasks within the speech recognition pipeline. The Jetson TX2's GPU accelerates the inference process of the ASR model, enabling real-time speech processing. ROS topics and services are used for inter-node communication, allowing the ReSpeaker Node to publish audio data, the Inference Node to perform GPU-accelerated speech-to-text conversion, and the Command Detection Node to interpret commands. The State Machine Node orchestrates the execution of commands, leveraging ROS's actionlib for asynchronous task handling, which ensures that multiple actions can be managed concurrently.
+
+Due to the limitations of the Jetson TX2 hardware, several strategies were explored for configuring the necessary software environment. The Nvidia JetPack SDK is crucial for hardware-accelerated AI development, but due to compatibility constraints, the available machine learning frameworks are limited to older versions. Initially, a conda environment was considered for managing the library versions required for inference, but the lack of ARM-compatible versions proved to be a significant barrier. A compatible PyTorch Docker container was also investigated, offering GPU-accelerated support for speech recognition. Despite the potential, this approach faced practical challenges related to processing demands and frequent image deletions.
+
+Ultimately, we installed a specific version of PyTorch (provided by Nvidia) that works with CUDA 10.2, enabling us to perform GPU-accelerated inference for speech recognition tasks. This required transitioning from the torchaudio library to the librosa library for certain audio processing tasks, maintaining the same inference approach with some modifications. The difference in performance was significant: GPU-accelerated inference took 3-4 seconds, while CPU-based inference took approximately 55 seconds, emphasizing the necessity of GPU acceleration for achieving near real-time response. Figure X illustrates the hardware and software integration within PiBot, including the flow between ROS nodes, the Jetson TX2, and the ReSpeaker Mic Array.
+
+## III. OPERATION OF PROCESSING NODES
+
+This section details the setup and functionality of the processing nodes developed for the speech and command recognition system, highlighting their roles within the framework. The system handles audio input, speech recognition, command detection, and command execution through four distinct nodes. Each node operates within the processing pipeline, collectively ensuring the system's functionality. The initial node was adapted from an existing ROS Melodic package [2], which facilitates communication with the ReSpeaker 2.0 Mic Array. This array captures audio input and provides directional sound data using its quad-microphone setup. The directional information is intended for future enhancements, such as activating motion tasks based on speech direction.
+
+The second node, developed specifically for this implementation, handles inference. It receives audio segments, preprocesses the data to reduce background noise, performs GPU-accelerated inference using the jonatasgrosman/wav2vec2-large-xlsr-53-spanish model [3], and post-processes the results to identify keywords. The third node processes the inferred text to determine if it contains a command from a predefined set of keywords and thresholds. Finally, the fourth node functions as a state machine, waiting for commands and executing the corresponding tasks on PiBot.
+
+Figure 1 illustrates the interconnection of these nodes, the topics they broadcast, and the data types transmitted between them. This visual aid clarifies the communication flow and the sequential processing steps from one node to the next. Each node and the algorithms employed are further explained in the subsequent subsections.
+
+
+
+Fig. 1. Schematic representation of the audio processing framework, showcasing the workflow from audio capture to command execution. It begins with the ReSpeaker Node processing audio data, followed by the Inference Node for text inference and processing, and the Command Detection Node for command detection and selection. The State Machine Node completes the sequence by executing the corresponding actions.
+
+## A. Inference Node Configuration and Operation
+
+The Inference Node initiates the speech recognition process by handling audio segments, pre-processing them, performing inference, and converting the results into text. The node subscribes to the "/speech_audio" ROS topic, where audio segments are continuously published by the ReSpeaker node. Upon receiving an audio message, the data undergoes several processing steps before the inferred text is published to the "/audio_text_topic" as a String message for the next node.
+
+Audio processing begins with the initialization of necessary libraries. The 'rospy' library facilitates communication between ROS nodes, while 'librosa' and 'soundfile' are used for audio processing. Additional libraries support array manipulation, audio transformations, and machine learning tasks. The core of the inference task utilizes the wav2vec2-large-xlsr-53-spanish model, a speech recognition model already fine-tuned for Spanish on the Common Voice Corpus 6.1 dataset, which provides a diverse range of transcriptions. This model, sourced from Hugging Face [3], operates on .wav files sampled at 16,000 Hz. For our implementation, we leveraged this pre-existing fine-tuned model to process our collected audio data without further modification. Global variables are set during initialization, and the GPU device is configured. A similarity threshold is established for fuzzy word matching, and a spell checker and a set of keywords are initialized to address common inference errors.
+
+Once the model is ready, which takes approximately 20 seconds, the ROS node and subscriber are activated to listen for audio data on the "/speech_audio" topic. The main processing occurs in the callback function, which is triggered upon receiving an audio message. The audio data is converted to a .wav file and loaded into a GPU-compatible tensor. Preprocessing is performed using the 'noisereduce' library to minimize background noise, as illustrated in Figure 2, which shows the effects of noise reduction on different audio signals.
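The noise-reduction step can be illustrated with a simple spectral-gating sketch of the kind the 'noisereduce' library applies before inference. This is not the library's actual algorithm; the frame size, percentile noise floor, and attenuation factor below are invented for illustration:

```python
# Illustrative spectral gating: estimate a per-frequency noise floor from
# the signal's frames and attenuate spectral bins at or below that floor.
# NOT the noisereduce implementation -- a toy sketch of the idea only.
import numpy as np

def spectral_gate(signal, frame=256, floor_percentile=20, atten=0.1):
    # Split into non-overlapping frames (a real implementation would use
    # overlapping windows and overlap-add reconstruction).
    n = len(signal) // frame * frame
    frames = signal[:n].reshape(-1, frame)
    spec = np.fft.rfft(frames, axis=1)
    mag = np.abs(spec)
    # Noise floor per frequency bin, estimated from the quieter frames.
    noise_floor = np.percentile(mag, floor_percentile, axis=0)
    # Keep bins above the floor; strongly attenuate the rest.
    gain = np.where(mag > noise_floor, 1.0, atten)
    return np.fft.irfft(spec * gain, n=frame, axis=1).reshape(-1)
```

Because every gain is at most 1, the output signal's energy never exceeds the input's, which is the behavior visible in the noise-reduced waveforms of Figure 2.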
+
+The filtered audio is then processed by the Wav2Vec 2.0 model, which transcribes the spoken content into text. Post-processing involves mapping the inferred text to correct common transcription errors. This includes mapping terms like 'piot' to 'pibot', 'pivot' to 'pibot', and 'machin' to 'machine'. Additionally, the 'fuzzywuzzy' library performs word corrections based on the Levenshtein distance, allowing corrections above a similarity threshold of 70%. Keywords such as 'pibot', 'pibotino', and 'patrullar' are specifically targeted for this correction process. The refined text is then published to the "/audio_text_topic" ROS topic for the Command Detection Node to process.
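The post-processing step described above can be sketched as follows. The substitution map, target keywords, and 70% similarity threshold come from the text; the `levenshtein` and `ratio` helpers are hand-rolled stand-ins for the 'fuzzywuzzy' calls actually used:

```python
# Sketch of the Inference Node's post-processing: explicit substitutions
# plus fuzzy keyword snapping at a 70% similarity threshold.
SUBSTITUTIONS = {"piot": "pibot", "pivot": "pibot", "machin": "machine"}
KEYWORDS = ["pibot", "pibotino", "patrullar"]
THRESHOLD = 70  # percent similarity, as in the paper

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def ratio(a: str, b: str) -> float:
    """Similarity in percent, comparable in spirit to fuzzywuzzy's ratio()."""
    if not a and not b:
        return 100.0
    return 100.0 * (1 - levenshtein(a, b) / max(len(a), len(b)))

def postprocess(text: str) -> str:
    words = []
    for word in text.split():
        word = SUBSTITUTIONS.get(word, word)
        # Snap near-misses onto known keywords above the threshold.
        best = max(KEYWORDS, key=lambda k: ratio(word, k))
        if ratio(word, best) >= THRESHOLD:
            word = best
        words.append(word)
    return " ".join(words)
```

For example, `postprocess("oye piot patrullar")` yields `"oye pibot patrullar"`: the substitution map rewrites 'piot', while unrelated words like 'oye' fall below the threshold and pass through unchanged.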
+
+## B. Command Detection Node
+
+The Command Detection Node converts inferred text into detected commands to execute later. It subscribes to the "/audio_text_topic" ROS topic to receive text inputs from the Inference Node. Upon receiving a message, the node processes the text by tokenizing it into individual words for detailed analysis. Special attention is given to the keywords "pibot" and "pibotino," which identify the robot intended to receive the commands. Detecting these keywords ensures that only relevant commands are processed, filtering out unrelated speech.
+
+After recognizing the robot identifier, the node maps keywords associated with each potential command. These keywords are grouped by synonyms to improve the accuracy of the scoring mechanism, which determines the most likely intended command. This grouping prevents score inflation from repetitive similar words, ensuring a more accurate interpretation. The algorithm then evaluates the scores for each potential command against predefined thresholds. If a command's score exceeds its threshold and is the highest among the candidates, it is selected for execution. The selected command is published to the "/state_topic" ROS topic as a String message for the State Machine Node to execute relevant algorithms.
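A minimal sketch of this scoring mechanism follows. The paper does not give the exact scoring formula, so as an assumption a command's score is taken here to be the fraction of its synonym groups matched in the utterance; the keyword groups are abbreviated from Table I:

```python
# Sketch of the Command Detection Node's synonym-group scoring.
# Each group contributes at most once, preventing score inflation from
# repeated similar words; the highest score above its threshold wins.
COMMANDS = {
    # command: (synonym groups, threshold) -- a subset of Table I
    "talk_about_system": ([{"hablame", "cuentame", "explicame"},
                           {"ti"}, {"sobre"}], 0.2),
    "patrol": ([{"patrullaje", "patrullar", "vuelta"},
                {"empieza", "comienza", "ponte"}], 0.2),
    "stop_action": ([{"detente", "alto", "basta"}], 0.2),
}

def detect_command(text: str):
    words = set(text.split())
    # Only process utterances addressed to the robot.
    if not words & {"pibot", "pibotino"}:
        return None
    best_cmd, best_score = None, 0.0
    for cmd, (groups, threshold) in COMMANDS.items():
        matched = sum(1 for group in groups if group & words)
        score = matched / len(groups)
        if score >= threshold and score > best_score:
            best_cmd, best_score = cmd, score
    return best_cmd
```

Here `detect_command("oye pibot ponte a patrullar")` returns `"patrol"` (both of its groups match), while an utterance without 'pibot' or 'pibotino' is filtered out entirely.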
+
+TABLE I
+
+| Command | Intent | Keywords | Threshold |
+| --- | --- | --- | --- |
+| talk_about_system | For PiBot to play a series of audio files explaining itself, its capabilities, its components, and its features. | (hablame, háblame, cuéntame, cuentame, explicame, explícate), (ti), (sobre), (capacidades), (componentes) | 0.2 |
+| talk_about_machine_care | For PiBot to play a series of audio files explaining Machine Care, a strategic business partner in PiBot's development. | (hablame, háblame, cuentame, cuéntame, explicame), (machin, machine, care, quer) | 0.6 |
+| talk_about_event | For PiBot to play a series of audio files explaining the ENCLELAC event to which PiBot was invited. | (háblame, hablame, cuéntame, cuentame, explicame), (evento, clelac, conferencia, enclelac, claustro) | 0.6 |
+| come_towards_me | Future action intended to command PiBot to navigate towards the person closest to the sound direction. | (ven, vente, acercate, aproxima, aproximate), (aca, acá, aquí) | 0.6 |
+| patrol | Future action to start patrolling on PiBot, navigating autonomously through a set of predefined points. | (patrullaje, patrullar, vuelta), (empieza, comienza, ponte) | 0.2 |
+| look_at_me | Future action intended to command PiBot to rotate to face the sound source direction. | (voltea, volteame, observame, observa, boltea, mirame, mira), (empieza, comienza, ponte) | 0.3 |
+| stop_action | Some states are indefinite and are only stopped by this action; audio sequences can also be stopped with this command. | (detente, alto, parate, cancela, basta, termina, interrumpe, suspende, aborta) | 0.2 |
+| continue | Signals PiBot to continue with its last state, either resuming the last played audio or an indefinite task. | (continúa, continua, reanuda) | 1 |
+
+List of commands programmed in the speech recognition implementation for PiBot, giving the identifier, the intent of the corresponding state in the state machine, the keywords for each command, and the minimum score threshold that must be surpassed for the command to be considered a candidate.
+
+Table I lists the programmed commands, their intended actions, associated keywords, and the minimum score thresholds required for selection.
+
+## C. State Machine Node
+
+The State Machine Node manages the execution of tasks corresponding to received commands using the smach library. Each state within the state machine performs specific functions and makes decisions based on the commands received. The final state, FinishProgram, handles the concluding logic before terminating the program. Figure 3 depicts the structure of the state machine, which begins in the Initial Setup state and transitions to the Waiting Mode state once all necessary nodes are active. Upon receiving a command, the state machine transitions to the appropriate state to execute the corresponding task and then returns to Waiting Mode after completion or interruption. If the state machine is stopped, it moves to the End state.
+
+The Initial Setup state verifies that all required ROS nodes, specifically the Inference Node and Command Detection Node, are operational. This verification accounts for the approximately 20-second power-on time. Once confirmed, the state machine transitions to the Waiting Mode state, where it remains ready to receive and process commands from the "/state_topic" ROS topic.
+
+In the Waiting Mode state, the state machine continuously monitors for incoming commands. Upon receiving a command, it transitions to the corresponding state to execute the associated actions. All states responsible for providing explanations are fully operational, playing a series of audio files as intended. Users can interrupt these actions with the 'stop_action' command, which halts audio playback and records the last played audio. The 'continue' command also allows users to resume the last interrupted action. While commands such as 'patrol', 'come_towards_me', and 'look_at_me' are recognized and processed, the actions they trigger are scheduled for future development.
+
+Each state within the state machine is designed to handle specific functionalities, with clearly defined transitions ensuring a reliable and adaptable system. Audio feedback is integral to the state machine, enhancing user interaction by confirming received commands. When a command is successfully detected, the system plays a randomly selected confirmation audio from a pool of pre-recorded phrases. This approach confirms command recognition and adds variety to interactions, aiming to improve the user experience. Similarly, continuation commands trigger randomized audio feedback to inform users that the system has resumed its previous action.
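The waiting/interrupt/continue behavior described above can be sketched as a small pure-Python state machine. The real node is built with the smach library and follows Figure 3; the class below is only an illustrative simplification (state names mirror the text, implementation details are assumptions):

```python
# Illustrative sketch of PiBot's state machine logic: explanation states
# play audio, 'stop_action' interrupts and remembers the task, and
# 'continue' resumes it. The actual node uses the smach library.
class PiBotStateMachine:
    EXPLANATIONS = {"talk_about_system", "talk_about_machine_care",
                    "talk_about_event"}

    def __init__(self):
        self.state = "WaitingMode"   # after InitialSetup has verified nodes
        self.last_action = None      # remembered for the 'continue' command

    def on_command(self, command: str) -> str:
        if command == "stop_action":
            # Interrupt the running task and remember it for 'continue'.
            if self.state != "WaitingMode":
                self.last_action, self.state = self.state, "WaitingMode"
        elif command == "continue" and self.last_action:
            self.state = self.last_action
        elif command in self.EXPLANATIONS:
            self.state = command     # play the corresponding audio series
        return self.state
```

Issuing 'talk_about_system', then 'stop_action', then 'continue' walks the machine into the explanation state, back to Waiting Mode, and back into the interrupted explanation, matching the interaction flow described above.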
+
+## IV. TESTING AND VALIDATION OF SPEECH RECOGNITION IMPLEMENTATION
+
+Evaluating the performance of the speech recognition system implemented in PiBot is essential to ensure effective interaction and accurate command execution, and to highlight critical areas of opportunity for future work on this platform. This section details the testing methodology, including the audio samples, testing environments, and the metrics used to assess system reliability and accuracy. The testing was performed on a computer running ROS Melodic on Windows 11 through the Windows Subsystem for Linux with Ubuntu 18.04. The focus was on evaluating algorithmic accuracy and reliability, which are consistent across platforms.
+
+
+
+Fig. 2. Comparison of Audio Waveforms Across Various Scenarios (spoken text said is "oye pibot hablame de ti"): a) Original clear audio waveform, b) Noise-reduced waveform from the original clear audio, c) Original audio waveform with added background noise, d) Noise-reduced waveform of the audio with added background noise.
+
+
+
+Fig. 3. Illustration of the State Machine structure implemented on PiBot, to perform different algorithms based on the command received.
+
+| Desired Command | Transcription |
+| --- | --- |
+| look_at_me | oye pibot voltea |
+| talk_about_system | oye pibot hablame de ti |
+| talk_about_event | oye pibot hablame del evento |
+| talk_about_event | oye pibot hablame de clelac |
+| look_at_me | oye pibot voltea |
+| patrol | oye pibot ponte a patrullar |
+| talk_about_machine_care | oye pibot hablame de machine care |
+| come_towards_me | oye pibot ven para aca |
+| talk_about_system | oye pibot cuentame sobre ti |
+| talk_about_event | oye pibot que sabes del evento |
+| patrol | oye pibot comienza a patrullar |
+
+TABLE II
+
+LIST OF PHRASES WITH THE INTENDED COMMAND USED IN THE SPEECH RECOGNITION TESTING.
+
+## A. Recording and Preparation of Audio Data
+
+Audio recordings for this validation were captured using PiBot's ReSpeaker Mic Array. A Python script defined each audio segment's start and end times based on keyboard inputs. This resulted in individual .wav files named according to the spoken phrase and the recording location; all recordings were mono-channel with a sample rate of 16,000 Hz. Some recordings included English words such as "machine" and "care" to test the system's handling of multilingual inputs even when fine-tuned with a Spanish dataset. Table II lists the phrases and their corresponding intended commands. It is important to note that all commands are in Spanish, aligning with the system's target language.
+
+## B. Testing Environments
+
+The speech recognition system was tested in three different environments to evaluate its performance under varying noise conditions:
+
+- Office Floor: Recordings were made on the second floor of the CETEC tower at Tecnológico de Monterrey, on the Innovaction floor, where undergraduate students presented their final projects. This environment featured significant background noise due to multiple conversations and activities, providing a challenging setting for speech recognition.
+
+- Library: PiBot was positioned on the first floor of Tecnológico de Monterrey's library. Electrical escalators, nearby shops, and student activity contributed to background noise, testing the system's ability to function in a moderately noisy environment.
+
+- Laboratory: Recordings in the laboratory were conducted in a wide, open space with minimal background noise. This environment served as a control to assess the system's performance in ideal conditions.
+
+During testing, the speaker maintained a consistent speaking volume and pace. The impact of factors such as background noise, echo, and microphone distance were analyzed to understand their effects on WER and Command Accuracy.
+
+Figures 4, 5, and 6 show PiBot's placement in each of these environments.
+
+
+
+Fig. 4. PiBot located at Innovaction, where the set of phrases were recorded.
+
+
+
+Fig. 5. PiBot located at Tec's Library, where the set of phrases were recorded.
+
+## C. Evaluation Metrics
+
+Two primary metrics were used to evaluate the system's performance: Word Error Rate (WER) and Command Prediction Accuracy.
+
+1) Word Error Rate (WER): WER measures the difference between the recognized word sequence and the ground truth transcription by calculating the minimum number of substitutions, insertions, and deletions required to transform one sequence into the other. It is calculated using the following formula:
+
+$$
+\text{ Word Error Rate } = \frac{S + I + D}{N} \tag{1}
+$$
+
+- S (Substitutions): The number of words in the recognized transcription that differ from the ground truth.
+
+- I (Insertions): The number of additional words present in the recognized transcription that are not in the ground truth.
+
+- D (Deletions): The number of words from the ground truth that are missing in the recognized transcription.
+
+
+
+Fig. 6. PiBot located at Tec's Laboratory, where the set of phrases were recorded.
+
+- N (Number of words): The total number of words in the ground truth transcription.
+
+WER was calculated using the jiwer Python library, which compares the inferred transcription against the ground truth.
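Equation (1) amounts to a word-level edit distance divided by the reference length. A from-scratch sketch (in practice a dedicated Python library handles this):

```python
# Word Error Rate as in Eq. (1): minimum substitutions (S), insertions (I),
# and deletions (D) at the word level, divided by the reference length (N).
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                       # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                       # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            dp[i][j] = min(
                dp[i - 1][j] + 1,                              # deletion
                dp[i][j - 1] + 1,                              # insertion
                dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])  # substitution
            )
    return dp[-1][-1] / len(ref)
```

For instance, the reference "oye pibot comienza a patrullar" against the inferred "oye ven pibot comienza a patrullar" has one insertion over five reference words, giving a WER of 0.2, matching the corresponding row of Table V.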
+
+2) Command Prediction Accuracy: Command Prediction Accuracy evaluates whether the system correctly identifies and executes the intended command. This metric is binary: a score of 1 is assigned if the inferred command matches the ground truth command, and a score of 0 otherwise.
+
+## V. RESULTS
+
+The tests were designed to evaluate the performance of the speech recognition pipeline within operational scenarios, specifically examining how spoken words trigger commands in PiBot's state machine. These evaluations assess the system's reliability and identify potential areas for enhancement. The results are detailed in Table V, which includes each file's name, inferred text, selected command, Word Error Rate (WER), and command accuracy for each scenario. Additionally, Table III summarizes the metrics to provide an overview of overall averages and location-specific performance.
+
+| Location | Word Error Rate | Command Accuracy |
+| --- | --- | --- |
+| Library | 42% | 50% |
+| Laboratory | 19% | 70% |
+| Office | 13% | 100% |
+| Average | 25% | 73% |
+
+TABLE III
+
+AVERAGES OF WER AND COMMAND ACCURACY IN DIFFERENT LOCATIONS AND OVERALL.
+
+PiBot's speech recognition system achieved an overall command accuracy of 73% and a WER of 25%. Compared to advanced models such as OpenAI's Whisper, which achieves a WER below 9% and maintains performance in noisy environments [6], there is potential for further improvement in PiBot's system. The variation in WER and Command Accuracy across environments suggests that factors beyond general noise levels influence system performance. In the library, the open space and echoes likely contributed to the higher WER of 42% and lower Command Accuracy of 50%. Despite high background noise in the office, the confined space may have allowed the microphone array to better capture the speaker's voice, resulting in a lower WER of 13% and high Command Accuracy. The laboratory, enclosed by glass walls with an open ceiling, showed intermediate results. These findings suggest that environmental acoustics, such as echo and reverberation, and the directional characteristics of background noise significantly impact the system's effectiveness.
+
+## VI. Conclusions
+
+This study evaluated a speech recognition and command detection system for the PiBot platform, achieving an average Word Error Rate (WER) of 25% and a Command Accuracy of 73%. The system's performance varied across testing environments, with the library setting exhibiting the highest WER of 42% and the lowest Command Accuracy of 50%. Conversely, despite its high background noise, the office environment demonstrated a WER of 13% and a Command Accuracy of 100%. The laboratory environment, characterized by minimal background noise, showed a WER of 19% and a Command Accuracy of 70%.
+
+These results suggest that factors beyond the general noise level influence the system's performance. The unexpectedly high accuracy in the noisy office environment indicates that the system can perform well under heavy background noise, although the specific conditions that enable this remain unclear. In contrast, the library, the only open location, had only moderate noise levels yet the worst WER and Command Accuracy. Additionally, the system struggled to recognize certain words, particularly those that are uncommon or not Spanish, such as "Enclelac" and "Machine Care." This difficulty aligns with existing research indicating that proper nouns and less frequent terms are more susceptible to recognition errors in speech systems [1]. To address these cases, we implemented post-processing techniques in the Command Detection Node, specifically mapping commonly misrecognized words to the correct terms. Even "PiBot," the system's own name, proved especially difficult because of the wide range of inferred variants.
+
+However, due to the limitations of this initial setup and the lack of a dedicated dataset, these methods had limited effectiveness, particularly in complex acoustic environments. Future work should therefore focus on fine-tuning the model with a more comprehensive dataset covering the full range of words and phrases the system is expected to handle. This dataset should be captured using the ReSpeaker microphone array across various environments with different noise levels. Such customization will likely improve the model's ability to recognize specific terms and increase overall command accuracy, as the current lack of effective keyword detection penalizes short command phrases in particular.
+
+Furthermore, advancing the pre-processing stage, mainly through more effective noise reduction, could significantly increase the system's robustness and accuracy. Candidate techniques include libraries for spectral subtraction and deep learning-based noise suppression algorithms that preserve the speech content needed for recognition. These improvements are essential for reliable, practical real-world deployment of PiBot in varied and potentially challenging acoustic environments.
+
+## REFERENCES
+
+[1] C. Bartneck et al. Human-Robot Interaction: An Introduction. Cambridge University Press, 2020. ISBN: 9781108587303. URL: https://books.google.com.mx/books?id=YibUDwAAQBAJ.
+
+[2] Yuki Furuta. respeaker_ros: ROS Package for ReSpeaker Mic Array. https://github.com/jsk-ros-pkg/jsk_3rdparty/ tree/master/respeaker_ros. Apache License 2.0. 2023.
+
+[3] Jonatas Grosman. Fine-tuned XLSR-53 large model for speech recognition in Spanish. https://huggingface.co/ jonatasgrosman/wav2vec2-large-xlsr-53-spanish. 2021.
+
+[4] Nvidia Developer. JetPack SDK. https://developer.nvidia.com/embedded/jetpack. Online; accessed 8 July 2024.
+
+[5] Ricardo Osorio-Oliveros et al. "PiBOT: Design and Development of a Mobile Robotic Platform for COVID-19 Response". In: Lecture Notes in Networks and Systems 347 LNNS (2022), pp. 252-260. DOI: 10.1007/978-3-030-90033-5_27.
+
+[6] Alec Radford et al. Robust Speech Recognition via Large-Scale Weak Supervision. 2022. DOI: 10.48550/ARXIV. 2212.04356. URL: https://arxiv.org/abs/2212.04356.
+
+[7] Eduardo Benitez Sandoval, Scott Brown, and Mari Velonaki. "How the inclusion of design principles contribute to the development of social robots". In: 2018, pp. 535-538. DOI: 10.1145/3292147.3292239.
+
+[8] Jochen Wirtz et al. "Brave new world: service robots in the frontline". In: Journal of Service Management (2018). URL: https://api.semanticscholar.org/CorpusID: 62889871.
+
+| Filename | Inferred Text | Detected Command | WER | Command Accuracy |
+| --- | --- | --- | --- | --- |
+| comienza_patrullar.wav | oyemiotoveja patrullar | | 0.8 | 0 |
+| comienza_patrullar_laboratorio.wav | oye ven pibot comienza a patrullar | patrol | 0.2 | 1 |
+| comienza_patrullar_office.wav | oye pibot comienza a patrullar | patrol | 0 | 1 |
+| cuentame_laboratorio.wav | oye pibot cuentame sobre ti | talk_about_system | 0 | 1 |
+| cuentame_sobre_ti_biblio.wav | oye pibot cuentame sobre ti | talk_about_system | 0 | 1 |
+| cuentame_ti_office.wav | oye pibot cuentame sobre ti | talk_about_system | 0 | 1 |
+| hablameenclelac.wav | oye pibot hablame de | | 0.2 | 0 |
+| hablameenclelac_biblio.wav | eoundedte | | 1 | 0 |
+| hablameenclelac_office.wav | oye pibot hablame de enclelac | talk_about_event | 0 | 1 |
+| hablameevento_biblio.wav | oye pibot o aca del evento | talk_about_event | 0.4 | 1 |
+| hablameevento_office.wav | oye pibot hablame de evento | talk_about_event | 0.2 | 1 |
+| hablamedeeenclelac_laboratorio.wav | oye pibot hablame enclelac | talk_about_event | 0.2 | 1 |
+| hablamedelevento_laboratorio.wav | oye pibot hablame de evento | | 0.2 | 0 |
+| hablamedeti_biblio.wav | oye pibot hablame de ti | talk_about_system | 0 | 1 |
+| hablamedeti_laboratorio.wav | oye pibot hablame de ti | talk_about_system | 0 | 1 |
+| hablamedeti_office.wav | oye pibot hablame de ti | talk_about_system | 0 | 1 |
+| mc_biblio.wav | oye pibot hablame de aca | | 0.33 | 0 |
+| mc_laboratiorio.wav | oye pibot hablame de | | 0.33 | 0 |
+| mc_office.wav | oye pibot hablame de machin | talk_about_machine_care | 0.33 | 1 |
+| patrullar_laboratorio.wav | oye mibun patrullar | | 0.6 | 0 |
+| ponte_patrullar_biblio.wav | oye pibot bonda patrullar | patrol | 0.4 | 1 |
+| ponte_patrullar_office.wav | oye pibot aca patrullar | patrol | 0.4 | 1 |
+| sabes_evento_biblio.wav | oye pibot que moes de le ven | | 0.67 | 0 |
+| sabes_evento_laboratorio.wav | oye pibot que sabes de evento | talk_about_event | 0.17 | 1 |
+| sabes_evento_office.wav | oye pibot que sabes de evento | talk_about_event | 0.17 | 1 |
+| ven_aca_office.wav | oye pibot ven par aca | come_towards_me | 0.2 | 1 |
+| ven_biblio.wav | oye pibot clelac | talk_about_event | 0.6 | 0 |
+| ven_laboratorio.wav | oye pibot ven par aca | come_towards_me | 0.2 | 1 |
+| voltea_biblio.wav | oye pibot voltea | look_at_me | 0 | 1 |
+| voltea_laboratorio.wav | oye pibot voltea | look_at_me | 0 | 1 |
+| voltea_office.wav | oye pibot voltea | look_at_me | 0 | 1 |
+
+TABLE V
+
+LIST OF GENERATED AUDIO FILES WITH THEIR INFERRED TEXT, DETECTED COMMAND, WER, AND COMMAND ACCURACY METRICS.
+
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/Jz7lDfrb1j/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/Jz7lDfrb1j/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..bba31e3d31c83ca63a2c48848d99b0058ccb7a5a
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/Jz7lDfrb1j/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,288 @@
+§ WORK IN PROGRESS: ENHANCING HUMAN-ROBOT INTERACTION THROUGH A SPEECH AND COMMAND RECOGNITION SYSTEM FOR A SERVICE ROBOT USING ROS MELODIC
+
+
+Luis Emiliano Rodríguez Raygoza
+
+Tecnologico de Monterrey
+
+School of Engineering and Sciences
+
+Monterrey, México
+
+a01252086@tec.mx
+
+Jorge De-J. Lozoya-Santos
+
+Tecnologico de Monterrey
+
+School of Engineering and Sciences
+
+Monterrey, México
+
+jorge.lozoya@tec.mx
+
+Luis C. Félix-Herrán
+
+Tecnologico de Monterrey
+
+School of Engineering and Sciences
+
+Monterrey, México
+
+lcfelix@tec.mx
+
+Juan C. Tudon-Martinez
+
+Tecnologico de Monterrey
+
+School of Engineering and Sciences
+
+Monterrey, México
+
+jc.tudon@tec.mx
+
+Abstract-This paper presents the development and evaluation of a Speech and Command Recognition system integrated into PiBot, an autonomous service robot developed at Tecnológico de Monterrey. The system executes on the Robot Operating System (ROS) Melodic framework running on a Jetson TX2 embedded computer to enable natural language interaction through Automated Speech Recognition (ASR). The study focuses on the challenges and opportunities of implementing speech recognition in real-world environments, particularly within constrained hardware platforms. The system achieved a 25% Word Error Rate (WER) and a 73% Command Accuracy, with performance varying across different testing environments. Difficulties were noted in recognizing uncommon or non-Spanish words. A comparison with state-of-the-art models indicates room for improvement. Future work will focus on fine-tuning the model using datasets with ground truth transcriptions to enhance reliability in complex, noise-prone settings.
+
+Index Terms-Automated Speech Recognition (ASR), Human-Robot Interaction (HRI), Service Robots, Command Detection, Embedded Systems
+
+§ I. INTRODUCTION
+
+In recent years, robotics has made notable progress, with service robots becoming prominent solutions designed to communicate, interact, and assist customers [8]. As society moves toward greater automation, effective human-robot interaction is increasingly important. Among the key elements facilitating this interaction, speech algorithms are essential tools and widely used approaches in Human-Robot Interaction (HRI) [7]. Speech functions both as an input and an output in dialogue systems. As an input, it allows robots to recognize spoken language through Speech-to-Text (STT) or Automated Speech Recognition (ASR). As an output, speech synthesis converts textual responses into spoken language, enabling natural language interaction [1].
+
+This paper presents preliminary work focusing on the development and evaluation of these systems to identify areas for future improvement. The system is integrated into PiBot, an autonomous service robot developed at Tecnológico de Monterrey, with its design and development previously described [5]. PiBot's algorithms run within the Robot Operating System (ROS) Melodic framework on Ubuntu 18.04, utilizing the processing capabilities of a Jetson TX2 embedded computer. While this combination of software and hardware is functional, it presents limitations due to the constraints of embedded computer architecture, reliance on battery power, and limited availability of GPU-accelerated library versions. Additionally, the Jetson TX2, an older model, poses specific challenges impacting the system's performance and flexibility.
+
+This paper examines the challenges and opportunities of integrating speech recognition algorithms within a service robot, emphasizing the practical implementation and evaluation of these systems in real-world settings. Through experimentation and analysis, we aim to identify the strengths and limitations of current speech recognition technologies when deployed on constrained hardware platforms like the Jetson TX2, providing insights that may inform future enhancements in human-robot interaction in complex, noise-prone environments.
+
+The paper is organized as follows: Section II describes the system integration and technological configuration. Section III outlines the configuration and operation of the developed processing nodes, detailing their roles in the processing pipeline. Section IV discusses the testing and validation methodology used to evaluate the system. The validation results are presented in Section V, and conclusions are drawn in Section VI.
+
+§ II. PIBOT'S SPEECH RECOGNITION INTEGRATION
+
+The Speech and Command Recognition System significantly enhances PiBot's interactive capabilities. This system, detailed in this section, facilitates verbal communication between humans and PiBot, extending interactions beyond the existing terminal and web interface, and sets the stage for future voice-activated motion tasks with their respective algorithms. Integration began with the ReSpeaker Mic Array, a USB-connected device that provides raw audio data and the relative direction of sound; the latter will be used to activate motion tasks through speech.
+
+The implementation of the Speech and Command Recognition System for PiBot is designed to enable it to respond to vocal instructions, a common interface method for HRI. The process begins by connecting the ReSpeaker Mic Array and setting up an inference node. This node is dedicated to pre-processing the audio signal, performing inference on the audio data, and post-processing the results to obtain interpreted speech. The system then uses score criteria to determine whether a command is present in the inferred text. If a command is detected, it is forwarded to a state machine to execute the appropriate task on PiBot.
+
+§ A. TECHNOLOGICAL FRAMEWORK AND ADAPTATIONS
+
+This section describes the hardware and software architecture that integrates the Speech and Command Recognition system into PiBot. The integration is supported by a Jetson TX2 embedded computer running Ubuntu 18.04 with the JetPack 4.6.5 SDK and a ReSpeaker 2.0 Mic Array connected via USB for capturing audio input. The Jetson TX2 serves as the central processing unit, handling all computational tasks including audio inference, navigation, and sensor fusion. The JetPack SDK includes essential libraries, such as CUDA 10.2 and cuDNN 8.2.1, providing GPU acceleration to handle the demanding deep learning inference tasks required for real-time operation [4].
+
+The ROS Melodic framework provides a robust environment for developing modular nodes that handle specific tasks within the speech recognition pipeline. The Jetson TX2's GPU accelerates the inference process of the ASR model, enabling real-time speech processing. ROS topics and services are used for inter-node communication, allowing the ReSpeaker Node to publish audio data, the Inference Node to perform GPU-accelerated speech-to-text conversion, and the Command Detection Node to interpret commands. The State Machine Node orchestrates the execution of commands, leveraging ROS's actionlib for asynchronous task handling, which ensures that multiple actions can be managed concurrently.
+
+Due to the limitations of the Jetson TX2 hardware, several strategies were explored for configuring the necessary software environment. The Nvidia JetPack SDK is crucial for hardware-accelerated AI development, but due to compatibility constraints, the available machine learning frameworks are limited to older versions. Initially, a conda environment was considered for managing the library versions required for inference, but the lack of ARM-compatible versions proved to be a significant barrier. A compatible PyTorch Docker container was also investigated, offering GPU-accelerated support for speech recognition. Despite its potential, this approach faced practical challenges related to processing demands and frequent image deletions.
+
+Ultimately, we installed a specific version of PyTorch (provided by Nvidia) that works with CUDA 10.2, enabling us to perform GPU-accelerated inference for speech recognition tasks. This required transitioning from the torchaudio library to the librosa library for certain audio processing tasks, maintaining the same inference approach with some modifications. The difference in performance was significant: GPU-accelerated inference took 3-4 seconds, while CPU-based inference took approximately 55 seconds, emphasizing the necessity of GPU acceleration for achieving near real-time response. Figure X illustrates the hardware and software integration within PiBot, including the flow between ROS nodes, the Jetson TX2, and the ReSpeaker Mic Array.
+
+§ III. OPERATION OF PROCESSING NODES
+
+This section details the setup and functionality of the processing nodes developed for the speech and command recognition system, highlighting their roles within the framework. The system handles audio input, speech recognition, command detection, and command execution through four distinct nodes. Each node operates within the processing pipeline, collectively ensuring the system's functionality. The initial node was adapted from an existing ROS Melodic package [2], which facilitates communication with the ReSpeaker 2.0 Mic Array. This array captures audio input and provides directional sound data using its quad-microphone setup. The directional information is intended for future enhancements, such as activating motion tasks based on speech direction.
+
+The second node, developed specifically for this implementation, handles inference. It receives audio segments, preprocesses the data to reduce background noise, performs GPU-accelerated inference using the jonatasgrosman/wav2vec2-large-xlsr-53-spanish model [3], and post-processes the results to identify keywords. The third node processes the inferred text to determine if it contains a command from a predefined set of keywords and thresholds. Finally, the fourth node functions as a state machine, waiting for commands and executing the corresponding tasks on PiBot.
+
+Figure 1 illustrates the interconnection of these nodes, the topics they broadcast, and the data types transmitted between them. This visual aid clarifies the communication flow and the sequential processing steps from one node to the next. Each node and the algorithms employed are further explained in the subsequent subsections.
+
+
+Fig. 1. Schematic representation of the audio processing framework, showcasing the workflow from audio capture to command execution. It begins with the ReSpeaker Node processing audio data, followed by the Inference Node for text inference and processing, and the Command Detection Node for command detection and selection. The State Machine Node completes the sequence by executing the corresponding actions.
+
+§ A. INFERENCE NODE CONFIGURATION AND OPERATION
+
+The Inference Node initiates the speech recognition process by handling audio segments, pre-processing them, performing inference, and converting the results into text. The node subscribes to the "/speech_audio" ROS topic, where audio segments are continuously published by the ReSpeaker node. Upon receiving an audio message, the data undergoes several processing steps before the inferred text is published to the "/audio_text_topic" as a String message for the next node.
+
+Audio processing begins with the initialization of necessary libraries. The 'rospy' library facilitates communication between ROS nodes, while 'librosa' and 'soundfile' are used for audio processing. Additional libraries support array manipulation, audio transformations, and machine learning tasks. The core of the inference task utilizes the wav2vec2-large-xlsr-53-spanish model, sourced from Hugging Face [3], a speech recognition model fine-tuned for Spanish on the Common Voice Corpus 6.1 dataset, which provides a diverse range of transcriptions. The model operates on .wav files sampled at 16,000 Hz. For our implementation, we leveraged this pre-existing fine-tuned model to process our collected audio data without further modification. Global variables are set during initialization, and the GPU device is configured. A similarity threshold is established for fuzzy word matching, and a spell checker and a set of keywords are initialized to address common inference errors.
+
+Once the model is ready, which takes approximately 20 seconds, the ROS node and subscriber are activated to listen for audio data on the "/speech_audio" topic. The main processing occurs in the callback function, which is triggered upon receiving an audio message. The audio data is converted to a .wav file and loaded into a GPU-compatible tensor. Preprocessing is performed using the 'noisereduce' library to minimize background noise, as illustrated in Figure 2, which shows the effects of noise reduction on different audio signals.
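To make the pre-processing step concrete, the toy sketch below illustrates the basic idea behind spectral-subtraction noise reduction on a synthetic signal. It is an illustrative stand-in for the 'noisereduce' library used on the robot, not its actual algorithm; all signal parameters (sampling rate, tone frequency, noise level) are invented for the example.

```python
import numpy as np

# Toy spectral-subtraction sketch: estimate the noise magnitude spectrum from
# a noise-only segment and subtract it frame by frame, flooring at zero.
def spectral_subtract(signal, noise_sample, frame=256):
    noise_mag = np.abs(np.fft.rfft(noise_sample[:frame]))    # noise estimate
    out = np.zeros_like(signal)
    for start in range(0, len(signal) - frame + 1, frame):
        spec = np.fft.rfft(signal[start:start + frame])
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)      # subtract, floor at 0
        out[start:start + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)))
    return out

rng = np.random.default_rng(0)
fs, n = 4096, 4096
t = np.arange(n) / fs
clean = np.sin(2 * np.pi * 256 * t)            # stand-in for a speech signal
noise = 0.3 * rng.standard_normal(n)
denoised = spectral_subtract(clean + noise, noise)
```

The subtraction keeps the noisy phase and only attenuates magnitudes, which is why residual distortion remains; more capable methods (such as those in 'noisereduce') gate the spectrum adaptively over time.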
+
+The filtered audio is then processed by the Wav2Vec 2.0 model, which transcribes the spoken content into text. Post-processing involves mapping the inferred text to correct common transcription errors. This includes mapping terms like 'piot' to 'pibot', 'pivot' to 'pibot', and 'machin' to 'machine'. Additionally, the 'fuzzywuzzy' library performs word corrections based on the Levenshtein distance, allowing for corrections with a similarity threshold of 70%. Keywords such as 'pibot', 'pibotino', and 'patrullar' are specifically targeted for this correction process. The refined text is then published to the "/audio_text_topic" ROS topic for the Command Detection Node to process.
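The mapping and fuzzy-correction step can be sketched as follows. This is an illustrative reimplementation, not the node's exact code: Python's standard difflib similarity ratio stands in for fuzzywuzzy's Levenshtein-based score, and the error map and keyword list are abridged from those described above.

```python
import difflib

# Illustrative post-processing sketch: apply a direct mapping for common
# transcription errors, then snap near-miss words to known keywords when
# similarity exceeds the 70% threshold.
ERROR_MAP = {"piot": "pibot", "pivot": "pibot", "machin": "machine"}
KEYWORDS = ["pibot", "pibotino", "patrullar"]

def correct_text(text, threshold=0.70):
    corrected = []
    for word in text.lower().split():
        word = ERROR_MAP.get(word, word)              # fix known error patterns
        match = difflib.get_close_matches(word, KEYWORDS, n=1, cutoff=threshold)
        corrected.append(match[0] if match else word)
    return " ".join(corrected)

print(correct_text("oye pivot ponte a patruyar"))     # -> oye pibot ponte a patrullar
```

Snapping only a small, targeted keyword list keeps the correction from rewriting ordinary Spanish words that happen to resemble a command term.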
+
+§ B. COMMAND DETECTION NODE
+
+The Command Detection Node converts inferred text into detected commands to execute later. It subscribes to the "/audio_text_topic" ROS topic to receive text inputs from the Inference Node. Upon receiving a message, the node processes the text by tokenizing it into individual words for detailed analysis. Special attention is given to the keywords "pibot" and "pibotino," which identify the robot intended to receive the commands. Detecting these keywords ensures that only relevant commands are processed, filtering out unrelated speech.
+
+After recognizing the robot identifier, the node maps keywords associated with each potential command. These keywords are grouped by synonyms to improve the accuracy of the scoring mechanism, which determines the most likely intended command. This grouping prevents score inflation from repetitive similar words, ensuring a more accurate interpretation. The algorithm then evaluates the scores for each potential command against predefined thresholds. If a command's score exceeds its threshold and is the highest among the candidates, it is selected for execution. The selected command is published to the "/state_topic" ROS topic as a String message for the State Machine Node to execute relevant algorithms.
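A minimal sketch of this scoring mechanism is shown below. The keyword groups and thresholds are abridged examples (two commands only), not the full configuration of Table I; the grouping of synonyms is what prevents repeated similar words from inflating a command's score.

```python
# Illustrative command-scoring sketch: each synonym group contributes at most
# once, the score is the fraction of groups matched, and the best-scoring
# command is returned only if it clears its threshold.
COMMANDS = {
    "talk_about_system": {"groups": [{"hablame", "cuentame"}, {"ti"}],
                          "threshold": 0.2},
    "patrol": {"groups": [{"patrullar", "patrullaje"}, {"ponte", "comienza"}],
               "threshold": 0.2},
}

def detect_command(text):
    words = set(text.lower().split())
    if not words & {"pibot", "pibotino"}:      # only react when addressed
        return None
    best, best_score = None, 0.0
    for name, spec in COMMANDS.items():
        hits = sum(1 for group in spec["groups"] if words & group)
        score = hits / len(spec["groups"])     # fraction of groups matched
        if score >= spec["threshold"] and score > best_score:
            best, best_score = name, score
    return best

print(detect_command("oye pibot ponte a patrullar"))   # -> patrol
```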
+
+TABLE I
+
+| Command | Intent | Keywords | Threshold |
+| --- | --- | --- | --- |
+| talk_about_system | For PiBot to play a series of audio files explaining itself, its capabilities, its components, and its features. | (hablame, háblame, cuéntame, cuentame, explicame, explícate), (ti), (sobre), (capacidades), (componentes) | 0.2 |
+| talk_about_machine_care | For PiBot to play a series of audio files explaining Machine Care, a strategic business partner for PiBot development. | (hablame, háblame, cuentame, cuéntame, explicame), (machin, machine, care, quer) | 0.6 |
+| talk_about_event | For PiBot to play a series of audio files explaining the ENCLELAC event, which invited PiBot to attend. | (háblame, hablame, cuéntame, cuentame, explicame), (evento, clelac, conferencia, enclelac, claustro) | 0.6 |
+| come_towards_me | Future action intended to command PiBot to navigate towards the person closest to the sound direction. | (ven, vente, acercate, aproxima, aproximate), (aca, acá, aquí) | 0.6 |
+| patrol | Future action to start patrolling on PiBot, navigating autonomously through a set of predefined points. | (patrullaje, patrullar, vuelta), (empieza, comienza, ponte) | 0.2 |
+| look_at_me | Future action intended to command PiBot to rotate to face the sound direction source. | (voltea, volteame, observame, observa, boltea, mirame, mira), (empieza, comienza, ponte) | 0.3 |
+| stop_action | Stops indefinite states, which end only through this action; audio sequences can also be stopped with this command. | (detente, alto, parate, cancela, basta, termina, interrumpe, suspende, aborta) | 0.2 |
+| continue | Signals PiBot to continue with its last state, either resuming the last played audio or continuing an indefinite task. | (continúa, continua, reanuda) | 1 |
+
+LIST OF COMMANDS PROGRAMMED IN THE SPEECH RECOGNITION IMPLEMENTATION FOR PIBOT. THE LIST PRESENTS THE IDENTIFIER, THE INTENT OF EACH STATE IN THE STATE MACHINE, THE KEYWORDS FOR EACH COMMAND, AND THE MINIMUM SCORE THRESHOLD TO SURPASS TO BE CONSIDERED A POSSIBLE CANDIDATE.
+
+Table I lists the programmed commands, their intended actions, associated keywords, and the minimum score thresholds required for selection.
+
+§ C. STATE MACHINE NODE
+
+The State Machine Node manages the execution of tasks corresponding to received commands using the smach library. Each state within the state machine performs specific functions and makes decisions based on the commands received. The final state, FinishProgram, handles the concluding logic before terminating the program. Figure 3 depicts the structure of the state machine, which begins in the Initial Setup state and transitions to the Waiting Mode state once all necessary nodes are active. Upon receiving a command, the state machine transitions to the appropriate state to execute the corresponding task and then returns to Waiting Mode after completion or interruption. If the state machine is stopped, it moves to the End state.
+
+The Initial Setup state verifies that all required ROS nodes, specifically the Inference Node and Command Detection Node, are operational. This verification accounts for the approximately 20-second power-on time. Once confirmed, the state machine transitions to the Waiting Mode state, where it remains ready to receive and process commands from the "/state_topic" ROS topic.
+
+In the Waiting Mode state, the state machine continuously monitors for incoming commands. Upon receiving a command, it transitions to the corresponding state to execute the associated actions. All states responsible for providing explanations are fully operational, playing a series of audio files as intended. Users can interrupt these actions with the 'stop_action' command, which halts audio playback and records the last played audio. The 'continue' command also allows users to resume the last interrupted action. While commands such as 'patrol', 'come_towards_me', and 'look_at_me' are recognized and processed, the actions they trigger are scheduled for future development.
+
+Each state within the state machine is designed to handle specific functionalities, with clearly defined transitions ensuring a reliable and adaptable system. Audio feedback is integral to the state machine, enhancing user interaction by confirming received commands. When a command is successfully detected, the system plays a randomly selected confirmation audio from a pool of pre-recorded phrases. This approach confirms command recognition and adds variety to interactions, aiming to improve the user experience. Similarly, continuation commands trigger randomized audio feedback to inform users that the system has resumed its previous action.
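The waiting/interrupt/continue behaviour described above can be sketched with a toy state machine. This is illustrative pure Python rather than the smach-based node, and the confirmation phrases are invented placeholders for the pre-recorded audio pool.

```python
import random

# Toy sketch of the waiting / interrupt / continue loop.
CONFIRMATIONS = ["Entendido", "Claro", "En seguida"]   # placeholder phrases

class ToyStateMachine:
    def __init__(self):
        self.state = "WAITING"        # idle, monitoring incoming commands
        self.last_action = None       # remembered for the 'continue' command

    def on_command(self, command):
        if command == "stop_action":
            self.last_action, self.state = self.state, "WAITING"
        elif command == "continue" and self.last_action:
            self.state = self.last_action
        else:
            self.state = command
            print(random.choice(CONFIRMATIONS))   # stand-in for audio feedback
        return self.state

sm = ToyStateMachine()
sm.on_command("talk_about_system")    # start playing explanation audio
sm.on_command("stop_action")          # interrupt; remember what was playing
print(sm.on_command("continue"))      # -> talk_about_system
```

The randomized confirmation choice mirrors the design above: drawing from a pool of phrases both confirms recognition and keeps interactions from sounding repetitive.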
+
+§ IV. TESTING AND VALIDATION OF SPEECH RECOGNITION IMPLEMENTATION
+
+Evaluating the performance of the speech recognition system implemented in PiBot is essential to ensure effective interaction and accurate command execution, and to highlight critical areas of opportunity for future work on this platform. This section details the testing methodology, including the audio samples, testing environments, and the metrics used to assess system reliability and accuracy. The testing was performed on a computer running ROS Melodic on Windows 11 through the Windows Subsystem for Linux with Ubuntu 18.04; the focus was on algorithmic accuracy and reliability, which are consistent across platforms.
+
+
+Fig. 2. Comparison of Audio Waveforms Across Various Scenarios (the spoken phrase is "oye pibot hablame de ti"): a) Original clear audio waveform, b) Noise-reduced waveform from the original clear audio, c) Original audio waveform with added background noise, d) Noise-reduced waveform of the audio with added background noise.
+
+
+Fig. 3. Illustration of the State Machine structure implemented on PiBot, which executes different algorithms based on the received command.
+
+| Desired Command | Transcription |
+| --- | --- |
+| look_at_me | oye pibot voltea |
+| talk_about_system | oye pibot hablame de ti |
+| talk_about_event | oye pibot hablame del evento |
+| talk_about_event | oye pibot hablame de clelac |
+| look_at_me | oye pibot voltea |
+| patrol | oye pibot ponte a patrullar |
+| talk_about_machine_care | oye pibot hablame de machine care |
+| come_towards_me | oye pibot ven para aca |
+| talk_about_system | oye pibot cuentame sobre ti |
+| talk_about_event | oye pibot que sabes del evento |
+| patrol | oye pibot comienza a patrullar |
+TABLE II
+
+LIST OF PHRASES WITH THE INTENDED COMMAND USED IN THE SPEECH RECOGNITION TESTING.
+
+§ A. RECORDING AND PREPARATION OF AUDIO DATA
+
+Audio recordings for this validation were captured using PiBot's ReSpeaker Mic Array. A Python script defined each audio segment's start and end times based on keyboard inputs. This resulted in individual .wav files named according to the spoken phrase and the recording location; all recordings were mono-channel with a sample rate of 16,000 Hz. Some recordings included English words such as "machine" and "care" to test the system's handling of multilingual inputs even when fine-tuned with a Spanish dataset. Table II lists the phrases and their corresponding intended commands. It is important to note that all commands are in Spanish, aligning with the system's target language.
+
+§ B. TESTING ENVIRONMENTS
+
+The speech recognition system was tested in three different environments to evaluate its performance under varying noise conditions:
+
+ * Office Floor: Recordings were made on the second floor of the CETEC tower at Tecnológico de Monterrey's Innovaction floor, where undergraduate students presented their final projects. This environment featured significant background noise due to multiple conversations and activities, providing a challenging setting for speech recognition.
+
+ * Library: PiBot was positioned on the first floor of Tecnológico de Monterrey's library. Electrical escalators, nearby shops, and student activity contributed to background noise, testing the system's ability to function in a moderately noisy environment.
+
+ * Laboratory: Recordings in the laboratory were conducted in a wide, open space with minimal background noise. This environment served as a control to assess the system's performance in ideal conditions.
+
+During testing, the speaker maintained a consistent speaking volume and pace. The impact of factors such as background noise, echo, and microphone distance were analyzed to understand their effects on WER and Command Accuracy.
+
+Figures 4, 5, and 6 show PiBot's placement in each of these environments.
+
+
+Fig. 4. PiBot located at Innovaction, where the set of phrases was recorded.
+
+
+Fig. 5. PiBot located at Tec's Library, where the set of phrases was recorded.
+
+§ C. EVALUATION METRICS
+
+Two primary metrics were used to evaluate the system's performance: Word Error Rate (WER) and Command Prediction Accuracy.
+
+1) Word Error Rate (WER): WER measures the difference between the recognized word sequence and the ground truth transcription by calculating the minimum number of substitutions, insertions, and deletions required to transform one sequence into the other. It is calculated using the following formula:
+
+$$
+\text{ Word Error Rate } = \frac{S + I + D}{N} \tag{1}
+$$
+
+ * S (Substitutions): The number of words in the recognized transcription that differ from the ground truth.
+
+ * I (Insertions): The number of additional words present in the recognized transcription that are not in the ground truth.
+
+ * D (Deletions): The number of words from the ground truth that are missing in the recognized transcription.
+
+
+Fig. 6. PiBot located at Tec's Laboratory, where the set of phrases was recorded.
+
+ * N (Number of words): The total number of words in the ground truth transcription.
+
+WER was calculated using the jiwer Python library, which compares the inferred transcription against the ground truth.
+
+2) Command Prediction Accuracy: Command Prediction Accuracy evaluates whether the system correctly identifies and executes the intended command. This metric is binary: a score of 1 is assigned if the inferred command matches the ground truth command, and a score of 0 otherwise.
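For reference, Eq. (1) can be computed directly with a word-level edit distance, as in this minimal sketch (equivalent to a library-based WER for plain, pre-normalized transcripts):

```python
# Minimal WER sketch: the minimum number of substitutions, insertions, and
# deletions turning the hypothesis into the reference, divided by the number
# of reference words, via dynamic programming.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j]: edits turning the first i reference words into the first j
    # hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                                # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                                # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + sub)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("oye pibot hablame de ti", "ove pibot hablame de ti"))  # -> 0.2
```

One substituted word out of five reference words yields exactly the 0.2 WER reported for several files in Table V.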
+
+§ V. RESULTS
+
+The tests were designed to evaluate the performance of the speech recognition pipeline within operational scenarios, specifically examining how spoken words trigger commands in PiBot's state machine. These evaluations assess the system's reliability and identify potential areas for enhancement. The results are detailed in Table V, which includes each file's name, inferred text, selected command, Word Error Rate (WER), and command accuracy for each scenario. Additionally, Table III summarizes the metrics to provide an overview of overall averages and location-specific performance.
+
+| Location | Word Error Rate | Command Accuracy |
+| --- | --- | --- |
+| Library | 42% | 50% |
+| Laboratory | 19% | 70% |
+| Office | 13% | 100% |
+| Average | 25% | 73% |
+
+TABLE III
+
+AVERAGE WER AND COMMAND ACCURACY PER LOCATION AND OVERALL.
+
+PiBot's speech recognition system achieved an overall command accuracy of 73% and a WER of 25%. Compared to advanced models such as OpenAI's Whisper, which achieves a WER below 9% and maintains performance in noisy environments [6], there is potential for further improvement in PiBot's system. The variation in WER and Command Accuracy across different environments suggests that factors beyond general noise levels influence system performance. In the library, the open space and echoes likely contributed to a higher WER of 42% and lower Command Accuracy of 50%. Despite high background noise in the office, the confined space may have allowed the microphone array to better capture the speaker's voice, resulting in a lower WER of 13% and high Command Accuracy. The laboratory, covered by glass walls and an open ceiling, showed intermediate results. These findings could indicate that environmental acoustics, such as echo and reverberation, and the directional characteristics of background noise, significantly impact the system's effectiveness.
+
+§ VI. CONCLUSIONS
+
+This study evaluated a speech recognition and command detection system for the PiBot platform, achieving an average Word Error Rate (WER) of 25% and a Command Accuracy of 73%. The system's performance varied across the testing environments, with the library setting exhibiting the highest WER of 42% and the lowest Command Accuracy of 50%. Conversely, despite its high background noise, the office environment demonstrated a WER of 13% and a Command Accuracy of 100%. The laboratory environment, characterized by minimal background noise, showed a WER of 19% and a Command Accuracy of 70%.
+
+These results might indicate that factors beyond the general noise level influence the system's performance. The unexpectedly high accuracy in the noisy office environment suggests that the system can perform well under high background noise, although the specific contributing conditions remain unclear. In contrast, the library, the only open location, showed degraded WER and Command Accuracy despite only moderate noise levels. Additionally, the system faced challenges in recognizing certain words, particularly those uncommon or not in Spanish, such as "Enclelac" and "Machine Care." This difficulty aligns with existing research, which indicates that proper nouns and less frequent terms are more susceptible to recognition errors in speech systems [1]. To address these difficulties, we implemented post-processing techniques in the Command Detection Node, specifically mapping commonly misrecognized words to the correct terms. Although "PiBot", the system's name, appeared frequently, it was especially difficult to recognize because of the wide range of inferred variants.
+
+However, due to the limitations of this initial setup and the lack of a dedicated dataset, these methods had limited effectiveness, particularly in complex acoustic environments. To improve this, future work should focus on fine-tuning the model using a more comprehensive dataset that includes a wide range of the words and phrases the system is expected to handle. This dataset should be captured using the ReSpeaker microphone array across various environments with different noise levels. Such customization will likely enhance the model's ability to recognize specific terms and increase overall command accuracy, as the current lack of effective keyword detection negatively impacts the performance of short command phrases.
+
+Furthermore, advancing pre-processing techniques, mainly through more effective noise reduction methods, could significantly increase the system's robustness and accuracy. Candidate techniques include spectral subtraction and deep learning-based noise suppression algorithms that do not degrade speech recognition performance. These improvements are essential for ensuring reliable and practical real-world applications of PiBot in varied and potentially challenging acoustic environments.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/NKhQ1UEQFb/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/NKhQ1UEQFb/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..b2307313eb8ca3779a910bbb09fb6b754e5ebb2e
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/NKhQ1UEQFb/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,601 @@
+# Safety-critical Obstacle Avoidance Control of Autonomous Surface Vehicles with Uncertainties and Disturbances
+
+1st Gege Dong
+
+College of Marine Electrical Engineering
+
+Dalian Maritime University
+
+Dalian, China
+
+donggege0507@163.com
+
+2nd Li-Ying Hao*
+
+College of Marine Electrical Engineering
+
+Dalian Maritime University
+
+Dalian, China
+
+haoliying_0305@163.com
+
+Abstract-This paper proposes a safety-critical obstacle avoidance control approach for autonomous surface vehicles (ASVs) with disturbances and uncertainties. The existing exponential control barrier functions (ECBF) are extended to handle unknown disturbances, leading to the development of input-to-state safe exponential control barrier functions (ISSf-ECBFs). An extended state observer is used to estimate unknown external marine disturbances and internal model uncertainties, based on which an anti-disturbance controller is designed. Based on the proposed ISSf-ECBFs, a quadratic programming problem is formulated to determine the optimal control input. It is proven that the closed-loop system is input-to-state safe and the errors of the closed-loop system are uniformly ultimately bounded. Simulations validate the effectiveness of the proposed control strategy.
+
+Index Terms - Autonomous surface vehicles (ASVs), safety-critical control, obstacle avoidance, input-to-state safe exponential control barrier functions (ISSf-ECBFs)
+
+## I. INTRODUCTION
+
+Autonomous Surface Vehicles (ASVs) are gaining attention for their ability to enhance maritime operations [1]-[3]. With advanced sensors and navigation systems, ASVs can navigate complex environments and perform diverse tasks [4]. They are increasingly utilized in search and rescue, fisheries management, hydrographic surveying, and offshore energy, making them a focal point for researchers in ASV control [5]-[7].
+
+ASVs navigating in dynamic marine environments face numerous challenges, primarily internal model uncertainties and external disturbances [8]. Internal uncertainties arise from modeling inaccuracies, parameter variations, and sensor noise. Additionally, ASVs must navigate unpredictable ocean conditions such as waves, currents, and winds. These factors can adversely affect the performance of the control strategy. To address these challenges, researchers have proposed various methods to enhance the robustness of the system, such as sliding mode control [9], adaptive control, and neural network control [10].
+
+The Extended State Observer (ESO) can estimate disturbances in real-time and dynamically adjust the control strategy. By treating internal model uncertainties and external disturbances as lumped disturbances for estimation, the reliance on the model can be reduced, thereby enhancing the robustness of the system.
+
+In complex maritime environments, ASVs face significant threats from various obstacles, including vessels, islands, and reefs [11]. To mitigate these risks, researchers have proposed several obstacle avoidance strategies, such as the artificial potential field method [12], the velocity obstacle method [13], and the dynamic window approach [14]. Control barrier functions (CBFs), introduced in [15], have proven effective in ensuring real-time safety. In [16], the nominal controller was modified to formally adhere to safety constraints for successful obstacle avoidance. However, the control strategy in [16] did not account for model uncertainties or disturbances. To address this, [17] introduced input-to-state safe control barrier functions (ISSf-CBFs). Furthermore, [18] proposed a framework to ensure safety for uncertain nonlinear systems with structured parametric uncertainty. In [19], a collision avoidance strategy for ASVs was proposed using ISSf-CBFs. However, these functions have a relative degree of one, limiting their use in higher-order systems. To address this, [20] introduced exponential control barrier functions (ECBFs). [21] further explored ISSf-ECBFs under known perturbation bounds, but obtaining such bounds in practice is challenging. Therefore, a safety-critical controller based on ISSf-ECBFs is crucial for ASVs dealing with unknown model uncertainties and external disturbances.
+
+This paper presents a safety-critical control strategy for Autonomous Surface Vehicles (ASVs) that accounts for external marine disturbances and internal model uncertainties. The key contributions are as follows:
+
+1) While the existing method [21] constructs safety constraints only under known disturbances or their upper bounds, this paper extends the results of input-to-state safe control barrier functions (ISSf-ECBFs) to develop safety constraints for unknown disturbances.
+
+---
+
+This work was funded by the National Natural Science Foundation of China (51939001, 52171292, 51979020, 61976033), Dalian Outstanding Young Talents Program (2022RJ05), the Topnotch Young Talents Program of China (36261402), and the Liaoning Revitalization Talents Program (XLYC2007188).
+
+---
+
+2) Unlike previous work [12], [22]-[24], this paper formulates a safety-critical controller based on ISSf-ECBFs by constructing a quadratic programming problem to facilitate collision avoidance with obstacles.
+
+The paper is organized as follows: Section II covers the preliminaries and problem statement. Section III presents the safety-critical controller design, and Section IV provides the stability and safety analysis. Simulations are carried out in Section V. Section VI concludes this article.
+
+## II. PRELIMINARIES AND PROBLEM STATEMENT
+
+## A. Notation
+
+In this paper, the notation $\parallel \cdot \parallel$ denotes the 2-norm of a vector, and $\mathfrak{R}$ represents the set of real numbers. The symbols ${\lambda }_{\min }\left( \cdot \right)$ and ${\lambda }_{\max }\left( \cdot \right)$ indicate the smallest and largest eigenvalues of a symmetric matrix, respectively.
+
+Let $\beta \left( r\right)$ be a scalar continuous function defined for $r \in \lbrack - b, a)$ . If $a = \infty$ , $b = 0$ , and $\beta \left( r\right) \rightarrow \infty$ as $r \rightarrow \infty$ , then $\beta \left( r\right)$ belongs to class ${\mathcal{K}}_{\infty }$ . If $a = b = \infty$ , $\beta \left( r\right) \rightarrow \infty$ as $r \rightarrow \infty$ , and $\beta \left( r\right) \rightarrow - \infty$ as $r \rightarrow - \infty$ , then $\beta \left( r\right)$ belongs to the extended class ${\mathcal{K}}_{\infty }$ , denoted ${\mathcal{K}}_{\infty , e}$ .
+
+## B. Input-to-state Safe Exponential Control Barrier Functions
+
+Consider the following system
+
+$$
+\dot{x} = f\left( x\right) + g\left( x\right) u + {d}_{w} \tag{1}
+$$
+
+where $x\left( t\right) \in {\Re }^{n}$ denotes the state and $u \in {\Re }^{m}$ denotes the control input. The term ${d}_{w}$ denotes bounded disturbances. The functions $f\left( x\right) \in {\Re }^{n}$ and $g\left( x\right) \in {\Re }^{n \times m}$ are locally Lipschitz continuous.
+
+Definition 1. [25] The set $\mathcal{C} \subset {\Re }^{n}$ is described as
+
+$$
+\mathcal{C} \triangleq \left\{ {x \in {\Re }^{n} \mid S\left( x\right) \geq 0}\right\}
+$$
+
+$$
+\partial \mathcal{C} \triangleq \left\{ {x \in {\Re }^{n} \mid S\left( x\right) = 0}\right\}
+$$
+
+$$
+\operatorname{Int}\left( \mathcal{C}\right) \triangleq \left\{ {x \in {\Re }^{n} \mid S\left( x\right) > 0}\right\} \tag{2}
+$$
+
+where $S\left( \cdot \right) : {\Re }^{n} \rightarrow \Re$ represents a continuously differentiable function, and $\mathcal{C}$ is referred to as the safe set. If for all ${x}_{0} \in \mathcal{C}$ it holds that $x\left( t\right) \in \mathcal{C}$ for every $t \in I\left( {x}_{0}\right)$ , then the set $\mathcal{C}$ is considered forward invariant. Consequently, the system described by (1) with ${d}_{w}\left( t\right) = 0$ can be deemed safe on $\mathcal{C}$ .
+
+Definition 2. The relative degree of $S\left( x\right) : {\Re }^{n} \rightarrow \Re$ with respect to the system (1) refers to the number of derivatives required along the dynamics of (1) before the control input $u$ explicitly appears.
+
+Definition 3. [17] For system (1), an extended set ${\mathcal{C}}_{d} \supset \mathcal{C}$ is expressed as follows
+
+$$
+{\mathcal{C}}_{d} \triangleq \left\{ {x \in {\Re }^{n} \mid S\left( x\right) + {\beta }_{d}\left( {\begin{Vmatrix}{d}_{w}\left( t\right) \end{Vmatrix}}_{\infty }\right) \geq 0}\right\}
+$$
+
+$$
+\partial {\mathcal{C}}_{d} \triangleq \left\{ {x \in {\Re }^{n} \mid S\left( x\right) + {\beta }_{d}\left( {\begin{Vmatrix}{d}_{w}\left( t\right) \end{Vmatrix}}_{\infty }\right) = 0}\right\}
+$$
+
+$$
+\operatorname{Int}\left( {\mathcal{C}}_{d}\right) \triangleq \left\{ {x \in {\Re }^{n} \mid S\left( x\right) + {\beta }_{d}\left( {\begin{Vmatrix}{d}_{w}\left( t\right) \end{Vmatrix}}_{\infty }\right) > 0}\right\} \tag{3}
+$$
+
+where ${\begin{Vmatrix}{d}_{w}\end{Vmatrix}}_{\infty } \leq {\bar{d}}_{w}$ with ${\bar{d}}_{w}$ a positive constant, $S\left( x\right)$ is a continuous function, and ${\beta }_{d}\left( \cdot \right) \in {\mathcal{K}}_{\infty , e}$ .
+
+Definition 4. (ISSf [17]) If the control input $u$ and the function ${\beta }_{d}$ ensure the forward invariance of the set ${\mathcal{C}}_{d}$ , then the system (1) with disturbances is ISSf on $\mathcal{C}$ .
+
+Definition 5. (ISSf-ECBF [17]) Considering the sets ${\mathcal{C}}_{d}$ defined by (3), $S\left( x\right)$ , which has a relative degree $\rho > 1$ , qualifies as an ISSf-ECBF for the system described in (1). This holds true if, for all $x \in {\Re }^{n}$ , there exist a bound ${\begin{Vmatrix}{d}_{w}\end{Vmatrix}}_{\infty } \leq {\bar{d}}_{w}$ and a function $\gamma \left( \cdot \right) \in {\mathcal{K}}_{\infty , e}$ satisfying
+
+$$
+\mathop{\sup }\limits_{{u \in \mathcal{U}}}\left\lbrack {{\mathcal{L}}_{f}^{\rho }S\left( x\right) + {\mathcal{L}}_{g}{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) u + {\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) }^{T}{d}_{w}}\right.
+$$
+
+$$
+\left. {+{\mathcal{T}}_{s}^{T}{\mathcal{H}}_{s}}\right\rbrack \geq - \gamma \left( {\begin{Vmatrix}{d}_{w}\end{Vmatrix}}_{\infty }\right) \tag{4}
+$$
+
+The terms ${\mathcal{L}}_{f}^{\rho }$ and ${\mathcal{L}}_{g}{\mathcal{L}}_{f}^{\rho - 1}$ represent the Lie derivatives of the function $S\left( x\right)$ . Here ${\mathcal{T}}_{s} = {\left\lbrack \begin{array}{llll} {p}_{0} & {p}_{1} & \ldots & {p}_{\rho - 1} \end{array}\right\rbrack }^{T}$ , where each ${p}_{i}$ is a positive constant, and ${\mathcal{H}}_{s} = {\left\lbrack \begin{array}{llll} S\left( x\right) & {\mathcal{L}}_{f}S\left( x\right) & \ldots & {\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) \end{array}\right\rbrack }^{T}$ .
+
+Lemma 1. If $S\left( x\right)$ functions as an ISSf-ECBF for the system (1) in the set $\mathcal{C}$ , then any controller $u \in \mathcal{U}$ that is Lipschitz continuous and valid for all $x \in {\Re }^{n}$ must satisfy
+
+$$
+\mathcal{U}\left( x\right) = \left\{ {u \in {\Re }^{m} : {\mathcal{L}}_{f}^{\rho }S\left( x\right) + {\mathcal{L}}_{g}{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) u}\right.
+$$
+
+$$
+\left. {+{\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) }^{T}{d}_{w} + {\mathcal{T}}_{s}^{T}{\mathcal{H}}_{s} \geq - \gamma \left( {\begin{Vmatrix}{d}_{w}\end{Vmatrix}}_{\infty }\right) }\right\} . \tag{5}
+$$
+
+This implies that the set ${\mathcal{C}}_{d}$ is forward invariant. In other words, the system (1) is ISSf on the set $\mathcal{C}$ .
+
+## C. ASV Model
+
+The kinematics and kinetics of the ASV can be described as:
+
+$$
+\dot{\eta }\left( t\right) = R\left( \psi \right) \nu \left( t\right) \tag{6}
+$$
+
+$$
+M\dot{\nu }\left( t\right) = f\left( \nu \right) + {d}_{w}\left( t\right) + \tau \left( t\right)
+$$
+
+where $\eta \left( t\right) = {\left\lbrack \begin{array}{ll} \bar{p}\left( t\right) & \psi \left( t\right) \end{array}\right\rbrack }^{T} \in {\Re }^{3}$ represents the position and heading of the ASV. $R\left( \psi \right) = \operatorname{diag}\left\{ {{R}_{2}\left( \psi \right) ,1}\right\}$ is a rotation matrix
+
+with
+
+$$
+{R}_{2}\left( \psi \right) = \left\lbrack \begin{matrix} \cos \left( \psi \right) & - \sin \left( \psi \right) \\ \sin \left( \psi \right) & \cos \left( \psi \right) \end{matrix}\right\rbrack . \tag{7}
+$$
+
+The vector $\nu \left( t\right) = {\left\lbrack \begin{array}{lll} u\left( t\right) & v\left( t\right) & r\left( t\right) \end{array}\right\rbrack }^{T} \in {\Re }^{3}$ collects the surge velocity, sway velocity, and yaw velocity, respectively. The matrix $M$ denotes the inertia matrix. $f\left( \nu \right)$ collects the Coriolis and centripetal terms, the damping terms, and unmodeled hydrodynamics. The vector $\tau \left( t\right)$ signifies the forces produced by the actuators. The external disturbances, caused by wind, waves, and ocean currents, are represented by ${d}_{w}\left( t\right) = {\left\lbrack \begin{array}{lll} {d}_{w1}\left( t\right) & {d}_{w2}\left( t\right) & {d}_{w3}\left( t\right) \end{array}\right\rbrack }^{T} \in {\Re }^{3}.$
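+As a concrete illustration, the kinematics in (6) with the rotation matrix (7) can be advanced with a simple Euler step. The snippet below is a minimal sketch, not the controller of this paper; the inertia matrix `M` and lumped term `f` passed in are illustrative placeholders rather than Cybership II parameters.
+
+```python
+import numpy as np
+
+def rotation(psi):
+    """Rotation matrix R(psi) = diag{R2(psi), 1} from (7)."""
+    c, s = np.cos(psi), np.sin(psi)
+    return np.array([[c, -s, 0.0],
+                     [s,  c, 0.0],
+                     [0.0, 0.0, 1.0]])
+
+def euler_step(eta, nu, tau, d_w, M, f, dt):
+    """One Euler step of (6): eta_dot = R(psi) nu, M nu_dot = f(nu) + d_w + tau."""
+    eta = eta + dt * rotation(eta[2]) @ nu
+    nu = nu + dt * np.linalg.solve(M, f(nu) + d_w + tau)
+    return eta, nu
+```
+
+With `M` set to the identity and zero forcing, a pure surge velocity simply advances the vehicle along its current heading, which is a quick sanity check of the sign conventions.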
+
+Letting $q\left( t\right) = R\left( \psi \right) \nu \left( t\right)$ ,(6) can be rewritten as
+
+$$
+\dot{p} = q \tag{8}
+$$
+
+$$
+\dot{q} = \xi + R{M}^{-1}\tau
+$$
+
+where $\xi = R{M}^{-1}\left( {{d}_{w} + f\left( \nu \right) }\right) + \dot{R}\nu$ .
+
+The desired parameterized path is set as ${p}_{0}\left( \theta \right) = {\left\lbrack {x}_{0}\left( \theta \right) ,{y}_{0}\left( \theta \right) ,{\psi }_{0}\left( \theta \right) \right\rbrack }^{\mathrm{T}}$ with ${\psi }_{0}\left( \theta \right) = \arctan \left( {{y}_{0}^{\theta }\left( \theta \right) /{x}_{0}^{\theta }\left( \theta \right) }\right)$ , where $\theta$ represents the path variable, and ${y}_{0}^{\theta }\left( \theta \right)$ and ${x}_{0}^{\theta }\left( \theta \right)$ are the partial derivatives of ${y}_{0}\left( \theta \right)$ and ${x}_{0}\left( \theta \right)$ with respect to $\theta$ , respectively. In addition, it is assumed that ${p}_{0}^{\theta }\left( \theta \right)$ is bounded.
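+Since the heading ${\psi }_{0}\left( \theta \right)$ is just the path-tangent angle, it can be evaluated numerically with a quadrant-aware arctangent. A small sketch, using an illustrative straight-line path similar to the one in Section V:
+
+```python
+import numpy as np
+
+def path_heading(x0_theta, y0_theta):
+    """psi_0(theta) = arctan(y0'(theta) / x0'(theta)); arctan2 keeps the quadrant."""
+    return np.arctan2(y0_theta, x0_theta)
+
+# For x0(theta) = y0(theta) = 0.06*theta + 0.5 both partials are 0.06,
+# so the tangent heading is constant at pi/4.
+heading = path_heading(0.06, 0.06)
+```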
+
+## D. Problem Formulation
+
+The safety-critical obstacle avoidance controller of ASV is required to achieve the following tasks:
+
+(1) Geometric task: Ensure that the ASV follows the desired path, meaning that
+
+$$
+\mathop{\lim }\limits_{{t \rightarrow \infty }}\begin{Vmatrix}{p\left( t\right) - {p}_{0}\left( \theta \right) }\end{Vmatrix} < {l}_{1} \tag{9}
+$$
+
+where ${l}_{1} \in \mathfrak{R}$ denotes a small positive constant.
+
+(2) Dynamic task: The derivative of the path variable $\theta$ should converge to the desired speed
+
+$$
+\mathop{\lim }\limits_{{t \rightarrow \infty }}\begin{Vmatrix}{\dot{\theta }\left( t\right) - {u}_{d}\left( t\right) }\end{Vmatrix} < {l}_{2} \tag{10}
+$$
+
+where ${u}_{d}\left( t\right)$ represents the desired speed and ${l}_{2}$ is a small positive constant.
+
+(3) Obstacle avoidance task: To prevent collisions between the ASV and obstacles, the following condition must be met
+
+$$
+\begin{Vmatrix}{\bar{p}\left( t\right) - {\bar{p}}_{k}\left( t\right) }\end{Vmatrix} > {r}_{k} + {d}_{k} \tag{11}
+$$
+
+where ${\bar{p}}_{k}\left( t\right) ,{r}_{k}$ , and ${d}_{k}$ represent the position, the radius, and the minimum obstacle avoidance distance of the $k$ th obstacle, respectively.
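+The avoidance condition (11) is easy to monitor at run time; a minimal sketch:
+
+```python
+import numpy as np
+
+def is_safe(p_bar, p_k, r_k, d_k):
+    """Condition (11): distance to the k-th obstacle exceeds r_k + d_k."""
+    return float(np.linalg.norm(np.asarray(p_bar) - np.asarray(p_k))) > r_k + d_k
+```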
+
+## III. Main Results
+
+## A. ISSf-ECBF with Unknown Disturbances
+
+While previous studies have made substantial progress, they mainly address scenarios with known disturbances or predefined upper bounds. To remove this limitation, the following theorem accounts for unknown disturbances.
+
+Theorem 1. Given the ISSf-ECBF $S\left( x\right)$ as defined in Definition 5 for the system (1) on the set $\mathcal{C}$ , if there exists a bound ${\begin{Vmatrix}{d}_{w}\end{Vmatrix}}_{\infty } \leq {\bar{d}}_{w}$ such that for every $x \in {\Re }^{n}$ , the following inequality holds
+
+$$
+\mathop{\sup }\limits_{{u \in \mathcal{U}}}\left\lbrack {{\mathcal{L}}_{f}^{\rho }S\left( x\right) + {\mathcal{L}}_{g}{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) u + {\mathcal{T}}_{s}^{T}{\mathcal{H}}_{s}}\right.
+$$
+
+$$
+\left. {-{\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) }^{T}\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) }\right\rbrack \geq 0 \tag{12}
+$$
+
+and the admissible control set is given by
+
+$$
+\mathcal{U}\left( x\right) = \left\{ {u \in {\Re }^{m} : {\mathcal{L}}_{f}^{\rho }S\left( x\right) + {\mathcal{L}}_{g}{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) u + {\mathcal{T}}_{s}^{T}{\mathcal{H}}_{s}}\right.
+$$
+
+$$
+- {\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) }^{T}\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) \geq 0\} . \tag{13}
+$$
+
+Then, we can obtain that the system (1) is ISSf on $\mathcal{C}$ .
+
+Proof. For $u \in \mathcal{U}\left( x\right)$ , one has
+
+$$
+{\mathcal{L}}_{f}^{\rho }S\left( x\right) + {\mathcal{L}}_{g}{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) u + {\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) }^{T}{d}_{w} + {\mathcal{T}}_{s}^{T}{\mathcal{H}}_{s}
+$$
+
+$$
+\geq {\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) }^{T}\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) + {\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) }^{T}{d}_{w}
+$$
+
+$$
+\geq {\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) }^{T}\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right)
+$$
+
+$$
+- \parallel \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\parallel {\begin{Vmatrix}{d}_{w}\end{Vmatrix}}_{\infty }. \tag{14}
+$$
+
+Adding and subtracting $\frac{{\begin{Vmatrix}{d}_{w}\end{Vmatrix}}_{\infty }^{2}}{4}$ and completing the square, (14) further yields
+
+$$
+{\mathcal{L}}_{f}^{\rho }S\left( x\right) + {\mathcal{L}}_{g}{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) u + {\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) }^{T}{d}_{w} + {\mathcal{T}}_{s}^{T}{\mathcal{H}}_{s}
+$$
+
+$$
+\geq {\left( \begin{Vmatrix}\frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\end{Vmatrix} - \frac{{\begin{Vmatrix}{d}_{w}\end{Vmatrix}}_{\infty }}{2}\right) }^{2} - \frac{{\begin{Vmatrix}{d}_{w}\end{Vmatrix}}_{\infty }^{2}}{4} \geq - \frac{{\begin{Vmatrix}{d}_{w}\end{Vmatrix}}_{\infty }^{2}}{4} \tag{15}
+$$
+
+which is of the form (4) with $\gamma \left( r\right) = {r}^{2}/4$ .
+
+Remark 1. Compared with [21], the proposed ISSf-ECBF can deal with unknown perturbations. Although an asymptotically stable ESO is used in [21], in practice the disturbance estimation error rarely vanishes exactly. Thus, it is essential to develop ISSf-ECBFs that ensure safety in the presence of unknown disturbances.
+
+## B. Anti-disturbance Controller Design
+
+In this section, we will focus on designing a safety-critical controller. The control architecture for the proposed strategy is illustrated in Figure 1.
+
+
+
+Fig. 1. Control architecture of the safety-critical controller for the ASV.
+
+First, we utilize the ESO to obtain estimates of the model uncertainties and external disturbances. In addition, the ESO relies on the following assumption.
+
+Assumption 1. $\dot{\xi }\left( t\right)$ is a bounded function satisfying
+
+$$
+\parallel \dot{\xi }\left( t\right) \parallel \leq {\xi }^{ * } \tag{16}
+$$
+
+where ${\xi }^{ * }$ is a positive constant.
+
+Then, the ESO is devised to estimate the model uncertainties and external disturbances:
+
+$$
+\left\{ \begin{array}{l} \dot{\widehat{q}}\left( t\right) = - {K}_{1}\widetilde{q}\left( t\right) + \widehat{\xi }\left( t\right) + R{M}^{-1}\tau \\ \dot{\widehat{\xi }}\left( t\right) = - {K}_{2}\widetilde{q}\left( t\right) \end{array}\right. \tag{17}
+$$
+
+where $\widehat{q}\left( t\right)$ and $\widehat{\xi }\left( t\right)$ represent the estimates of $q\left( t\right)$ and $\xi \left( t\right)$ . The observer gain matrices are chosen as ${K}_{1} = {2w}{I}_{3}$ and ${K}_{2} = {w}^{2}{I}_{3}$ , where $w$ is the observer bandwidth.
+
+Define the estimation errors $\widetilde{q}\left( t\right) = \widehat{q}\left( t\right) - q\left( t\right)$ and $\widetilde{\xi }\left( t\right) = \widehat{\xi }\left( t\right) - \xi \left( t\right)$ . The dynamics of $\widetilde{q}\left( t\right)$ and $\widetilde{\xi }\left( t\right)$ can be written as
+
+$$
+\left\{ \begin{array}{l} \dot{\widetilde{q}}\left( t\right) = - {K}_{1}\widetilde{q}\left( t\right) + \widetilde{\xi }\left( t\right) \\ \dot{\widetilde{\xi }}\left( t\right) = - {K}_{2}\widetilde{q}\left( t\right) - \dot{\xi }\left( t\right) . \end{array}\right. \tag{18}
+$$
+
+Next, (18) can be rewritten as
+
+$$
+{\dot{E}}_{o}\left( t\right) = T{E}_{o}\left( t\right) - D\dot{\xi }\left( t\right) \tag{19}
+$$
+
+where ${E}_{o}\left( t\right) = {\left\lbrack \begin{array}{ll} {\widetilde{q}}^{\mathrm{T}}\left( t\right) & {\widetilde{\xi }}^{\mathrm{T}}\left( t\right) \end{array}\right\rbrack }^{\mathrm{T}} \in {\Re }^{6}$ and
+
+$$
+T = \left\lbrack \begin{array}{ll} - {K}_{1} & {I}_{3} \\ - {K}_{2} & {0}_{3} \end{array}\right\rbrack , D = \left\lbrack \begin{array}{l} {0}_{3} \\ {I}_{3} \end{array}\right\rbrack .
+$$
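+A minimal scalar sketch of the observer (17) illustrates the estimation mechanism: with gains ${K}_{1} = {2w}$ and ${K}_{2} = {w}^{2}$ the error dynamics (18) place a double pole at $-w$ , so $\widehat{\xi }$ converges to the lumped term. The constant disturbance, zero control input, and numeric values below are illustrative assumptions, not the paper's setup.
+
+```python
+def run_eso(w=20.0, xi_true=1.5, dt=1e-3, t_end=2.0):
+    """Euler-discretised scalar ESO for q_dot = xi + u, per (17)."""
+    q = q_hat = xi_hat = 0.0
+    k1, k2 = 2.0 * w, w * w      # K1 = 2w, K2 = w^2 with bandwidth w
+    u = 0.0                      # known control input, zero in this sketch
+    for _ in range(int(t_end / dt)):
+        q_tilde = q_hat - q      # observer error tilde q
+        q_hat += dt * (-k1 * q_tilde + xi_hat + u)
+        xi_hat += dt * (-k2 * q_tilde)
+        q += dt * (xi_true + u)  # plant with constant lumped term xi
+    return xi_hat
+```
+
+Raising the bandwidth $w$ speeds up convergence at the cost of amplifying measurement noise, which is the usual ESO tuning trade-off.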
+
+Then, the following tracking error is defined as ${e}_{1} = p - {p}_{0}\left( \theta \right)$ . By taking the derivative of ${e}_{1}$ and using (6), we can get
+
+$$
+{\dot{e}}_{1} = q - {p}_{0}^{\theta }\left( \theta \right) \dot{\theta }. \tag{20}
+$$
+
+Letting $\dot{\theta }\left( t\right) = {u}_{d} - \vartheta \left( t\right)$ , one can obtain
+
+$$
+{\dot{e}}_{1} = q - {p}_{0}^{\theta }\left( \theta \right) \left( {{u}_{d} - \vartheta }\right) . \tag{21}
+$$
+
+The kinematic guidance law ${q}_{d}$ is designed as follows to stabilize ${e}_{1}$ :
+
+$$
+{q}_{d} = - {k}_{1}{e}_{1} + {p}_{0}^{\theta }\left( \theta \right) {u}_{d} \tag{22}
+$$
+
+and
+
+$$
+\dot{\vartheta } = - \ell \left( {\vartheta + \mu {p}_{0}^{\theta }{\left( \theta \right) }^{\mathrm{T}}{e}_{1}}\right) \tag{23}
+$$
+
+where ${k}_{1} = \operatorname{diag}\left\{ {{k}_{11},{k}_{12},{k}_{13}}\right\} ,\ell$ and $\mu$ are positive constants.
+
+To proceed, define ${e}_{2} = q - {\widehat{q}}_{d}$ , where ${\widehat{q}}_{d}$ is the estimate of ${q}_{d}$ , obtained by using the following filtering scheme:
+
+$$
+{t}_{d}{\dot{\widehat{q}}}_{d} + {\widehat{q}}_{d} = {q}_{d},\;{\widehat{q}}_{d}\left( 0\right) = {q}_{d}\left( 0\right) \tag{24}
+$$
+
+where ${t}_{d}$ is a positive constant. Let
+
+$$
+{e}_{d} = {\widehat{q}}_{d} - {q}_{d}. \tag{25}
+$$
+
+Moreover, ${\dot{q}}_{d} \triangleq a = {\left\lbrack \begin{array}{lll} {a}_{1} & {a}_{2} & {a}_{3} \end{array}\right\rbrack }^{T}$ , and each ${a}_{j}$ is bounded by $\left| {a}_{j}\right| \leq {a}_{j}^{ * }$ , $j = 1,2,3$ , where ${a}_{j}^{ * }$ is a positive constant. For details, please refer to [26].
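+The filter (24) is a plain first-order low-pass and discretises directly; for a constant command the output approaches ${q}_{d}$ with time constant ${t}_{d}$ . The step size and values below are illustrative.
+
+```python
+def filter_step(qhat_d, q_d, t_d, dt):
+    """Euler step of t_d * qhat_d_dot + qhat_d = q_d from (24)."""
+    return qhat_d + dt * (q_d - qhat_d) / t_d
+
+# Drive the filter with a constant command q_d = 1 from qhat_d(0) = 0
+# for 1 s at dt = 1e-3 with t_d = 0.1; the output settles near 1.
+qhat = 0.0
+for _ in range(1000):
+    qhat = filter_step(qhat, 1.0, t_d=0.1, dt=1e-3)
+```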
+
+Then, the time derivative of ${e}_{2}$ yields
+
+$$
+{\dot{e}}_{2} = \xi + R{M}^{-1}\tau \left( t\right) + \frac{{e}_{d}}{{t}_{d}}. \tag{26}
+$$
+
+To stabilize ${e}_{2}$ , the anti-disturbance control law is developed as follows:
+
+$$
+{\tau }_{c}\left( t\right) = M{R}^{T}\left( {-\widehat{\xi } - {e}_{1} - \frac{{e}_{d}}{{t}_{d}} - {k}_{2}{e}_{2}}\right) \tag{27}
+$$
+
+where ${k}_{2} = \operatorname{diag}\left\{ {{k}_{21},{k}_{22},{k}_{23}}\right\}$ . Denote $\tau = {\tau }_{c} + {\tau }_{e}$ .
+
+## C. Safety-critical Obstacle Avoidance Controller
+
+In this part, the potential collision between the ASV and obstacles is considered in order to design surge and sway forces that satisfy the safety conditions. From (8), we can get
+
+$$
+\dot{\bar{p}} = \bar{q} \tag{28}
+$$
+
+$$
+\dot{\bar{q}} = {\widehat{\xi }}_{2} + {\tau }_{2} - {\widetilde{\xi }}_{2}
+$$
+
+where $\bar{p}$ denotes ${\left\lbrack x\left( t\right) , y\left( t\right) \right\rbrack }^{T}$ and $\bar{q}$ denotes ${R}_{2}\left( \psi \right) {\left\lbrack u, v\right\rbrack }^{T}$ . ${\widehat{\xi }}_{2}$ , ${\widetilde{\xi }}_{2}$ , and ${\tau }_{2}$ are the first two components of $\widehat{\xi }$ , $\widetilde{\xi }$ , and $\tau$ , respectively. ${\bar{p}}_{k} = {\left\lbrack {x}_{k},{y}_{k}\right\rbrack }^{T}$ is the position of the $k$ th obstacle.
+
+We choose the following candidate ISSf-ECBF
+
+$$
+{S}_{k}\left( s\right) = {\begin{Vmatrix}{\bar{p}}_{ek}\end{Vmatrix}}^{2} - {\left( {r}_{k} + {d}_{k}\right) }^{2} \tag{29}
+$$
+
+where ${\bar{p}}_{ek} = \bar{p} - {\bar{p}}_{k}$ and $s = {\left\lbrack {\bar{p}}^{T},{\bar{q}}^{T}\right\rbrack }^{T}$ . To achieve the objective of obstacle avoidance, the safe set $\mathcal{C}$ is chosen as
+
+$$
+\mathcal{C} = \left\{ {\bar{p} \in {\mathbb{R}}^{2} : {S}_{k}\left( s\right) = {\begin{Vmatrix}{\bar{p}}_{ek}\end{Vmatrix}}^{2} - {\left( {r}_{k} + {d}_{k}\right) }^{2} \geq 0}\right\} \tag{30}
+$$
+
+For ease of notation, ${S}_{k}\left( s\right)$ is denoted by ${S}_{k}$ in the sequel. The safety constraint associated with ${S}_{k}$ is described as
+
+$$
+\mathcal{U} = \left\{ {{\tau }_{2} : {\mathcal{L}}_{f}^{2}{S}_{k} + {\mathcal{L}}_{g}{\mathcal{L}}_{f}{S}_{k}{\tau }_{2} - {\left( \frac{\partial \left( {{\mathcal{L}}_{f}{S}_{k}}\right) }{\partial x}\right) }^{T}\left( \frac{\partial \left( {{\mathcal{L}}_{f}{S}_{k}}\right) }{\partial x}\right) }\right.
+$$
+
+$$
+\left. {+{\mathcal{T}}_{s}^{T}{\mathcal{H}}_{s} \geq 0}\right\} \tag{31}
+$$
+
+where ${\mathcal{L}}_{f}^{2}{S}_{k} = 2{\bar{q}}^{T}\bar{q} + 2{\bar{p}}_{ek}^{T}{\widehat{\xi }}_{2}$ , ${\mathcal{L}}_{g}{\mathcal{L}}_{f}{S}_{k} = 2{\bar{p}}_{ek}^{T}$ , and ${\mathcal{T}}_{s} = {\left\lbrack {\beta }^{2},2\beta \right\rbrack }^{T}$ . For the ASV, ensuring safety takes precedence over geometric objectives. Based on the safety constraint (31), the following quadratic programming problem is constructed.
+
+$$
+{\tau }^{ * } = \mathop{\operatorname{argmin}}\limits_{{\tau \in {\Re }^{m}}}J\left( \tau \right) = {\begin{Vmatrix}\tau - {\tau }_{c}\end{Vmatrix}}^{2}
+$$
+
+$$
+\text{s.t.} - {\mathcal{L}}_{g}{\mathcal{L}}_{f}{S}_{k}\tau \leq \phi \tag{32}
+$$
+
+where $\phi = 2{\bar{q}}^{T}\bar{q} - {\left( \frac{\partial \left( {{\mathcal{L}}_{f}{S}_{k}}\right) }{\partial x}\right) }^{T}\left( \frac{\partial \left( {{\mathcal{L}}_{f}{S}_{k}}\right) }{\partial x}\right) + 2{\bar{p}}_{ek}^{T}{\widehat{\xi }}_{2} + {\mathcal{T}}_{s}^{T}{\mathcal{H}}_{s}$ . The optimal input ${\tau }^{ * }$ is obtained by solving the above quadratic programming problem.
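+Because (32) has a single linear inequality constraint, its minimiser admits a closed form: ${\tau }^{ * }$ equals the nominal ${\tau }_{c}$ whenever the constraint already holds, and otherwise is the Euclidean projection of ${\tau }_{c}$ onto the half-space $-{\mathcal{L}}_{g}{\mathcal{L}}_{f}{S}_{k}\tau \leq \phi$ . The sketch below exploits this; a generic QP solver would work equally well.
+
+```python
+import numpy as np
+
+def safe_tau(tau_c, Lg_Lf_S, phi):
+    """Solve min ||tau - tau_c||^2  s.t.  -(Lg_Lf_S) @ tau <= phi  in closed form."""
+    tau_c = np.asarray(tau_c, dtype=float)
+    a = -np.asarray(Lg_Lf_S, dtype=float)   # constraint rewritten as a @ tau <= phi
+    violation = a @ tau_c - phi
+    if violation <= 0.0:
+        return tau_c                        # nominal control is already safe
+    return tau_c - a * violation / (a @ a)  # project onto the boundary a @ tau = phi
+```
+
+This mirrors the minimal-intervention behaviour noted in Remark 2: the safe input deviates from ${\tau }_{c}$ only when the safety constraint is active.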
+
+Remark 2. The proposed safety-critical controller can avoid obstacles while ensuring minimal impact on the given tracking task.
+
+## IV. STABILITY AND SAFETY ANALYSIS
+
+In this section, we will conduct stability and safety analysis of the closed-loop system.
+
+## A. Stability Analysis
+
+Lemma 2. The observer error subsystem in (19) is ISS, and the error signals $\widetilde{q}$ and $\widetilde{\xi }$ are bounded by
+
+$$
+\begin{Vmatrix}{{E}_{o}\left( t\right) }\end{Vmatrix} \leq \sqrt{\frac{{\lambda }_{\max }\left( N\right) }{{\lambda }_{\min }\left( N\right) }}\max \left\{ {\begin{Vmatrix}{{E}_{o}\left( {t}_{0}\right) }\end{Vmatrix}{e}^{-{\gamma }_{1}\left( {t - {t}_{0}}\right) /2},}\right.
+$$
+
+$$
+\frac{2\parallel {ND}\parallel {\xi }^{ * }}{{\varsigma }_{1}\kappa }\} ,\forall t \geq {t}_{0} \tag{33}
+$$
+
+where ${\gamma }_{1} = \left( {\left\lbrack {{\varsigma }_{1}\left( {1 - \kappa }\right) }\right\rbrack /\left\lbrack {{\lambda }_{\max }\left( N\right) }\right\rbrack }\right)$ and $0 < \kappa < 1$ provided that
+
+$$
+{T}^{T}N + {NT} \leq - {\varsigma }_{1}I \tag{34}
+$$
+
+where ${\varsigma }_{1} \in \mathfrak{R}$ is a positive constant.
+
+Proof. Choose the following Lyapunov function
+
+$$
+{V}_{1} = \left( {1/2}\right) {E}_{o}^{\mathrm{T}}\left( t\right) N{E}_{o}\left( t\right) . \tag{35}
+$$
+
+Taking (34) into account, one has ${\dot{V}}_{1} = {E}_{o}{\left( t\right) }^{\mathrm{T}}N\left( {T{E}_{o}\left( t\right) - }\right.$ $\left. {D\dot{\xi }\left( t\right) }\right) \leq - \frac{{\varsigma }_{1}}{2}{\begin{Vmatrix}{E}_{o}\left( t\right) \end{Vmatrix}}^{2} + \begin{Vmatrix}{{E}_{o}\left( t\right) }\end{Vmatrix}\parallel {ND}\parallel \parallel \dot{\xi }\left( t\right) \parallel$ . Since $\begin{Vmatrix}{{E}_{o}\left( t\right) }\end{Vmatrix} \geq \left\lbrack \left( {2\parallel {ND}\parallel \parallel \dot{\xi }\left( t\right) \parallel /{\varsigma }_{1}\kappa }\right) \right\rbrack$ , we have
+
+$$
+{\dot{V}}_{1} \leq - \frac{{\varsigma }_{1}}{2}\left( {1 - \kappa }\right) {\begin{Vmatrix}{E}_{o}\left( t\right) \end{Vmatrix}}^{2}. \tag{36}
+$$
+
+It follows that the observer error subsystem described by (19) is ISS. It is important to note that ${V}_{1}$ is bounded and satisfies the inequality $\left( {\left\lbrack {{\lambda }_{\min }\left( N\right) }\right\rbrack /2}\right) {\begin{Vmatrix}{E}_{o}\left( t\right) \end{Vmatrix}}^{2} \leq {V}_{1} \leq$ $\left( {\left\lbrack {{\lambda }_{\max }\left( N\right) }\right\rbrack /2}\right) {\begin{Vmatrix}{E}_{o}\left( t\right) \end{Vmatrix}}^{2}$ . From this, we can derive (33).
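+Condition (34) can also be verified numerically. For a scalar-channel slice of $T$ in (19), $T = \left\lbrack \begin{matrix} - {2w} & 1 \\ - {w}^{2} & 0 \end{matrix}\right\rbrack$ is Hurwitz (double pole at $-w$ ), so solving the Lyapunov equation ${T}^{T}N + {NT} = - I$ yields a symmetric positive definite $N$ satisfying (34) with ${\varsigma }_{1} = 1$ . A sketch with an illustrative bandwidth:
+
+```python
+import numpy as np
+from scipy.linalg import solve_continuous_lyapunov
+
+w = 5.0                                  # illustrative observer bandwidth
+T = np.array([[-2.0 * w, 1.0],
+              [-w * w,   0.0]])
+# solve_continuous_lyapunov(a, q) solves a @ X + X @ a.T = q; taking a = T.T
+# gives T.T @ N + N @ T = -I, i.e. condition (34) with varsigma_1 = 1.
+N = solve_continuous_lyapunov(T.T, -np.eye(2))
+N = (N + N.T) / 2.0                      # symmetrise against round-off
+positive_definite = bool(np.all(np.linalg.eigvalsh(N) > 0.0))
+```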
+
+Next, we will outline the stability analysis of the closed-loop system.
+
+Lemma 3. Taking into account the error dynamics represented by (21) and (26), the error signals ${e}_{1},{e}_{2},{e}_{d}$ , and $\vartheta$ are uniformly ultimately bounded by
+
+$$
+\parallel E\parallel \leq \sqrt{\frac{{\lambda }_{\max }\left( Q\right) }{{\lambda }_{\min }\left( Q\right) }}\max \left\{ {\begin{Vmatrix}{E\left( {t}_{0}\right) }\end{Vmatrix}{e}^{-{\gamma }_{2}\left( {t - {t}_{0}}\right) /2},}\right.
+$$
+
+$$
+\frac{{E}_{o} + \begin{Vmatrix}{a}^{ * }\end{Vmatrix} + \varpi }{\epsilon {\varsigma }_{2}}\} ,\forall t \geq {t}_{0} \tag{37}
+$$
+
+where $Q = \operatorname{diag}\{ 1,1/\ell \mu \} ,{\gamma }_{2} = 2{\varsigma }_{2}\left( {1 - \epsilon }\right) /{\lambda }_{\max }\left( Q\right)$ .
+
+Proof. The constructed Lyapunov function is
+
+$$
+{V}_{2} = \frac{1}{2}\left( {{e}_{1}^{\mathrm{T}}{e}_{1} + {e}_{2}^{\mathrm{T}}{e}_{2} + {e}_{d}^{\mathrm{T}}{e}_{d}}\right) + \frac{{\vartheta }^{2}}{2\ell \mu }.
+$$
+
+According to (20) and (23)-(27), the time derivative of ${V}_{2}$ is
+
+$$
+{\dot{V}}_{2} = {e}_{1}^{\mathrm{T}}\left( {{e}_{2} + {e}_{d}}\right) - {e}_{1}^{\mathrm{T}}{k}_{1}{e}_{1} + {e}_{2}^{\mathrm{T}}\xi + {e}_{2}^{\mathrm{T}}\frac{{e}_{d}}{{t}_{d}} + {e}_{d}^{\mathrm{T}}\left( {-\frac{{e}_{d}}{{t}_{d}} - a}\right)
+$$
+
+$$
++ {e}_{2}^{\mathrm{T}}\left( {-\widehat{\xi } - {e}_{1} - \frac{{e}_{d}}{{t}_{d}} - {k}_{2}{e}_{2}}\right) + {e}_{2}^{\mathrm{T}}R{M}^{-1}{\tau }_{e} - \frac{{\vartheta }^{2}}{\mu }. \tag{38}
+$$
+
+Finally, we can obtain
+
+$$
+{\dot{V}}_{2} \leq - \left( {{\lambda }_{\min }\left( {k}_{1}\right) - \frac{1}{2}}\right) {\begin{Vmatrix}{e}_{1}\end{Vmatrix}}^{2} - \left( {\frac{1}{{t}_{d}} - \frac{1}{2}}\right) {\begin{Vmatrix}{e}_{d}\end{Vmatrix}}^{2} + \begin{Vmatrix}{e}_{d}\end{Vmatrix}\begin{Vmatrix}{a}^{ * }\end{Vmatrix}
+$$
+
+$$
+- {\lambda }_{\min }\left( {k}_{2}\right) {\begin{Vmatrix}{e}_{2}\end{Vmatrix}}^{2} + \begin{Vmatrix}{e}_{2}\end{Vmatrix}\begin{Vmatrix}\widetilde{\xi }\end{Vmatrix} + \begin{Vmatrix}{e}_{2}\end{Vmatrix}\begin{Vmatrix}{M{\tau }_{e}}\end{Vmatrix} - \frac{{\vartheta }^{2}}{\mu }. \tag{39}
+$$
+
+Choose ${\lambda }_{\min }\left( {k}_{1}\right) - \frac{1}{2} > 0$ and $\frac{1}{{t}_{d}} - \frac{1}{2} > 0$ . Then, define ${\varsigma }_{2} = \min \left( {{\lambda }_{\min }\left( {k}_{1}\right) - \frac{1}{2},{\lambda }_{\min }\left( {k}_{2}\right) ,\frac{1}{{t}_{d}} - \frac{1}{2},\frac{1}{\mu }}\right)$ , $\varpi = \begin{Vmatrix}{M{\tau }_{e}}\end{Vmatrix}$ , and $E = {\left\lbrack \begin{array}{llll} {e}_{1}^{\mathrm{T}} & {e}_{2}^{\mathrm{T}} & {e}_{d}^{\mathrm{T}} & \vartheta \end{array}\right\rbrack }^{\mathrm{T}}$ . Hence, (39) becomes ${\dot{V}}_{2} \leq - {\varsigma }_{2}{\parallel E\parallel }^{2} + \parallel E\parallel \left( {{E}_{o} + \begin{Vmatrix}{a}^{ * }\end{Vmatrix} + \varpi }\right)$ and ${\dot{V}}_{2} \leq - {\varsigma }_{2}\left( {1 - \epsilon }\right) {\parallel E\parallel }^{2} + \parallel E\parallel \left( {-\epsilon {\varsigma }_{2}\parallel E\parallel + {E}_{o} + \begin{Vmatrix}{a}^{ * }\end{Vmatrix} + \varpi }\right)$ where $0 < \epsilon < 1$ .
+
+Note that
+
+$$
+\parallel E\parallel \geq \frac{{E}_{o} + \begin{Vmatrix}{a}^{ * }\end{Vmatrix} + \varpi }{\epsilon {\varsigma }_{2}}
+$$
+
+renders
+
+$$
+{\dot{V}}_{2} \leq - {\varsigma }_{2}\left( {1 - \epsilon }\right) \parallel E{\parallel }^{2}. \tag{40}
+$$
+
+It can be established that the error subsystem related to obstacle avoidance control is ISS. Additionally, the errors of the closed-loop system satisfy (37).
+
+## B. Safety Analysis
+
+The subsequent lemma presents the safety analysis of the ASV.
+
+Lemma 4. Given the dynamics of the ASV as outlined in (6), if $\bar{p}\left( {t}_{0}\right) \in \mathcal{C}$ and ${\bar{\tau }}^{ * } \in \mathcal{U}$ are satisfied for all $t > {t}_{0}$ , the closed-loop system will be ISSf.
+
+Proof. According to Lemma 1, if ${\bar{\tau }}^{ * } \in \mathcal{U}$ holds, then $\mathcal{C}$ is forward invariant, meaning that the set $\mathcal{C}$ is ISSf. In other words, as long as $\bar{p}\left( {t}_{0}\right)$ is within $\mathcal{C}$ , the position $\bar{p}\left( t\right)$ will remain in $\mathcal{C}$ indefinitely. Therefore, the closed-loop system is ISSf.
+
+Theorem 2. The closed-loop system is shown to achieve ISSf, indicating that collision avoidance is feasible. Furthermore, all error signals in the closed-loop system are uniformly ultimately bounded.
+
+Proof. According to Lemma 4, the ASV meets the safety constraint, i.e., the safety objective (11) is fulfilled. Combining Lemmas 2, 3, and [27, Lemma 1], we deduce that the closed-loop system is ISSf. The norm $\parallel E\parallel$ is uniformly ultimately bounded by
+
+$$
+\parallel E\parallel \leq \sqrt{\frac{{\lambda }_{\max }\left( Q\right) }{{\lambda }_{\min }\left( Q\right) }}\left( {\sqrt{\frac{{\lambda }_{\max }\left( N\right) }{{\lambda }_{\min }\left( N\right) }}\frac{2\parallel {ND}\parallel {\xi }^{ * }}{{\varsigma }_{1}{\kappa \epsilon }{\varsigma }_{2}} + \frac{\begin{Vmatrix}{a}^{ * }\end{Vmatrix} + \varpi }{\epsilon {\varsigma }_{2}}}\right) . \tag{41}
+$$
+
+Given that $E$ is bounded, ${e}_{1}$ and $\vartheta$ are also bounded. As a result, $\begin{Vmatrix}{p\left( t\right) - {p}_{0}\left( \theta \right) }\end{Vmatrix} = \begin{Vmatrix}{e}_{1}\end{Vmatrix}$ and $\begin{Vmatrix}{\dot{\theta }\left( t\right) - {u}_{d}}\end{Vmatrix}$ remain bounded, that is, (9) and (10) hold.
+
+## V. Simulation Results
+
+To validate the effectiveness of the proposed control strategy, simulations are conducted using the Cybership II model in [28]. The simulation parameters are set as follows: $w = {40}$ , $\Omega = {0.1}, l = 1,\mu = {0.1},\beta = {30},{d}_{k} = {0.5},{r}_{k} = 1$ , ${k}_{1} = \operatorname{diag}\{ 3,2,8\} ,{k}_{2} = \operatorname{diag}\{ {16},{22},{28}\} ,{t}_{d} = {0.1},{d}_{w} =$ ${\left\lbrack \begin{array}{lll} 3\cos \left( t\right) \sin \left( {0.5t}\right) & 4\sin \left( {0.5t}\right) \cos \left( {0.5t}\right) & {0.2}\sin \left( t\right) \end{array}\right\rbrack }^{\mathrm{T}}$ . The desired parameterized path is ${x}_{d}\left( {\vartheta }_{0}\right) = {y}_{d}\left( {\vartheta }_{0}\right) = {0.06}{\vartheta }_{0} + {0.5}$ , ${\psi }_{d} = \pi /4$ . The positions of the static obstacles are ${\bar{p}}_{1} = {\left\lbrack \begin{array}{ll} 3 & {2.5} \end{array}\right\rbrack }^{\mathrm{T}}$ and ${\bar{p}}_{2} = {\left\lbrack \begin{array}{ll} 6 & 7 \end{array}\right\rbrack }^{\mathrm{T}}.$
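The stated disturbance and reference-path signals can be reproduced directly; a small numpy sketch of the simulation inputs above (evaluation points are arbitrary):

```python
import numpy as np

def d_w(t: float) -> np.ndarray:
    """External disturbance vector used in the simulation."""
    return np.array([3.0 * np.cos(t) * np.sin(0.5 * t),
                     4.0 * np.sin(0.5 * t) * np.cos(0.5 * t),
                     0.2 * np.sin(t)])

def p_desired(theta0: float) -> np.ndarray:
    """Desired parameterized path: x_d = y_d = 0.06*theta0 + 0.5, psi_d = pi/4."""
    xy = 0.06 * theta0 + 0.5
    return np.array([xy, xy, np.pi / 4])

assert np.allclose(d_w(0.0), [0.0, 0.0, 0.0])       # disturbance starts at zero
assert np.allclose(p_desired(0.0)[:2], [0.5, 0.5])  # path starts at (0.5, 0.5)
```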
+
+
+
+Fig. 2. Control performance of the proposed control strategy for ASV.
+
+Fig. 2 illustrates the effectiveness of the proposed safety-critical controller for the autonomous vessel. The upper panel shows that the vessel prioritizes obstacle avoidance to ensure safety and, once the avoidance maneuver is complete, resumes the tracking task. Fig. 3 illustrates the estimation performance of the extended state observer: the velocity and lumped disturbances of the ASV are accurately estimated. Fig. 4 and Fig. 5 depict the comparisons of the tracking errors and velocities, respectively.
+
+
+
+Fig. 3. The estimates of lumped disturbances.
+
+
+
+Fig. 4. Tracking errors of the ASV.
+
+
+
+Fig. 5. Velocity comparisons of the ASV.
+
+## VI. CONCLUSION
+
+This paper introduces a safety-critical control strategy for ASVs that considers external marine disturbances and internal model uncertainties. Initially, an anti-disturbance controller is devised based on the estimation of lumped disturbances using an ESO. Following this, a quadratic optimization problem is established by incorporating ISSf-ECBFs to enforce safety constraints on the control inputs. By solving this problem, a safety-critical controller is derived, significantly improving the safety and robustness of the system. The closed-loop system is demonstrated to be ISSf, with error signals shown to be uniformly ultimately bounded. The effectiveness of the proposed control strategy is verified through simulation results.
+
+## REFERENCES
+
+[1] T. Kang, N. Gu, D. Wang, L. Liu, Q. Hu, and Z. Peng, "Neurodynamics-based attack-defense guidance of autonomous surface vehicles against multiple attackers for domain protection," IEEE Transactions on Industrial Electronics, vol. 71, no. 10, pp. 12655-12663, 2024.
+
+[2] Z.-Q. Liu, X. Ge, Q.-L. Han, Y.-L. Wang, and X.-M. Zhang, "Secure cooperative path following of autonomous surface vehicles under cyber and physical attacks," IEEE Transactions on Intelligent Vehicles, vol. 8, no. 6, pp. 3680-3691, 2023.
+
+[3] C. Tang, H.-T. Zhang, H. Cao, and J. Wang, "Time-varying formation control of autonomous surface vehicles based on affine observer," IEEE Transactions on Industrial Electronics, 2024.
+
+[4] G. Wu, D. Li, H. Ding, D. Shi, and B. Han, "An overview of developments and challenges for unmanned surface vehicle autonomous berthing," Complex & Intelligent Systems, vol. 10, no. 1, pp. 981-1003, 2024.
+
+[5] H. Bao, Y. Wang, H. Zhu, and D. Wang, "Area complete coverage path planning for offshore seabed organisms fishing autonomous underwater vehicle based on improved whale optimization algorithm," IEEE Sensors Journal, vol. 24, no. 8, pp. 12887-12903, 2024.
+
+[6] D. Madeo, A. Pozzebon, C. Mocenni, and D. Bertoni, "A low-cost unmanned surface vehicle for pervasive water quality monitoring," IEEE Transactions on Instrumentation and Measurement, vol. 69, no. 4, pp. 1433-1444, 2020.
+
+[7] T. Yang, Z. Jiang, R. Sun, N. Cheng, and H. Feng, "Maritime search and rescue based on group mobile computing for unmanned aerial vehicles and unmanned surface vehicles," IEEE Transactions on Industrial Informatics, vol. 16, no. 12, pp. 7700-7708, 2020.
+
+[8] Z. Peng, J. Wang, D. Wang, and Q.-L. Han, "An overview of recent advances in coordinated control of multiple autonomous surface vehicles," IEEE Transactions on Industrial Informatics, vol. 17, no. 2, pp. 732-745, 2020.
+
+[9] R. Cui, X. Zhang, and D. Cui, "Adaptive sliding-mode attitude control for autonomous underwater vehicles with input nonlinearities," Ocean Engineering, vol. 123, pp. 45-54, 2016.
+
+[10] Z. Peng, N. Gu, Y. Zhang, Y. Liu, D. Wang, and L. Liu, "Path-guided time-varying formation control with collision avoidance and connectivity preservation of under-actuated autonomous surface vehicles subject to unknown input gains," Ocean Engineering, vol. 191, p. 106501, 2019.
+
+[11] Z. Xiao, X. Lu, J. Ning, and D. Liu, "Colregs-compliant unmanned surface vehicles collision avoidance based on improved differential evolution algorithm," Expert Systems with Applications, vol. 237, p. 121499, 2024.
+
+[12] Z. Peng, D. Wang, T. Li, and M. Han, "Output-feedback cooperative formation maneuvering of autonomous surface vehicles with connectivity preservation and collision avoidance," IEEE Transactions on Cybernetics, vol. 50, no. 6, pp. 2527-2535, 2019.
+
+[13] Y. Wang, B. Jiang, Z.-G. Wu, S. Xie, and Y. Peng, "Adaptive sliding mode fault-tolerant fuzzy tracking control with application to unmanned marine vehicles," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 51, no. 11, pp. 6691-6700, 2020.
+
+[14] W. Guan and K. Wang, "Autonomous collision avoidance of unmanned surface vehicles based on improved a-star and dynamic window approach algorithms," IEEE Intelligent Transportation Systems Magazine, vol. 15, no. 3, pp. 36-50, 2023.
+
+[15] P. Wieland and F. Allgöwer, "Constructive safety using control barrier functions," IFAC Proceedings Volumes, vol. 40, no. 12, pp. 462-467, 2007.
+
+[16] L. Wang, A. D. Ames, and M. Egerstedt, "Safety barrier certificates for collisions-free multirobot systems," IEEE Transactions on Robotics, vol. 33, no. 3, pp. 661-674, 2017.
+
+[17] S. Kolathaya and A. D. Ames, "Input-to-state safety with control barrier functions," IEEE Control Systems Letters, vol. 3, no. 1, pp. 108-113, 2018.
+
+[18] B. T. Lopez, J.-J. E. Slotine, and J. P. How, "Robust adaptive control barrier functions: An adaptive and data-driven approach to safety," IEEE Control Systems Letters, vol. 5, no. 3, pp. 1031-1036, 2020.
+
+[19] N. Gu, D. Wang, Z. Peng, and J. Wang, "Safety-critical containment maneuvering of underactuated autonomous surface vehicles based on neurodynamic optimization with control barrier functions," IEEE Transactions on Neural Networks and Learning Systems, vol. 34, no. 6, pp. 2882-2895, 2021.
+
+[20] W. Xiao and C. Belta, "High-order control barrier functions," IEEE Transactions on Automatic Control, vol. 67, no. 7, pp. 3655-3662, 2021.
+
+[21] M. Li and Z. Sun, "A new perspective on projection-to-state safety and its application to robotic arms," in Proceeding of the American Control Conference. San Diego, CA, 2023, pp. 2430-2435.
+
+[22] L.-Y. Hao, G. Dong, T. Li, and Z. Peng, "Path-following control with obstacle avoidance of autonomous surface vehicles subject to actuator faults," IEEE/CAA Journal of Automatica Sinica, vol. 11, no. 4, pp. 956-964, 2024.
+
+[23] S. He, M. Wang, S.-L. Dai, and F. Luo, "Leader-follower formation control of USVs with prescribed performance and collision avoidance," IEEE Transactions on Industrial Informatics, vol. 15, no. 1, pp. 572-581, 2018.
+
+[24] Z. Peng, Y. Jiang, L. Liu, and Y. Shi, "Path-guided model-free flocking control of unmanned surface vehicles based on concurrent learning extended state observers," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 53, no. 8, pp. 4729-4739, 2023.
+
+[25] A. D. Ames, X. Xu, J. W. Grizzle, and P. Tabuada, "Control barrier function based quadratic programs for safety critical systems," IEEE Transactions on Automatic Control, vol. 62, no. 8, pp. 3861-3876, 2016.
+
+[26] D. Wang and J. Huang, "Neural network-based adaptive dynamic surface control for a class of uncertain nonlinear systems in strict-feedback form," IEEE Transactions on Neural Networks, vol. 16, no. 1, pp. 195-202, 2005.
+
+[27] Z. Peng, D. Wang, and J. Wang, "Cooperative dynamic positioning of multiple marine offshore vessels: A modular design," IEEE/ASME Transactions on Mechatronics, vol. 21, no. 3, pp. 1210-1221, 2015.
+
+[28] R. Skjetne, T. I. Fossen, and P. V. Kokotović, "Adaptive maneuvering, with experiments, for a model ship in a marine control laboratory," Automatica, vol. 41, no. 2, pp. 289-298, 2005.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/NKhQ1UEQFb/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/NKhQ1UEQFb/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..3d8866536331fd3d338e1be5ce0023cc8bfb2e64
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/NKhQ1UEQFb/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,537 @@
+§ SAFETY-CRITICAL OBSTACLE AVOIDANCE CONTROL OF AUTONOMOUS SURFACE VEHICLES WITH UNCERTAINTIES AND DISTURBANCES
+
+$1^{\text{st}}$ Gege Dong
+
+College of Marine Electrical Engineering
+
+Dalian Maritime University
+
+Dalian, China
+
+donggege0507@163.com
+
+$2^{\text{nd}}$ Li-Ying Hao*
+
+College of Marine Electrical Engineering
+
+Dalian Maritime University
+
+Dalian, China
+
+haoliying_0305@163.com
+
+Abstract-This paper proposes a safety-critical obstacle avoidance control approach for autonomous surface vehicles (ASVs) with disturbances and uncertainties. The existing exponential control barrier functions (ECBF) are extended to handle unknown disturbances, leading to the development of input-to-state safe exponential control barrier functions (ISSf-ECBFs). An extended state observer is used to estimate unknown external marine disturbances and internal model uncertainties, based on which an anti-disturbance controller is designed. Based on the proposed ISSf-ECBFs, a quadratic programming problem is formulated to determine the optimal control input. It is proven that the closed-loop system is input-to-state safe and the errors of the closed-loop system are uniformly ultimately bounded. Simulations validate the effectiveness of the proposed control strategy.
+
+Index Terms-Autonomous surface vehicles (ASVs), safety-critical control, obstacle avoidance, input-to-state safe exponential control barrier functions (ISSf-ECBFs)
+
+§ I. INTRODUCTION
+
+Autonomous Surface Vehicles (ASVs) are gaining attention for their ability to enhance maritime operations [1]-[3]. With advanced sensors and navigation systems, ASVs can navigate complex environments and perform diverse tasks [4]. They are increasingly utilized in search and rescue, fisheries management, hydrographic surveying, and offshore energy, making them a focal point for researchers in ASV control [5]-[7].
+
+ASVs navigating in dynamic marine environments face numerous challenges, primarily internal model uncertainties and external disturbances [8]. Internal uncertainties arise from modeling inaccuracies, parameter variations, and sensor noise. Additionally, ASVs must navigate unpredictable ocean conditions, such as waves, currents, and winds. These factors can adversely affect the performance of control strategy. To address this challenge, researchers have proposed various methods to enhance the robustness of the system, such as sliding mode control [9], adaptive control, and neural network control [10].
+
+The Extended State Observer (ESO) can estimate disturbances in real-time and dynamically adjust the control strategy. By treating internal model uncertainties and external disturbances as lumped disturbances for estimation, the reliance on the model can be reduced, thereby enhancing the robustness of the system.
+
+In complex maritime environments, ASVs face significant threats from various obstacles, including vessels, islands, and reefs [11]. To mitigate these risks, researchers have proposed several obstacle avoidance strategies, such as the artificial potential method [12], the velocity obstacle method [13], and the dynamic window approach [14]. Control barrier functions (CBFs), introduced in [15], have proven effective in ensuring real-time safety. In [16], the nominal controller was modified to formally adhere to safety constraints for successful obstacle avoidance. However, the control strategy in [16] did not account for model uncertainties or disturbances. To address this, [17] introduced input-to-state safe control barrier functions (ISSf-CBFs). Furthermore, [18] proposed a framework to ensure safety for uncertain nonlinear systems with structured parametric uncertainty. In [19], a collision avoidance strategy for ASVs was proposed using ISSf-CBFs. However, these functions have a relative degree of one, limiting their use in higher-order systems. To address this, [20] introduced exponential control barrier functions (ECBFs). [21] further explored ISSf-ECBFs under known perturbation bounds, but measuring such disturbances is challenging. Therefore, a safety-critical controller based on ISSf-ECBFs is crucial for ASVs dealing with unknown model uncertainties and external disturbances.
+
+This paper presents a safety-critical control strategy for Autonomous Surface Vehicles (ASVs) that accounts for external marine disturbances and internal model uncertainties. The key contributions are as follows:
+
+1) While the existing method [21] constructs safety constraints only under known disturbances or their upper bounds, this paper extends the results of input-to-state safe control barrier functions (ISSf-ECBFs) to develop safety constraints for unknown disturbances.
+
+This work was funded by the National Natural Science Foundation of China (51939001, 52171292, 51979020, 61976033), Dalian Outstanding Young Talents Program (2022RJ05), the Topnotch Young Talents Program of China (36261402), and the Liaoning Revitalization Talents Program (XLYC2007188).
+
+2) Unlike previous work [12], [22]-[24], this paper formulates a safety-critical controller based on ISSf-ECBFs by constructing a quadratic programming problem to facilitate collision avoidance with obstacles.
+
+The remainder of this paper is organized as follows. Section II covers the preliminaries and problem statement. Section III presents the safety-critical controller design, and Section IV gives the stability and safety analysis. Simulations are carried out in Section V, and Section VI concludes this article.
+
+§ II. PRELIMINARIES AND PROBLEM STATEMENT
+
+§ A. NOTATION
+
+In this paper, the notation $\parallel \cdot \parallel$ denotes the 2-norm of a vector, and $\mathfrak{R}$ represents the set of real numbers. The symbols ${\lambda }_{\min }\left( \cdot \right)$ and ${\lambda }_{\max }\left( \cdot \right)$ indicate the smallest and largest eigenvalues of a symmetric matrix, respectively.
+
+Let $\beta \left( r\right)$ be a scalar continuous, strictly increasing function with $\beta \left( 0\right) = 0$ , defined for $r \in \lbrack - b,a)$ . If $a = \infty$ , $b = 0$ , and $\beta \left( r\right) \rightarrow \infty$ as $r \rightarrow \infty$ , then $\beta \left( r\right)$ belongs to class ${\mathcal{K}}_{\infty }$ . If $a = b = \infty$ , $\beta \left( r\right) \rightarrow \infty$ as $r \rightarrow \infty$ , and $\beta \left( r\right) \rightarrow - \infty$ as $r \rightarrow - \infty$ , then $\beta$ is an extended class ${\mathcal{K}}_{\infty }$ function, denoted ${\mathcal{K}}_{\infty ,e}$ .
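As a concrete instance of the definitions above, $\beta(r) = r^3$ is an extended class $\mathcal{K}_{\infty,e}$ function; the sketch below (sample grid chosen arbitrarily) checks the defining properties numerically:

```python
import numpy as np

# Candidate extended class-K_inf function: beta(r) = r^3
beta = lambda r: r ** 3

r = np.linspace(-100.0, 100.0, 2001)            # symmetric sample grid (illustrative)
v = beta(r)

assert beta(0.0) == 0.0                         # passes through the origin
assert np.all(np.diff(v) > 0)                   # strictly increasing on the grid
assert v[-1] > 1e5 and v[0] < -1e5              # unbounded in both directions
```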
+
+§ B. INPUT-TO-STATE SAFE EXPONENTIAL CONTROL BARRIER FUNCTIONS
+
+Consider the following system
+
+$$
+\dot{x} = f\left( x\right) + g\left( x\right) u + {d}_{w} \tag{1}
+$$
+
+where $x\left( t\right) \in {\Re }^{n}$ denotes the state and $u \in {\Re }^{m}$ denotes the control input. The term ${d}_{w}$ denotes bounded disturbances. The functions $f\left( x\right) \in {\Re }^{n}$ and $g\left( x\right) \in {\Re }^{n \times m}$ are locally Lipschitz continuous.
+
+Definition 1. [25] The set $\mathcal{C} \subset {\Re }^{n}$ is described as
+
+$$
+\mathcal{C} \triangleq \left\{ {x \in {\Re }^{n} \mid S\left( x\right) \geq 0}\right\}
+$$
+
+$$
+\partial \mathcal{C} \triangleq \left\{ {x \in {\Re }^{n} \mid S\left( x\right) = 0}\right\}
+$$
+
+$$
+\operatorname{Int}\left( \mathcal{C}\right) \triangleq \left\{ {x \in {\Re }^{n} \mid S\left( x\right) > 0}\right\} \tag{2}
+$$
+
+where $S\left( \cdot \right) : {\Re }^{n} \mapsto \Re$ represents a continuously differentiable function, and $\mathcal{C}$ is referred to as the safe set. If for all ${x}_{0} \in \mathcal{C}$ it holds that $x\left( t\right) \in \mathcal{C}$ for every $t \in I\left( {x}_{0}\right)$ , then the set $\mathcal{C}$ is considered forward invariant. Consequently, the system described by (1) with ${d}_{w}\left( t\right) = 0$ is safe on $\mathcal{C}$ .
+
+Definition 2. The relative degree of $S\left( x\right) : {\Re }^{n} \rightarrow \Re$ with respect to the system (1) refers to the number of derivatives required along the dynamics of (1) before the control input $u$ explicitly appears.
+
+Definition 3. [17] For system (1), an extended set ${\mathcal{C}}_{d} \supset \mathcal{C}$ is expressed as follows
+
+$$
+{\mathcal{C}}_{d} \triangleq \left\{ {x \in {\Re }^{n} \mid S\left( x\right) + {\beta }_{d}\left( {\begin{Vmatrix}{d}_{w}\left( t\right) \end{Vmatrix}}_{\infty }\right) \geq 0}\right\}
+$$
+
+$$
+\partial {\mathcal{C}}_{d} \triangleq \left\{ {x \in {\Re }^{n} \mid S\left( x\right) + {\beta }_{d}\left( {\begin{Vmatrix}{d}_{w}\left( t\right) \end{Vmatrix}}_{\infty }\right) = 0}\right\}
+$$
+
+$$
+\operatorname{Int}\left( {\mathcal{C}}_{d}\right) \triangleq \left\{ {x \in {\Re }^{n} \mid S\left( x\right) + {\beta }_{d}\left( {\begin{Vmatrix}{d}_{w}\left( t\right) \end{Vmatrix}}_{\infty }\right) > 0}\right\} \tag{3}
+$$
+
+where ${\begin{Vmatrix}{d}_{w}\end{Vmatrix}}_{\infty } \leq {\bar{d}}_{w}$ with ${\bar{d}}_{w}$ a positive constant, $S\left( x\right)$ is a continuous function, and ${\beta }_{d}\left( \cdot \right) \in {\mathcal{K}}_{\infty ,e}$ .
+
+Definition 4. (ISSf [17]) If the control input $u$ and the function ${\beta }_{d}$ ensure the forward invariance of the set ${\mathcal{C}}_{d}$ , then the system (1) with disturbances is ISSf on $\mathcal{C}$ .
+
+Definition 5. (ISSf-ECBF [17]) Considering the sets ${\mathcal{C}}_{d}$ defined by (3), $S\left( x\right)$ , which has relative degree $\rho > 1$ , qualifies as an ISSf-ECBF for the system (1) if, for all $x \in {\Re }^{n}$ , there exists a bound ${\begin{Vmatrix}{d}_{w}\end{Vmatrix}}_{\infty } \leq {\bar{d}}_{w}$ and a function $\gamma \left( \cdot \right) \in {\mathcal{K}}_{\infty ,e}$ satisfying
+
+$$
+\mathop{\sup }\limits_{{u \in \mathcal{U}}}\left\lbrack {{\mathcal{L}}_{f}^{\rho }S\left( x\right) + {\mathcal{L}}_{g}{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) u + {\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) }^{T}{d}_{w}}\right.
+$$
+
+$$
+\left. {+{\mathcal{T}}_{s}^{T}{\mathcal{H}}_{s}}\right\rbrack \geq - \gamma \left( {\begin{Vmatrix}{d}_{w}\end{Vmatrix}}_{\infty }\right) \tag{4}
+$$
+
+The terms ${\mathcal{L}}_{f}^{\rho }$ and ${\mathcal{L}}_{g}{\mathcal{L}}_{f}^{\rho - 1}$ denote Lie derivatives of the function $S\left( x\right)$ . Here ${\mathcal{T}}_{s} = {\left\lbrack \begin{array}{llll} {p}_{0} & {p}_{1} & \ldots & {p}_{\iota } \end{array}\right\rbrack }^{T}$ , where each ${p}_{i}$ is a positive constant, and ${\mathcal{H}}_{s} = {\left\lbrack \begin{array}{llll} S\left( x\right) & {\mathcal{L}}_{f}S\left( x\right) & \ldots & {\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) \end{array}\right\rbrack }^{T}$ .
+
+Lemma 1. If $S\left( x\right)$ is an ISSf-ECBF for the system (1) on the set $\mathcal{C}$ , then any Lipschitz continuous controller $u$ that, for all $x \in {\Re }^{n}$ , takes values in
+
+$$
+\mathcal{U}\left( x\right) = \left\{ {u \in {\Re }^{m} : {\mathcal{L}}_{f}^{\rho }S\left( x\right) + {\mathcal{L}}_{g}{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) u}\right.
+$$
+
+$$
+\left. {+{\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) }^{T}{d}_{w} + {\mathcal{T}}_{s}^{T}{\mathcal{H}}_{s} \geq - \gamma \left( {\begin{Vmatrix}{d}_{w}\end{Vmatrix}}_{\infty }\right) }\right\} . \tag{5}
+$$
+
+This implies that the set ${\mathcal{C}}_{d}$ is forward invariant. In other words, the system (1) is ISSf on the set $\mathcal{C}$ .
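In practice, a controller in the admissible set (5) is typically computed by a quadratic program that minimally modifies a nominal input subject to the affine safety constraint; for a single constraint $a^{T}u + b \geq 0$ the minimizer has a closed form. A minimal numpy sketch, where `u_nom`, `a`, and `b` are illustrative stand-ins for the nominal input and the affine terms of (5):

```python
import numpy as np

def safety_filter(u_nom: np.ndarray, a: np.ndarray, b: float) -> np.ndarray:
    """Solve min ||u - u_nom||^2  s.t.  a^T u + b >= 0 in closed form."""
    slack = a @ u_nom + b
    if slack >= 0.0:                    # nominal input already satisfies the constraint
        return u_nom
    # otherwise project onto the half-space boundary a^T u + b = 0
    return u_nom - (slack / (a @ a)) * a

a = np.array([1.0, 0.0, 0.0])           # illustrative constraint direction
u_safe = safety_filter(np.array([-2.0, 1.0, 0.5]), a, b=1.0)
assert np.isclose(a @ u_safe + 1.0, 0.0)       # constraint rendered active
u_keep = safety_filter(np.array([3.0, 1.0, 0.5]), a, b=1.0)
assert np.allclose(u_keep, [3.0, 1.0, 0.5])    # safe input left unchanged
```

The same projection structure underlies generic CBF quadratic programs; with multiple constraints a numerical QP solver replaces the closed form.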
+
+§ C. ASV MODEL
+
+The kinematics and kinetics of ASV can be described as:
+
+$$
+\dot{\eta }\left( t\right) = R\left( \psi \right) \nu \left( t\right) \tag{6}
+$$
+
+$$
+M\dot{\nu }\left( t\right) = f\left( \nu \right) + {d}_{w}\left( t\right) + \tau \left( t\right)
+$$
+
+where $\eta \left( t\right) = {\left\lbrack \begin{array}{ll} \bar{p}\left( t\right) & \psi \left( t\right) \end{array}\right\rbrack }^{T} \in {\Re }^{3}$ represents the position and heading of the ASV, and $R\left( \psi \right) = \operatorname{diag}\left\{ {{R}_{2}\left( \psi \right) ,1}\right\}$ is a rotation matrix with
+
+$$
+{R}_{2}\left( \psi \right) = \left\lbrack \begin{matrix} \cos \left( \psi \right) & - \sin \left( \psi \right) \\ \sin \left( \psi \right) & \cos \left( \psi \right) \end{matrix}\right\rbrack . \tag{7}
+$$
+
+The vector $\nu \left( t\right) = {\left\lbrack \begin{array}{lll} u\left( t\right) & v\left( t\right) & r\left( t\right) \end{array}\right\rbrack }^{T} \in {\Re }^{3}$ collects the surge, sway, and yaw velocities. The matrix $M$ denotes the inertia matrix, and $f\left( \nu \right)$ lumps the Coriolis and centripetal terms, damping, and unmodeled hydrodynamics. The vector $\tau \left( t\right)$ signifies the forces produced by the actuators. The external disturbances, caused by wind, waves, and ocean currents, are represented by ${d}_{w}\left( t\right) = {\left\lbrack \begin{array}{lll} {d}_{w1}\left( t\right) & {d}_{w2}\left( t\right) & {d}_{w3}\left( t\right) \end{array}\right\rbrack }^{T} \in {\Re }^{3}.$
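Since $R(\psi)$ in (7) is orthogonal with unit determinant, the transformation $q = R(\psi)\nu$ preserves velocity magnitudes; a small numpy check (the heading and velocity values are arbitrary):

```python
import numpy as np

def R(psi: float) -> np.ndarray:
    """Rotation matrix R(psi) = diag{R2(psi), 1} from (7)."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

Rm = R(np.pi / 4)                             # arbitrary heading
assert np.allclose(Rm @ Rm.T, np.eye(3))      # orthogonality
assert np.isclose(np.linalg.det(Rm), 1.0)     # proper rotation
nu = np.array([1.0, -0.5, 0.2])               # body-frame velocities [u, v, r]
assert np.isclose(np.linalg.norm(Rm @ nu), np.linalg.norm(nu))  # norm preserved
```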
+
+Letting $q\left( t\right) = R\left( \psi \right) \nu \left( t\right)$ , (6) can be rewritten as
+
+$$
+\dot{p} = q \tag{8}
+$$
+
+$$
+\dot{q} = \xi + R{M}^{-1}\tau
+$$
+
+where $\xi = R{M}^{-1}\left( {{d}_{w} + f\left( \nu \right) }\right) + \dot{R}\nu$ .
+
+The desired parameterized path is set as ${p}_{0}\left( \theta \right) = {\left\lbrack {x}_{0}\left( \theta \right) ,{y}_{0}\left( \theta \right) ,{\psi }_{0}\left( \theta \right) \right\rbrack }^{\mathrm{T}}$ with ${\psi }_{0}\left( \theta \right) = \arctan \left( {{y}_{0}^{\theta }\left( \theta \right) /{x}_{0}^{\theta }\left( \theta \right) }\right)$ , where $\theta$ represents the path variable, and ${y}_{0}^{\theta }\left( \theta \right)$ and ${x}_{0}^{\theta }\left( \theta \right)$ are the partial derivatives of ${y}_{0}\left( \theta \right)$ and ${x}_{0}\left( \theta \right)$ , respectively. In addition, it is assumed that ${p}_{0}^{\theta }\left( \theta \right)$ is bounded.
+
+§ D. PROBLEM FORMULATION
+
+The safety-critical obstacle avoidance controller of the ASV is required to achieve the following tasks:
+
+(1) Geometric task: Ensure that the ASV follows the desired path, meaning that
+
+$$
+\mathop{\lim }\limits_{{t \rightarrow \infty }}\begin{Vmatrix}{p\left( t\right) - {p}_{0}\left( \theta \right) }\end{Vmatrix} < {l}_{1} \tag{9}
+$$
+
+where ${l}_{1} \in \mathfrak{R}$ denotes a small positive constant.
+
+(2) Dynamic task: The derivative of the path variable $\theta$ converges to the desired speed
+
+$$
+\mathop{\lim }\limits_{{t \rightarrow \infty }}\begin{Vmatrix}{\dot{\theta }\left( t\right) - {u}_{d}\left( t\right) }\end{Vmatrix} < {l}_{2} \tag{10}
+$$
+
+where ${u}_{d}\left( t\right)$ represents the desired speed and ${l}_{2}$ is a small positive constant.
+
+(3) Obstacle avoidance task: To prevent collisions between the ASV and obstacles, the following condition must be met
+
+$$
+\begin{Vmatrix}{\bar{p}\left( t\right) - {\bar{p}}_{k}\left( t\right) }\end{Vmatrix} > {r}_{k} + {d}_{k} \tag{11}
+$$
+
+where ${\bar{p}}_{k}\left( t\right) ,{r}_{k}$ , and ${d}_{k}$ represent the position, radius, and minimum obstacle avoidance distance of the $k$ th obstacle, respectively.
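One common choice of safe-set function consistent with (11) and Definition 1 is $S_k(\bar p) = \|\bar p - \bar p_k\|^2 - (r_k + d_k)^2$, which is positive exactly when the required clearance holds. A minimal sketch (the obstacle data are illustrative):

```python
import numpy as np

def S_k(p: np.ndarray, p_k: np.ndarray, r_k: float, d_k: float) -> float:
    """Safe-set function: positive iff the ASV keeps the clearance required by (11)."""
    return float(np.dot(p - p_k, p - p_k) - (r_k + d_k) ** 2)

p_obs, r_obs, d_min = np.array([3.0, 2.5]), 1.0, 0.5   # illustrative obstacle
assert S_k(np.array([10.0, 10.0]), p_obs, r_obs, d_min) > 0.0  # well clear: safe
assert S_k(np.array([3.0, 3.0]), p_obs, r_obs, d_min) < 0.0    # inside clearance: unsafe
```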
+
+§ III. MAIN RESULTS
+
+§ A. ISSF-ECBF WITH UNKNOWN DISTURBANCES
+
+While previous studies have made substantial progress, they are mainly directed at scenarios with known disturbances or predefined upper bounds. To remove this limitation, the following theorem accounts for unknown disturbances.
+
+Theorem 1. Given the ISSf-ECBF $S\left( x\right)$ as defined in Definition 5 for the system (1) on the set $\mathcal{C}$ , if there exists a bound ${\begin{Vmatrix}{d}_{w}\end{Vmatrix}}_{\infty } \leq {\bar{d}}_{w}$ such that for every $x \in {\Re }^{n}$ , the following inequality holds
+
+$$
+\mathop{\sup }\limits_{{u \in \mathcal{U}}}\left\lbrack {{\mathcal{L}}_{f}^{\rho }S\left( x\right) + {\mathcal{L}}_{g}{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) u + {\mathcal{T}}_{s}^{T}{\mathcal{H}}_{s}}\right.
+$$
+
+$$
+\left. {-{\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) }^{T}\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) }\right\rbrack \geq 0 \tag{12}
+$$
+
+and the admissible control set is given by
+
+$$
+\mathcal{U}\left( x\right) = \left\{ {u \in {\Re }^{m} : {\mathcal{L}}_{f}^{\rho }S\left( x\right) + {\mathcal{L}}_{g}{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) u + {\mathcal{T}}_{s}^{T}{\mathcal{H}}_{s}}\right.
+$$
+
+$$
+- {\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) }^{T}\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) \geq 0\} . \tag{13}
+$$
+
+then the system (1) is ISSf on $\mathcal{C}$ .
+
+Proof. For $u \in \mathcal{U}\left( x\right)$ , one has
+
+$$
+{\mathcal{L}}_{f}^{\rho }S\left( x\right) + {\mathcal{L}}_{g}{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) u + {\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) }^{T}{d}_{w} + {\mathcal{T}}_{s}^{T}{\mathcal{H}}_{s}
+$$
+
+$$
+\geq {\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) }^{T}\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) + {\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) }^{T}{d}_{w}
+$$
+
+$$
+\geq {\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right) }^{T}\left( \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\right)
+$$
+
+$$
+- \parallel \frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\parallel {\begin{Vmatrix}{d}_{w}\end{Vmatrix}}_{\infty }. \tag{14}
+$$
+
+Adding and subtracting $\frac{{\begin{Vmatrix}{d}_{w}\end{Vmatrix}}_{\infty }^{2}}{4}$ , the right-hand side of (14) satisfies
+
+$$
+{\left( \begin{Vmatrix}\frac{\partial \left( {{\mathcal{L}}_{f}^{\rho - 1}S\left( x\right) }\right) }{\partial x}\end{Vmatrix} - \frac{{\begin{Vmatrix}{d}_{w}\end{Vmatrix}}_{\infty }}{2}\right) }^{2} - \frac{{\begin{Vmatrix}{d}_{w}\end{Vmatrix}}_{\infty }^{2}}{4} \geq - \frac{{\begin{Vmatrix}{d}_{w}\end{Vmatrix}}_{\infty }^{2}}{4} \tag{15}
+$$
+
+which is of the form (4).
+
+Remark 1. Compared with [21], the proposed ISSf-ECBF can handle unknown perturbations. Although an asymptotically stable ESO is used in [21], in practice the disturbance estimation error rarely reaches exactly zero. Thus, it is essential to develop ISSf-ECBFs that ensure safety in the presence of unknown disturbances.
+
+§ B. ANTI-DISTURBANCE CONTROLLER DESIGN
+
+In this section, we focus on designing a safety-critical controller. The control architecture for the proposed strategy is illustrated in Fig. 1.
+
+
+Fig. 1. Control architecture of the safety-critical controller for the ASV.
+
+First, we utilize the ESO to estimate the model uncertainties and external disturbances. The ESO relies on the following assumption.
+
+Assumption 1. $\dot{\xi }\left( t\right)$ is bounded, satisfying
+
+$$
+\parallel \dot{\xi }\left( t\right) \parallel \leq {\xi }^{ * } \tag{16}
+$$
+
+where ${\xi }^{ * }$ is a positive constant.
+
+Then, the ESO is devised to estimate the model uncertainties and external disturbances:
+
+$$
+\left\{ \begin{array}{l} \dot{\widehat{q}}\left( t\right) = - {K}_{1}\widetilde{q}\left( t\right) + \widehat{\xi }\left( t\right) + R{M}^{-1}\tau \\ \dot{\widehat{\xi }}\left( t\right) = - {K}_{2}\widetilde{q}\left( t\right) \end{array}\right. \tag{17}
+$$
+
+where $\widehat{q}\left( t\right)$ and $\widehat{\xi }\left( t\right)$ represent the estimates of $q\left( t\right)$ and $\xi \left( t\right)$ , respectively. The observer gains are chosen as ${K}_{1} = {2w}{I}_{3}$ and ${K}_{2} = {w}^{2}{I}_{3}$ , where $w$ is the observer bandwidth.
+
+Define the estimation errors $\widetilde{q}\left( t\right) = \widehat{q}\left( t\right) - q\left( t\right)$ and $\widetilde{\xi }\left( t\right) = \widehat{\xi }\left( t\right) - \xi \left( t\right)$ . The dynamics of $\widetilde{q}\left( t\right)$ and $\widetilde{\xi }\left( t\right)$ can be written as
+
+$$
+\left\{ \begin{array}{l} \dot{\widetilde{q}}\left( t\right) = - {K}_{1}\widetilde{q}\left( t\right) + \widetilde{\xi }\left( t\right) \\ \dot{\widetilde{\xi }}\left( t\right) = - {K}_{2}\widetilde{q}\left( t\right) - \dot{\xi }\left( t\right) . \end{array}\right. \tag{18}
+$$
+
+Next, (18) can be rewritten as
+
+$$
+{\dot{E}}_{o}\left( t\right) = T{E}_{o}\left( t\right) - D\dot{\xi }\left( t\right) \tag{19}
+$$
+
+where ${E}_{o}\left( t\right) = {\left\lbrack \begin{array}{ll} {\widetilde{q}}^{\mathrm{T}}\left( t\right) & {\widetilde{\xi }}^{\mathrm{T}}\left( t\right) \end{array}\right\rbrack }^{\mathrm{T}} \in {\Re }^{6}$ and
+
+$$
+T = \left\lbrack \begin{array}{ll} - {K}_{1} & {I}_{3} \\ - {K}_{2} & {0}_{3} \end{array}\right\rbrack ,D = \left\lbrack \begin{array}{l} {0}_{3} \\ {I}_{3} \end{array}\right\rbrack .
+$$
+
Then, the tracking error is defined as ${e}_{1} = p - {p}_{0}\left( \theta \right)$. Taking the derivative of ${e}_{1}$ and using (6), we get
+
+$$
+{\dot{e}}_{1} = q - {p}_{0}^{\theta }\left( \theta \right) \dot{\theta }. \tag{20}
+$$
+
Letting ${u}_{d} - \vartheta \left( t\right) = \dot{\theta }\left( t\right)$, one obtains
+
+$$
+{\dot{e}}_{1} = q - {p}_{0}^{\theta }\left( \theta \right) \left( {{u}_{d} - \vartheta }\right) . \tag{21}
+$$
+
+The kinematic guidance law ${q}_{d}$ is designed as follows to stabilize ${e}_{1}$ :
+
+$$
+{q}_{d} = - {k}_{1}{e}_{1} + {p}_{0}^{\theta }\left( \theta \right) {u}_{d} \tag{22}
+$$
+
+and
+
+$$
+\dot{\vartheta } = - \ell \left( {\vartheta + \mu {p}_{0}^{\theta }{\left( \theta \right) }^{\mathrm{T}}{e}_{1}}\right) \tag{23}
+$$
+
where ${k}_{1} = \operatorname{diag}\left\{ {{k}_{11},{k}_{12},{k}_{13}}\right\}$, and $\ell$ and $\mu$ are positive constants.
+
To proceed, define ${e}_{2} = q - {\widehat{q}}_{d}$, where ${\widehat{q}}_{d}$ is the estimate of ${q}_{d}$, obtained by the following filtering scheme:
+
+$$
+{t}_{d}{\dot{\widehat{q}}}_{d} + {\widehat{q}}_{d} = {q}_{d},\;{\widehat{q}}_{d}\left( 0\right) = {q}_{d}\left( 0\right) \tag{24}
+$$
+
+where ${t}_{d}$ is a positive constant. Let
+
+$$
+{e}_{d} = {\widehat{q}}_{d} - {q}_{d}. \tag{25}
+$$
+
Moreover, ${\dot{q}}_{d} \triangleq a = {\left\lbrack \begin{array}{lll} {a}_{1} & {a}_{2} & {a}_{3} \end{array}\right\rbrack }^{T}$, and each component ${a}_{j}$ is bounded by $\left| {a}_{j}\right| \leq {a}_{j}^{ * }$, $j = 1,2,3$, where ${a}_{j}^{ * }$ is a positive constant. For details, please refer to [26].
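As an illustration of (24)-(25), the filter is a plain first-order lag, so $\widehat{q}_{d}$ tracks ${q}_{d}$ with a bounded error ${e}_{d}$ and $\dot{q}_{d}$ never has to be differentiated numerically. The sketch below uses an assumed ramp command and step size:

```python
import numpy as np

def filter_step(qd_hat, qd, t_d, dt):
    """Forward-Euler step of the command filter (24):
    t_d * d(qd_hat)/dt + qd_hat = qd."""
    return qd_hat + dt * (qd - qd_hat) / t_d

# Assumed ramp command q_d(t) = c * t, filtered with time constant t_d.
dt, t_d, c = 1e-3, 0.1, 2.0
qd_hat = 0.0
for k in range(5000):                       # 5 s of simulated time
    qd_hat = filter_step(qd_hat, c * (k * dt), t_d, dt)
e_d = qd_hat - c * 5.0
print(e_d)  # settles at -c * t_d = -0.2, the bounded lag of a first-order filter
```

Shrinking ${t}_{d}$ reduces the lag ${e}_{d}$, which is why the bound on ${e}_{d}$ enters the stability analysis scaled by $1/{t}_{d}$.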
+
+Then, the time derivative of ${e}_{2}$ yields
+
+$$
+{\dot{e}}_{2} = \xi + R{M}^{-1}\tau \left( t\right) + \frac{{e}_{d}}{{t}_{d}}. \tag{26}
+$$
+
+To stabilize ${e}_{2}$ , the anti-disturbance control law is developed as follows:
+
+$$
+{\tau }_{c}\left( t\right) = M{R}^{T}\left( {-\widehat{\xi } - {e}_{1} - \frac{{e}_{d}}{{t}_{d}} - {k}_{2}{e}_{2}}\right) \tag{27}
+$$
+
+where ${k}_{2} = \operatorname{diag}\left\{ {{k}_{21},{k}_{22},{k}_{23}}\right\}$ . Denote $\tau = {\tau }_{c} + {\tau }_{e}$ .
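The control law (27) is a purely algebraic map from the quantities defined above; a minimal sketch follows, where all numerical values (identity inertia and rotation matrices, small errors) are illustrative assumptions:

```python
import numpy as np

def anti_disturbance_control(M, R, xi_hat, e1, e2, e_d, t_d, k2):
    """Anti-disturbance control law (27):
    tau_c = M R^T (-xi_hat - e1 - e_d / t_d - k2 @ e2)."""
    return M @ R.T @ (-xi_hat - e1 - e_d / t_d - k2 @ e2)

# Illustrative values only (identity M and R are assumptions for the sketch):
tau_c = anti_disturbance_control(
    M=np.eye(3), R=np.eye(3),
    xi_hat=np.array([0.1, 0.0, 0.0]),      # ESO disturbance estimate
    e1=np.array([1.0, 0.0, 0.0]),          # position tracking error
    e2=np.array([0.0, 1.0, 0.0]),          # velocity tracking error
    e_d=np.array([0.0, 0.0, 0.2]),         # command-filter error
    t_d=0.1, k2=2.0 * np.eye(3))
print(tau_c)  # components: [-1.1, -2.0, -2.0]
```

The $-\widehat{\xi }$ term cancels the estimated lumped disturbance, while the remaining terms stabilize ${e}_{1}$, ${e}_{2}$, and ${e}_{d}$.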
+
+§ C. SAFETY-CRITICAL OBSTACLE AVOIDANCE CONTROLLER
+
In this part, potential collisions between the ASV and obstacles are taken into account in designing the optimal surge and sway forces subject to safety conditions. From (8), we can get
+
+$$
+\dot{\bar{p}} = \bar{q} \tag{28}
+$$
+
+$$
+\dot{\bar{q}} = {\widehat{\xi }}_{2} + {\tau }_{2} - {\widetilde{\xi }}_{2}
+$$
+
where $\bar{p}$ denotes ${\left\lbrack x\left( t\right) ,y\left( t\right) \right\rbrack }^{T}$ and $\bar{q}$ denotes ${R}_{2}\left( \psi \right) {\left\lbrack u,v\right\rbrack }^{T}$. ${\widehat{\xi }}_{2}$, ${\widetilde{\xi }}_{2}$, and ${\tau }_{2}$ are the first two components of $\widehat{\xi }$, $\widetilde{\xi }$, and $\tau$, respectively. ${\bar{p}}_{k} = {\left\lbrack {x}_{k},{y}_{k}\right\rbrack }^{T}$ is the position of the $k$th obstacle.
+
+We choose the following candidate ISSf-ECBF
+
+$$
+{S}_{k}\left( s\right) = {\begin{Vmatrix}{\bar{p}}_{ek}\end{Vmatrix}}^{2} - {\left( {r}_{k} + {d}_{k}\right) }^{2} \tag{29}
+$$
+
+where ${\bar{p}}_{ek} = \bar{p} - {\bar{p}}_{k},s = {\left\lbrack {\bar{p}}^{T},{\bar{q}}^{T}\right\rbrack }^{T}$ . To achieve the objective of obstacle avoidance, the set $\mathcal{C}$ can be obtained
+
+$$
+\mathcal{C} = \left\{ {\bar{p} \in {\mathbb{R}}^{2} : {S}_{k}\left( s\right) = {\begin{Vmatrix}{\bar{p}}_{ek}\end{Vmatrix}}^{2} - {\left( {r}_{k} + {d}_{k}\right) }^{2} \geq 0}\right\} \tag{30}
+$$
+
+For ease of notation, it is denoted by ${S}_{k}$ in the sequel. The safety constraint with ${S}_{k}\left( s\right)$ is described as
+
+$$
+\mathcal{U} = \left\{ {{\tau }_{2} : {\mathcal{L}}_{f}^{2}{S}_{k} + {\mathcal{L}}_{g}{\mathcal{L}}_{f}{S}_{k}{\tau }_{2} - {\left( \frac{\partial \left( {{\mathcal{L}}_{f}{S}_{k}}\right) }{\partial x}\right) }^{T}\left( \frac{\partial \left( {{\mathcal{L}}_{f}{S}_{k}}\right) }{\partial x}\right) }\right.
+$$
+
+$$
+\left. {+{\mathcal{T}}_{s}^{T}{\mathcal{H}}_{s} \geq 0}\right\} \tag{31}
+$$
+
+where ${\mathcal{L}}_{f}^{2}{S}_{k} = 2{\bar{q}}^{T}\bar{q} + 2{\bar{p}}_{ek}^{T}{\widehat{\xi }}_{2},{\mathcal{L}}_{g}{\mathcal{L}}_{f}{S}_{k} = 2{\bar{p}}_{ek}^{T},{\mathcal{T}}_{s} =$ ${\left\lbrack {\beta }^{2},2\beta \right\rbrack }^{T}$ . For the ASV, ensuring safety takes precedence over geometric objectives. Based on the safety constraint (31), the following quadratic programming problem is constructed.
+
+$$
+{\tau }^{ * } = \mathop{\operatorname{argmin}}\limits_{{\tau \in {\Re }^{m}}}J\left( \tau \right) = {\begin{Vmatrix}\tau - {\tau }_{c}\end{Vmatrix}}^{2}
+$$
+
+$$
+\text{ s.t. } - {\mathcal{L}}_{g}{\mathcal{L}}_{f}{S}_{k}\tau \leq \phi \tag{32}
+$$
+
where $\phi = 2{\bar{q}}^{T}\bar{q} - {\left( \frac{\partial \left( {{\mathcal{L}}_{f}{S}_{k}}\right) }{\partial x}\right) }^{T}\left( \frac{\partial \left( {{\mathcal{L}}_{f}{S}_{k}}\right) }{\partial x}\right) + 2{\bar{p}}_{ek}^{T}{\widehat{\xi }}_{2} + {\mathcal{T}}_{s}^{T}{\mathcal{H}}_{s}$. The optimal input ${\tau }^{ * }$ is obtained by solving this quadratic programming problem.
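For a single obstacle constraint, the quadratic program (32) is the projection of the nominal control onto a half-space and admits a closed-form solution; with several simultaneous obstacles a generic QP solver would be used instead. A sketch with illustrative values of `a` (standing in for $-{\mathcal{L}}_{g}{\mathcal{L}}_{f}{S}_{k}$) and `b` (standing in for $\phi$):

```python
import numpy as np

def safety_filter(tau_c, a, b):
    """Minimal-deviation safety filter for the single-constraint QP (32):
        tau* = argmin ||tau - tau_c||^2   s.t.   a @ tau <= b.
    This is a Euclidean projection onto a half-space, solvable in closed form."""
    violation = a @ tau_c - b
    if violation <= 0.0:
        return tau_c                       # nominal control is already safe
    return tau_c - a * violation / (a @ a)

# Illustrative case: the nominal control would violate the safety constraint.
tau_c = np.array([4.0, 0.0])
a = np.array([1.0, 0.0])
b = 1.0
tau_star = safety_filter(tau_c, a, b)
print(tau_star)  # the constraint becomes active: a @ tau* = b
```

When the constraint is inactive, the nominal control passes through unchanged, which is the "minimal impact on the given tracking task" property noted in Remark 2.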
+
+Remark 2. The proposed safety-critical controller can avoid obstacles while ensuring minimal impact on the given tracking task.
+
+§ IV. STABILITY AND SAFETY ANALYSIS
+
+In this section, we will conduct stability and safety analysis of the closed-loop system.
+
+§ A. STABILITY ANALYSIS
+
Lemma 2. The observer error subsystem in (19) is ISS, and the error signals $\widetilde{q}$ and $\widetilde{\xi }$ are bounded by
+
+$$
+\begin{Vmatrix}{{E}_{o}\left( t\right) }\end{Vmatrix} \leq \sqrt{\frac{{\lambda }_{\max }\left( N\right) }{{\lambda }_{\min }\left( N\right) }}\max \left\{ {\begin{Vmatrix}{{E}_{o}\left( {t}_{0}\right) }\end{Vmatrix}{e}^{-{\gamma }_{1}\left( {t - {t}_{0}}\right) /2},}\right.
+$$
+
+$$
\frac{2\parallel {ND}\parallel {\xi }^{ * }}{{\varsigma }_{1}\kappa }\} ,\forall t \geq {t}_{0} \tag{33}
+$$
+
+where ${\gamma }_{1} = \left( {\left\lbrack {{\varsigma }_{1}\left( {1 - \kappa }\right) }\right\rbrack /\left\lbrack {{\lambda }_{\max }\left( N\right) }\right\rbrack }\right)$ and $0 < \kappa < 1$ provided that
+
+$$
+{T}^{T}N + {NT} \leq - {\varsigma }_{1}I \tag{34}
+$$
+
+where ${\varsigma }_{1} \in \mathfrak{R}$ is a positive constant.
+
+Proof. Choose the following Lyapunov function
+
+$$
+{V}_{1} = \left( {1/2}\right) {E}_{o}^{\mathrm{T}}\left( t\right) N{E}_{o}\left( t\right) . \tag{35}
+$$
+
Taking (34) into account, one has ${\dot{V}}_{1} = {E}_{o}^{\mathrm{T}}\left( t\right) N\left( {T{E}_{o}\left( t\right) - D\dot{\xi }\left( t\right) }\right) \leq - \frac{{\varsigma }_{1}}{2}{\begin{Vmatrix}{E}_{o}\left( t\right) \end{Vmatrix}}^{2} + \begin{Vmatrix}{{E}_{o}\left( t\right) }\end{Vmatrix}\parallel {ND}\parallel \parallel \dot{\xi }\left( t\right) \parallel$. Since $\begin{Vmatrix}{{E}_{o}\left( t\right) }\end{Vmatrix} \geq 2\parallel {ND}\parallel \parallel \dot{\xi }\left( t\right) \parallel /\left( {{\varsigma }_{1}\kappa }\right)$, we have
+
+$$
+{\dot{V}}_{1} \leq - \frac{{\varsigma }_{1}}{2}\left( {1 - \kappa }\right) {\begin{Vmatrix}{E}_{o}\left( t\right) \end{Vmatrix}}^{2}. \tag{36}
+$$
+
+It follows that the observer error subsystem described by (19) is ISS. It is important to note that ${V}_{1}$ is bounded and satisfies the inequality $\left( {\left\lbrack {{\lambda }_{\min }\left( N\right) }\right\rbrack /2}\right) {\begin{Vmatrix}{E}_{o}\left( t\right) \end{Vmatrix}}^{2} \leq {V}_{1} \leq$ $\left( {\left\lbrack {{\lambda }_{\max }\left( N\right) }\right\rbrack /2}\right) {\begin{Vmatrix}{E}_{o}\left( t\right) \end{Vmatrix}}^{2}$ . From this, we can derive (33).
+
+Next, we will outline the stability analysis of the closed-loop system.
+
Lemma 3. Taking into account the error dynamics represented by (21) and (26), the error signals ${e}_{1},{e}_{2},{e}_{d}$, and $\vartheta$ are uniformly ultimately bounded by
+
+$$
+\parallel E\parallel \leq \sqrt{\frac{{\lambda }_{\max }\left( Q\right) }{{\lambda }_{\min }\left( Q\right) }}\max \left\{ {\begin{Vmatrix}{E\left( {t}_{0}\right) }\end{Vmatrix}{e}^{-{\gamma }_{2}\left( {t - {t}_{0}}\right) /2},}\right.
+$$
+
+$$
\frac{\begin{Vmatrix}{E}_{o}\end{Vmatrix} + \begin{Vmatrix}{a}^{ * }\end{Vmatrix} + \varpi }{\epsilon {\varsigma }_{2}}\} ,\forall t \geq {t}_{0} \tag{37}
+$$
+
+where $Q = \operatorname{diag}\{ 1,1/\ell \mu \} ,{\gamma }_{2} = 2{\varsigma }_{2}\left( {1 - \epsilon }\right) /{\lambda }_{\max }\left( Q\right)$ .
+
+Proof. The constructed Lyapunov function is
+
+$$
+{V}_{2} = \frac{1}{2}\left( {{e}_{1}^{\mathrm{T}}{e}_{1} + {e}_{2}^{\mathrm{T}}{e}_{2} + {e}_{d}^{\mathrm{T}}{e}_{d}}\right) + \frac{{\vartheta }^{2}}{2\ell \mu }.
+$$
+
+According to (20),(23)-(27), the time derivative of ${V}_{2}$ is
+
+$$
{\dot{V}}_{2} = {e}_{1}^{\mathrm{T}}\left( {{e}_{2} + {e}_{d}}\right) - {e}_{1}^{\mathrm{T}}{k}_{1}{e}_{1} + {e}_{2}^{\mathrm{T}}\xi + {e}_{2}^{\mathrm{T}}\frac{{e}_{d}}{{t}_{d}} + {e}_{d}^{\mathrm{T}}\left( {-\frac{{e}_{d}}{{t}_{d}} - a}\right)
+$$
+
+$$
+ {e}_{2}^{\mathrm{T}}\left( {-\widehat{\xi } - {e}_{1} - \frac{{e}_{d}}{{t}_{d}} - {k}_{2}{e}_{2}}\right) + {e}_{2}^{\mathrm{T}}R{M}^{-1}{\tau }_{e} - \frac{{\vartheta }^{2}}{\mu }. \tag{38}
+$$
+
+Finally, we can obtain
+
+$$
+{\dot{V}}_{2} \leq - \left( {{\lambda }_{\min }\left( {k}_{1}\right) - \frac{1}{2}}\right) {\begin{Vmatrix}{e}_{1}\end{Vmatrix}}^{2} - \left( {\frac{1}{{t}_{d}} - \frac{1}{2}}\right) {\begin{Vmatrix}{e}_{d}\end{Vmatrix}}^{2} + \begin{Vmatrix}{e}_{d}\end{Vmatrix}\begin{Vmatrix}{a}^{ * }\end{Vmatrix}
+$$
+
+$$
- {\lambda }_{\min }\left( {k}_{2}\right) {\begin{Vmatrix}{e}_{2}\end{Vmatrix}}^{2} + \begin{Vmatrix}{e}_{2}\end{Vmatrix}\parallel \widetilde{\xi }\parallel + \begin{Vmatrix}{e}_{2}\end{Vmatrix}\begin{Vmatrix}{M{\tau }_{e}}\end{Vmatrix} - \frac{{\vartheta }^{2}}{\mu }. \tag{39}
+$$
+
Choose ${\lambda }_{\min }\left( {k}_{1}\right) - \frac{1}{2} > 0$ and $\frac{1}{{t}_{d}} - \frac{1}{2} > 0$. Then, define ${\varsigma }_{2} = \min \left( {{\lambda }_{\min }\left( {k}_{1}\right) - \frac{1}{2},{\lambda }_{\min }\left( {k}_{2}\right) ,\frac{1}{{t}_{d}} - \frac{1}{2},\frac{1}{\mu }}\right)$, $\varpi = \begin{Vmatrix}{M{\tau }_{e}}\end{Vmatrix}$, and $E = {\left\lbrack \begin{array}{llll} {e}_{1}^{\mathrm{T}} & {e}_{2}^{\mathrm{T}} & {e}_{d}^{\mathrm{T}} & \vartheta \end{array}\right\rbrack }^{\mathrm{T}}$. Hence, (39) becomes ${\dot{V}}_{2} \leq - {\varsigma }_{2}\parallel E{\parallel }^{2} + \parallel E\parallel \left( {\begin{Vmatrix}{E}_{o}\end{Vmatrix} + \begin{Vmatrix}{a}^{ * }\end{Vmatrix} + \varpi }\right)$ and ${\dot{V}}_{2} \leq - {\varsigma }_{2}\left( {1 - \epsilon }\right) \parallel E{\parallel }^{2} + \parallel E\parallel \left( {-\epsilon {\varsigma }_{2}\parallel E\parallel + \begin{Vmatrix}{E}_{o}\end{Vmatrix} + \begin{Vmatrix}{a}^{ * }\end{Vmatrix} + \varpi }\right)$, where $0 < \epsilon < 1$.
+
+Note that
+
+$$
\parallel E\parallel \geq \frac{\begin{Vmatrix}{E}_{o}\end{Vmatrix} + \begin{Vmatrix}{a}^{ * }\end{Vmatrix} + \varpi }{\epsilon {\varsigma }_{2}}
+$$
+
+renders
+
+$$
+{\dot{V}}_{2} \leq - {\varsigma }_{2}\left( {1 - \epsilon }\right) \parallel E{\parallel }^{2}. \tag{40}
+$$
+
It can be established that the error subsystem related to obstacle avoidance control is ISS. Additionally, the errors of the closed-loop system satisfy (37).

§ B. SAFETY ANALYSIS
+
+The subsequent lemma presents the safety analysis of the ASV.
+
Lemma 4. Given the dynamics of the ASV as outlined in (6), if $\bar{p}\left( {t}_{0}\right) \in \mathcal{C}$ and ${\bar{\tau }}^{ * } \in \mathcal{U}$ are satisfied for all $t > {t}_{0}$, the closed-loop system will be ISSf.
+
+Proof. According to Lemma 1, if ${\bar{\tau }}^{ * } \in \mathcal{U}$ holds, then $\mathcal{C}$ is forward invariant, meaning that the set $\mathcal{C}$ is ISSf. In other words, as long as $\bar{p}\left( {t}_{0}\right)$ is within $\mathcal{C}$ , the position $\bar{p}\left( t\right)$ will remain in $\mathcal{C}$ indefinitely. Therefore, the closed-loop system is ISSf.
+
+Theorem 2. The closed-loop system is shown to achieve ISSf, indicating that collision avoidance is feasible. Furthermore, all error signals in the closed-loop system are uniformly ultimately bounded.
+
+Proof. According to Lemma 4, the ASV will meet the safety constraint, meaning that the safety objective (11) is fulfilled. We employ Lemmas 2, 3, and [27, Lemma 1], which enable us to deduce that the closed-loop system is ISSf. The norm $\parallel E\parallel$ is uniformly ultimately bounded by
+
+$$
+\parallel E\parallel \leq \sqrt{\frac{{\lambda }_{\max }\left( Q\right) }{{\lambda }_{\min }\left( Q\right) }}\left( {\sqrt{\frac{{\lambda }_{\max }\left( N\right) }{{\lambda }_{\min }\left( N\right) }}\frac{2\parallel {ND}\parallel {\xi }^{ * }}{{\varsigma }_{1}{\kappa \epsilon }{\varsigma }_{2}}}\right.
+$$
+
+$$
+\left. {+\frac{\begin{Vmatrix}{a}^{ * }\end{Vmatrix} + \varpi }{\epsilon {\varsigma }_{2}}}\right) \text{ . } \tag{41}
+$$
+
Given that $E$ is bounded, we can deduce that ${e}_{1}$ and $\vartheta$ are also bounded. As a result, it follows that $\begin{Vmatrix}{p\left( t\right) - {p}_{0}\left( \theta \right) }\end{Vmatrix} = \begin{Vmatrix}{e}_{1}\end{Vmatrix}$ and $\begin{Vmatrix}{\dot{\theta }\left( t\right) - {u}_{d}}\end{Vmatrix}$ remain bounded; that is, (9) and (10) hold.
+
+§ REMARK 3.
+
+§ V. SIMULATION RESULTS
+
To validate the effectiveness of the proposed control strategy, this paper conducts simulations using Cybership II in [28]. The simulation parameters are set as follows: $w = {40}$, $\Omega = {0.1}$, $\ell = 1$, $\mu = {0.1}$, $\beta = {30}$, ${d}_{k} = {0.5}$, ${r}_{k} = 1$, ${k}_{1} = \operatorname{diag}\{ 3,2,8\}$, ${k}_{2} = \operatorname{diag}\{ {16},{22},{28}\}$, ${t}_{d} = {0.1}$, and ${d}_{w} = {\left\lbrack \begin{array}{lll} 3\cos \left( t\right) \sin \left( {0.5t}\right) & 4\sin \left( {0.5t}\right) \cos \left( {0.5t}\right) & {0.2}\sin \left( t\right) \end{array}\right\rbrack }^{\mathrm{T}}$. The desired parameterized path is ${x}_{d}\left( {\vartheta }_{0}\right) = {y}_{d}\left( {\vartheta }_{0}\right) = {0.06}{\vartheta }_{0} + {0.5}$, ${\psi }_{d} = \pi /4$. The positions of the static obstacles are ${\bar{p}}_{1} = {\left\lbrack \begin{array}{ll} 3 & {2.5} \end{array}\right\rbrack }^{\mathrm{T}}$ and ${\bar{p}}_{2} = {\left\lbrack \begin{array}{ll} 6 & 7 \end{array}\right\rbrack }^{\mathrm{T}}$.
+
+
+Fig. 2. Control performance of the proposed control strategy for ASV.
+
Fig. 2 illustrates the effectiveness of the safety-critical controller proposed for the autonomous vessel. The upper section demonstrates that the vessel prioritizes obstacle avoidance to ensure safety; once the obstacle avoidance maneuver is complete, the vessel resumes the tracking task. Fig. 3 illustrates the estimation performance of the extended state observer: the velocity and lumped disturbances of the ASV are accurately estimated. Fig. 4 and Fig. 5 depict the comparisons of the tracking errors and velocities, respectively.
+
+
+Fig. 3. The estimates of lumped disturbances.
+
+
+Fig. 4. Tracking errors of the ASV.
+
+
+Fig. 5. Velocity comparisons of the ASV.
+
+§ VI. CONCLUSION
+
+This paper introduces a safety-critical control strategy for ASVs that considers external marine disturbances and internal model uncertainties. Initially, an anti-disturbance controller is devised based on the estimation of lumped disturbances using an ESO. Following this, a quadratic optimization problem is established by incorporating ISSf-ECBFs to enforce safety constraints on the control inputs. By solving this problem, a safety-critical controller is derived, significantly improving the safety and robustness of the system. The closed-loop system is demonstrated to be ISSf, with error signals shown to be uniformly ultimately bounded. The effectiveness of the proposed control strategy is verified through simulation results.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/NLNBi9lbov/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/NLNBi9lbov/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..87e893ff56ac54ef5ddb559fa73017f2081d48d8
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/NLNBi9lbov/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,462 @@
+§ REMOTE SENSING OBJECT DETECTION BASED ON FUSION OF SPATIAL AND CHANNEL ATTENTION
+
Wenyun Sun

School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China

wenyunsun@nuist.edu.cn

Long Ji

School of Computer Science, Nanjing University of Information Science and Technology, Nanjing, China

202212490374@nuist.edu.cn
+
+Abstract-Remote sensing object detection faces unique challenges due to objects' varied scales and orientations. To address these challenges, we propose the Spatial Channel Attention Fusion Module (SCAF-Module), designed to enhance detection accuracy by integrating multi-scale convolutions, adaptive rotated convolutions, and parallel spatial channel attention mechanisms. The experiments, conducted using the DOTA-v1.0 and HRSC2016 datasets, demonstrate the efficacy of the SCAF-Module. We achieved mean Average Precision (mAP) scores of 80.94% and 98.23% on these datasets, respectively. Comparative experiments reveal that the SCAF-Module surpasses several advanced models, including the baseline Oriented R-CNN. Additionally, ablation studies highlight the significance of the spatial and channel attention mechanisms and the impact of rotated convolutions on detection performance. The SCAF-Module presents a robust and adaptable framework for remote sensing object detection, offering significant improvements over existing methods. This work paves the way for further optimization and application of the module in other challenging remote sensing tasks.
+
+Keywords-Remote Sensing, Object Detection, Attention Mechanism, Neural Networks
+
+§ I. INTRODUCTION
+
+Remote sensing imagery plays a pivotal role in a wide array of applications, including environmental monitoring, urban planning, disaster management, and agricultural assessment. These applications demand accurate and efficient object detection methods to extract meaningful information from complex and large-scale images. However, detecting objects in remote sensing images presents unique challenges compared to natural image datasets. The diverse scales, orientations, and dense distribution of objects within cluttered backgrounds significantly hinder the performance of traditional detection algorithms.
+
+Conventional object detection models, such as YOLO [1] and Faster R-CNN [2], have achieved significant success in natural images. However, these models often encounter limitations when applied to remote sensing imagery. Specifically, traditional methods struggle to manage objects with varying scales, accurately detect objects with arbitrary orientations, and adapt to the complex background clutter typical in remote sensing environments. Additionally, most conventional models rely on axis-aligned bounding boxes, which are not well-suited for the detection of oriented objects, leading to reduced accuracy. Several approaches have been proposed to address these issues, such as incorporating rotated bounding boxes and employing multi-scale feature extraction techniques. However, these methods often involve trade-offs between computational efficiency and detection accuracy, and may still fall short in environments with highly varied object scales and orientations. Moreover, these models' integration of attention mechanisms has been relatively limited, often focusing on spatial or channel attention separately, rather than leveraging their combined potential.
+
+To overcome these challenges, we propose the Spatial Channel Attention Fusion Module (SCAF-Module). This module integrates multi-scale convolutions, adaptive rotated convolutions, and parallel spatial channel attention mechanisms to enhance detection accuracy. The multi-scale convolutions include a $3 \times 3$ rotated convolution, a $3 \times 3$ dilated rotated convolution, and a $5 \times 5$ dilated convolution, each contributing to the detection of objects at various scales and orientations. The spatial and channel attention mechanisms further refine these features, allowing the model to selectively focus on important regions and adapt to different object characteristics.
+
+We define multi-scale convolution as the application of convolutional layers with different receptive fields, designed to capture features at various scales [3]. Adaptive rotated convolution [4] is defined as a type of convolution that adapts to the orientation of objects, enhancing the model's sensitivity to rotated objects. The Spatial Channel Attention Fusion is a technique that enhances feature representation by focusing on significant spatial regions and feature channels [5].
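The scale coverage of these multi-scale branches can be made concrete with the standard effective-kernel-size formula. The sketch below applies it to a plain $3 \times 3$ kernel and to the module's $3 \times 3$ dilation-2 branch; the dilation-2 value for a $5 \times 5$ kernel is shown only for comparison and is not a claim about the module's $5 \times 5$ branch.

```python
def effective_kernel(k, d):
    """Effective receptive field of a k x k convolution with dilation d:
    k_eff = k + (k - 1) * (d - 1)."""
    return k + (k - 1) * (d - 1)

# A dilated 3x3 kernel still has only 9 weights but sees a wider window:
print(effective_kernel(3, 1))  # 3: plain 3x3 branch
print(effective_kernel(3, 2))  # 5: 3x3 dilated branch covers a 5x5 window
print(effective_kernel(5, 2))  # 9: comparison value for a 5x5 kernel at d=2
```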
+
+Our contributions are as follows:
+
+ * Multi-scale and Adaptive Rotated Convolutions: By incorporating convolutions with different receptive fields and adaptive rotated convolutions, the SCAF-Module effectively captures objects of varying scales and orientations.
+
+ * Spatial and Channel Attention Mechanisms: These mechanisms enhance the model's ability to focus on significant regions and channels, improving detection performance.
+
+ * Comprehensive Evaluation: Extensive experiments on the DOTA-v1.0 and HRSC2016 datasets demonstrate the SCAF-Module's effectiveness, achieving mAP scores of 80.94% and 98.23%, respectively.
+
+This work was supported by the National Natural Science Foundation of China under Grant No. 61702340.
+
The remainder of this paper is organized as follows: Section II reviews related work in remote sensing object detection. Section III details the proposed SCAF-Module and its integration into the backbone network. Section IV presents the experiment setup, results, and ablation studies. Finally, Section V concludes the paper and discusses future work.
+
+§ II. RELATED WORK
+
+§ A. REMOTE SENSING OBJECT DETECTION FRAMEWORKS
+
+Object detection in remote sensing images has garnered significant attention due to its critical applications in areas such as urban planning, disaster management, and environmental monitoring. Traditional object detection frameworks designed for natural images, such as YOLO and Faster R-CNN, have been adapted to remote sensing scenarios. However, these frameworks face challenges unique to remote sensing images, such as the need to detect objects at various orientations and scales. Consequently, specialized frameworks have been developed to address these challenges, focusing on rotated object detection, multi-scale feature extraction, and robust performance in diverse and cluttered environments.
+
+To mitigate the abundance of rotated anchors and to minimize the disparity between the feature representations and the actual objects, Ding et al. [6] have introduced the RoI transformer. This technique, which extracts rotated RoIs from the horizontal ones yielded by the RPN, substantially enhances the precision of detecting objects with orientation. Nonetheless, the incorporation of fully-connected layers and the RoI alignment process during the learning phase adds a layer of complexity and computational demands to the network.
+
+To tackle the detection of small-scale, densely packed, and rotated objects, Yang et al. [7] have crafted an oriented object detection approach that integrates with the established Faster R-CNN framework. Additionally, a novel representation for oriented objects, known as gliding vertexes [8], has been put forward. This method refines the detection process by acquiring four vertex gliding offsets from the regression component of the Faster R-CNN architecture.
+
Despite these advancements, the reliance on horizontal RoIs for classification and oriented bounding box regression in these methods leads to significant misalignment issues between the objects and their corresponding features. Furthermore, various studies have delved into one-stage or anchor-free oriented object detection frameworks, which forgo the need for region proposal generation and RoI alignment, directly outputting object classes and oriented bounding boxes. For instance, a refined one-stage oriented object detector [9] has been proposed, featuring two pivotal enhancements: feature refinement and progressive regression, addressing the misalignment of features. A new label assignment strategy for one-stage oriented object detection, inspired by RetinaNet, dynamically assigns anchors as either positive or negative through an innovative matching approach. A single-shot alignment network (S${}^{2}$ANet) [10] has been introduced for oriented object detection, focusing on harmonizing the classification score with location precision through deep feature alignment. Lastly, a dynamic refinement network (DRN) [11] has been conceptualized for oriented object detection, leveraging the anchor-free detection approach of CenterNet [12].
+
+§ B. ADDRESSING METRIC AND LOSS INCONSISTENCY
+
+The issue of inconsistency between metrics and loss functions is prevalent in horizontal bounding box object detection and becomes even more pronounced in remote sensing object detection due to the introduction of angle parameters. In horizontal bounding box detection, new IoU (Intersection over Union) calculation methods like DIoU (Distance Intersection over Union) [13] and GIoU (Generalized Intersection over Union) [14] have been proposed to alleviate inconsistency problems. However, these methods are non-differentiable and thus not directly applicable to remote sensing object detection.
+
+Existing solutions to inconsistency in remote sensing object detection are limited, primarily focusing on designing new loss functions. These can be categorized into bounding box-based, pixel-based, and Gaussian distribution-based loss functions. Most current detection methods calculate the IoU of two inclined bounding boxes, often using smooth L1 as the regression loss function. However, for near-square targets, high IoU can still result in a significant loss. To address this, Yang et al. [7] proposed the IoU-smooth L1 loss, which combines IoU and smooth L1 to mitigate the problem. The overlapping forms of two inclined bounding boxes vary greatly. Zheng et al. [15] addressed this by proposing a rotation-robust IoU (RIoU) calculation method for $3\mathrm{D}$ object detection, which can also be applied to 2D rotated object detection. This method defines a pair of projected rectangles to calculate the overlap area, allowing for the regression of bounding boxes at any angle. For anchor-free detection methods, Guo et al. [16] proposed using a convex hull formed by a set of irregular points to represent each rotated target, then optimizing the detector using a convex hull-based CIoU (Complete Intersection over Union) loss. Additionally, the smooth L1 loss is insensitive to large aspect ratio targets. To address this, Chen et al. [17] proposed PIoU (Pixels Intersection over Union) loss, which determines whether pixels are within the rotated box and calculates the rotated IoU by accumulating these pixels. This loss function can be applied to both anchor-based and anchor-free frameworks, though its accuracy needs improvement.
+
+§ C. GAUSSIAN-BASED LOSS
+
+Recently, methods based on 2D Gaussian distributions have garnered significant attention. Yang et al. [18] analyzed the impact of angle differences, center point deviations, and different aspect ratios between rotated candidate boxes and ground truth on loss function changes. They designed a new loss function based on Gaussian Wasserstein distance. The approach involves converting rotated bounding boxes to 2D Gaussian distributions, calculating the Gaussian Wasserstein distance between the distributions of the ground truth and predicted boxes to derive the new loss function. However, this method lacks scale invariance, and optimizing only the rotation center can lead to positional deviations in the detection results. To address the scale variation issue brought by Gaussian Wasserstein distance, KL divergence [19] has been used as a substitute for loss calculation. This method, similar to Gaussian Wasserstein distance, derives a theoretical explanation for selecting distribution distance metrics to maintain detection accuracy and scale invariance after transforming parameters into 2D Gaussian distributions. Both loss functions introduce additional hyperparameters, but the key to maintaining consistency between evaluation and loss is ensuring their trends remain consistent. Inspired by Kalman filtering, the KFIoU (Kalman Filter Intersection over Union) loss [20] was proposed. The basic steps involve modeling the ground truth and predicted bounding boxes as Gaussian distributions, aligning the center points of the two distributions, obtaining the Gaussian distribution of the overlapping area through Kalman filtering, and converting it back to a rotated bounding box. This approach approximates the rotated IoU.
+
+§ D. BACKBONE NETWORK DESIGN
+
+Designing an effective backbone network for remote sensing images is crucial due to the varying scales and orientations of objects. Li et al. [3] introduced a strategy incorporating large kernel convolutions with different receptive fields into the backbone network. This approach dynamically adjusts the receptive fields to capture features at various scales. However, large kernel convolutions may lead to information loss for small objects, as their large receptive fields might cover multiple objects or noise regions. Pu et al. [4] employed adaptive rotated convolutions, rotating the convolutional kernels to achieve rotational sampling. While effective, rotating large kernels significantly increases computational complexity without a corresponding improvement in accuracy. This trade-off highlights the need for a balanced approach that can capture features at different scales and orientations without excessive computational cost.
+
+§ E. ATTENTION MECHANISMS
+
+Attention mechanisms serve as a straightforward yet potent means of augmenting neural network representations across a multitude of applications. The channel-wise attention mechanism, exemplified by the SE block [5], leverages the insights from global averaging to recalibrate the importance of feature channels. Concurrently, spatial attention schemes such as those found in GENet [21], GCNet [22], and SGE [23], fortify the network's capacity to incorporate contextual cues through spatial filtering techniques. The CBAM [24] and BAM [25] architectures amalgamate channel and spatial attention, capitalizing on the strengths of both to refine feature representation.
+
+Our proposed method focuses on the integration of backbone network design and attention mechanisms. Li et al. [3] incorporated prior knowledge from remote sensing images to develop large kernel selective convolutions; however, these large kernel convolutions can reduce the network's sensitivity to small objects, which are prevalent in remote sensing imagery. Pu et al. [4] designed rotated convolutions that are sensitive to object angles, yet this approach introduces significant computational overhead. In contrast, our work combines both strategies by implementing spatial attention, utilizing smaller rotated convolutions alongside larger standard convolutions. This approach effectively balances the detection of small and large objects without substantially increasing computational costs. Additionally, channel attention mechanisms [5] are employed to suppress irrelevant features while enhancing the importance of relevant ones, thereby improving overall detection performance.
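As a concrete illustration of the channel attention mechanism [5] employed above, an SE-style recalibration can be sketched as a forward pass in NumPy. The shapes, random weights, and reduction ratio below are illustrative assumptions; a real implementation would use learned weights inside a deep-learning framework.

```python
import numpy as np

def se_block(x, w1, w2):
    """Forward pass of an SE-style channel attention block [5].
    x  : feature map, shape (C, H, W)
    w1 : squeeze weights, shape (C // r, C)
    w2 : excitation weights, shape (C, C // r)
    """
    z = x.mean(axis=(1, 2))                  # squeeze: global average pool -> (C,)
    s = np.maximum(w1 @ z, 0.0)              # channel reduction + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))      # expansion + sigmoid -> weights in (0, 1)
    return x * s[:, None, None]              # recalibrate each channel

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
x = rng.standard_normal((C, H, W))
y = se_block(x, rng.standard_normal((C // r, C)), rng.standard_normal((C, C // r)))
print(y.shape)  # (8, 4, 4): same shape, per-channel rescaling only
```

Because the sigmoid output multiplies whole channels, irrelevant channels are suppressed and informative ones emphasized, which is exactly the recalibration role channel attention plays in the SCAF-Module.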
+
+§ III. PROPOSED METHOD
+
+In this section, we introduce the architecture and components of the Spatial Channel Attention Fusion Module (SCAF-Module), designed to enhance remote sensing object detection by integrating multi-scale convolutions, adaptive rotated convolutions, and parallel spatial channel attention mechanisms. We also detail the overall backbone structure, which incorporates the SCAF-Module into a hierarchical framework to effectively capture and represent features at multiple levels. The SCAF-Module is built upon the following assumptions: (1) Objects in remote sensing images vary greatly in scale and orientation [20], necessitating a detection method that can handle these variations effectively. (2) Multi-scale [3] and adaptive rotated convolutions [4] are effective in capturing detailed features across different scales and orientations. (3) Spatial and channel attention mechanisms [5] can further enhance the detection performance by emphasizing important features.
+
+§ A. SPATIAL CHANNEL ATTENTION FUSION MODULE
+
The Spatial Channel Attention Fusion Module (SCAF-Module) is designed to enhance feature representation by integrating spatial [3] and channel [5] attention mechanisms with multi-scale and rotated convolutions [4]. This module aims to address the challenges posed by the diverse scales and orientations of objects in remote sensing images. The overall structure of the SCAF-Module is depicted in Fig. 1.
+
+§ 1) MULTI-SCALE CONVOLUTIONS
+
+The SCAF-Module begins by processing the input feature map $X$ through three convolutional layers, each with a different receptive field to capture features at various scales. The three convolutional layers include:
+
 * $3 \times 3$ Rotated Convolution: Captures fine-grained details and small-scale features with enhanced sensitivity to object orientations, improving detection accuracy for rotated objects.
+
+ * $3 \times 3$ Rotated Dilated Convolution (dilation rate = 2): Utilizes a dilation rate of 2 to expand the receptive field without increasing the number of parameters, capturing medium-scale features while maintaining orientation adaptability.
+
+ * $5 \times 5$ Standard Dilated Convolution (dilation rate = 2): Further expands the receptive field to capture larger-scale features, ensuring comprehensive feature extraction across various object scales.
+
+These layers generate three feature maps ${F}_{1},{F}_{2}$ , and ${F}_{3}$ respectively. The use of different receptive fields allows the module to balance sensitivity to both large and small objects, which is crucial for the diverse object sizes found in remote sensing images.
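
The effective receptive field of a single dilated convolution follows the standard relation $r = d(k-1)+1$; a quick sketch (the formula is standard practice, not taken from the text) confirms that the three branches above cover $3 \times 3$, $5 \times 5$, and $9 \times 9$ regions respectively:

```python
def receptive_field(kernel_size: int, dilation: int = 1) -> int:
    """Effective receptive field of one dilated convolution: d*(k-1)+1."""
    return dilation * (kernel_size - 1) + 1

# The three branches described above (sizes and dilation rates from the text):
branches = {
    "3x3 rotated":           receptive_field(3, 1),
    "3x3 rotated, dilated":  receptive_field(3, 2),
    "5x5 standard, dilated": receptive_field(5, 2),
}
for name, rf in branches.items():
    print(f"{name}: {rf}x{rf}")
```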
+
+§ 2) ROTATED CONVOLUTION
+
+Rotated convolution is designed to address the unique challenges posed by remote sensing images, particularly the diverse orientations of objects. Traditional convolutional layers are limited by their fixed orientation, which can hinder the model's ability to accurately capture features of rotated objects. The rotated convolution mechanism introduces a way to dynamically adjust the orientation of the convolutional filters, enabling better alignment with the objects in the input image. The primary advantage of rotated convolution is its ability to rotate the convolutional kernels to match the orientation of the target objects. This is particularly beneficial for remote sensing applications where objects such as buildings, vehicles, and agricultural fields can appear at various angles. By aligning the convolutional filters with the orientation of these objects, the rotated convolution can more effectively capture the relevant features, leading to improved detection accuracy. The process of rotated convolution involves the following steps:
+
Fig. 1. The overall structure of the SCAF-Module.
+
+ * Kernel Rotation: The convolutional kernel is treated as a set of sampling points in the kernel space. These sampling points are then rotated by an angle $\theta$ , which is dynamically determined based on the input feature map. This rotation allows the kernel to align with the orientation of the objects in the image.
+
+ * Bilinear Interpolation: After rotating the sampling points, bilinear interpolation is used to map the original convolution parameters to the new rotated positions. This ensures that the rotated kernel retains the characteristics of the original kernel while adapting to the new orientation.
+
+ * Dynamic Angle Generation: The rotation angle $\theta$ is not fixed but is generated dynamically by a routing function based on the input features. This allows the model to adapt to different orientations present in the input image, providing a flexible and robust solution for capturing rotated objects.
+
+Rotated convolution addresses the limitations of traditional convolutional layers by introducing orientation adaptability. This innovation is crucial for improving the accuracy of object detection in remote sensing images, where the orientation of objects is often varied and unpredictable. By aligning the convolutional filters with the objects' orientations, the SCAF-Module can more effectively capture and represent the relevant features, leading to superior detection performance.
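
The first two steps above can be sketched in NumPy as follows. This is a minimal illustration of kernel-space rotation with bilinear interpolation, not the authors' implementation; the third step (generating $\theta$ dynamically via a routing function) is omitted, so $\theta$ is simply passed in:

```python
import numpy as np

def rotate_kernel(weight: np.ndarray, theta: float) -> np.ndarray:
    """Resample a k x k kernel at sampling points rotated by `theta`
    (radians) about the kernel centre, using bilinear interpolation;
    points that rotate outside the kernel read zero."""
    k = weight.shape[0]
    c = (k - 1) / 2.0
    ys, xs = np.meshgrid(np.arange(k), np.arange(k), indexing="ij")
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    # step 1: rotate each sampling point of the kernel grid
    y = cos_t * (ys - c) - sin_t * (xs - c) + c
    x = sin_t * (ys - c) + cos_t * (xs - c) + c
    # step 2: bilinear interpolation of the original weights at (y, x)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    out = np.zeros_like(weight, dtype=float)
    for dy in (0, 1):
        for dx in (0, 1):
            yy, xx = y0 + dy, x0 + dx
            w = (1 - np.abs(y - yy)) * (1 - np.abs(x - xx))
            inside = (yy >= 0) & (yy < k) & (xx >= 0) & (xx < k)
            vals = weight[np.clip(yy, 0, k - 1), np.clip(xx, 0, k - 1)]
            out += np.where(inside, w * vals, 0.0)
    return out

kernel = np.arange(9, dtype=float).reshape(3, 3)
rot90 = rotate_kernel(kernel, np.pi / 2)  # quarter-turn resampling
```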
+
+§ 3) SPATIAL ATTENTION MECHANISM
+
+The spatial attention mechanism is designed to focus on important regions within the feature maps, addressing the variability in the shapes and scales of objects. As shown in Fig. 2, the spatial attention mechanism operates as follows:
+
+ * Concatenation: The feature maps ${F}_{1},{F}_{2}$ , and ${F}_{3}$ are concatenated along the channel dimension to form a combined feature map $F$ .
+
+ * Pooling: The combined feature map $F$ undergoes average pooling and max pooling along the channel dimension, producing the average feature map ${AF}$ and maximum feature map ${MF}$ .
+
+ * Fusion: The pooled feature maps ${AF}$ and ${MF}$ are concatenated along the channel dimension to form the fused feature map ${AMF}$ .
+
 * Convolution and Activation: The fused feature map ${AMF}$ is passed through a convolutional layer and a sigmoid activation function to produce the spatial attention map ${SF}$ :
+
+$$
+{SF} = \sigma \left( {\operatorname{Conv}\left( \left\lbrack {\operatorname{AvgPool}\left( F\right) ,\operatorname{MaxPool}\left( F\right) }\right\rbrack \right) }\right) , \tag{1}
+$$
+
+where $\sigma$ denotes the sigmoid activation function, which maps the output to a range of $\left\lbrack {0,1}\right\rbrack$ , serving as the spatial attention weights. This mechanism enables the model to selectively focus on significant regions in the feature maps, enhancing the detection of objects with varying shapes and scales.
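
Eq. (1) can be sketched directly in NumPy. The $7 \times 7$ convolution kernel used here is an assumption (a common choice in spatial attention designs), as the text does not specify the kernel size:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(F, conv_w, conv_b=0.0):
    """Eq. (1): SF = sigmoid(Conv([AvgPool(F), MaxPool(F)])).
    F: (C, H, W); conv_w: (2, k, k) kernel mixing the two pooled maps."""
    avg = F.mean(axis=0)       # channel-wise average pooling -> AF, (H, W)
    mx = F.max(axis=0)         # channel-wise max pooling     -> MF, (H, W)
    amf = np.stack([avg, mx])  # fused feature map AMF, shape (2, H, W)
    k = conv_w.shape[-1]
    p = k // 2
    padded = np.pad(amf, ((0, 0), (p, p), (p, p)))
    H, W = avg.shape
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = (conv_w * padded[:, i:i + k, j:j + k]).sum() + conv_b
    return sigmoid(out)        # attention weights in [0, 1]

rng = np.random.default_rng(0)
F = rng.normal(size=(8, 6, 6))                       # toy feature map
SF = spatial_attention(F, rng.normal(size=(2, 7, 7)) * 0.1)
```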
+
+§ 4) CHANNEL ATTENTION MECHANISM
+
+The channel attention mechanism dynamically adjusts the weights of different channels, emphasizing channels that are more informative for the task at hand. This mechanism is crucial for optimizing the feature representation by selectively enhancing the most relevant channels based on the global context of the feature map. As shown in Fig. 3, the channel attention mechanism operates as follows:
+
Fig. 2. Spatial attention mechanism.
+
+ * Global Average Pooling: Each feature map ${F}_{i}$ undergoes global average pooling to capture the global context of the feature map, resulting in a descriptor vector $z$ . The global average pooling operation aggregates the spatial information across the entire feature map, producing a channel-wise descriptor that summarizes the global context.
+
+ * Squeeze and Excitation: The descriptor vector $z$ is passed through two convolutional layers. The first convolutional layer reduces the number of channels by a ratio (typically 16), and the second convolutional layer restores the original number of channels. This sequence of operations is designed to learn the importance of each channel dynamically:
+
+$$
+{s}_{c} = \sigma \left( {{\operatorname{Conv}}_{2}\left( {\operatorname{ReLU}\left( {{\operatorname{Conv}}_{1}\left( z\right) }\right) }\right) }\right) , \tag{2}
+$$
+
+where ${\mathrm{{Conv}}}_{1}$ and ${\mathrm{{Conv}}}_{2}$ are the convolutional layers used for compression and excitation, respectively, and $\sigma$ denotes the sigmoid activation function.
+
+ * Reweighting: The learned channel weights ${s}_{c}$ are applied to the corresponding feature map ${F}_{i}$ , producing the channel-weighted feature map $C{F}_{i}$ :
+
+$$
+C{F}_{i} = {F}_{i} \odot {s}_{c}, \tag{3}
+$$
+
+where $\odot$ denotes the element-wise multiplication.
+
+The core idea behind this mechanism is to use global information to recalibrate the feature map in a channel-wise manner, enhancing the model's ability to focus on the most informative channels and improving the overall feature representation. By integrating this SE-based channel attention mechanism, the SCAF-Module can effectively capture and utilize the global context, leading to improved performance in remote sensing object detection tasks.
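
A minimal NumPy sketch of Eqs. (2)-(3), with the $1 \times 1$ convolutions written as matrix multiplications on the pooled descriptor; the weight shapes and the reduction ratio used here are illustrative, not the paper's trained values:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(F, W1, W2):
    """Eqs. (2)-(3): SE-style recalibration of a (C, H, W) feature map.
    W1: (C/r, C) compression weights, W2: (C, C/r) excitation weights."""
    z = F.mean(axis=(1, 2))                    # global average pooling -> (C,)
    s = sigmoid(W2 @ np.maximum(W1 @ z, 0.0))  # Conv1 -> ReLU -> Conv2 -> sigma
    return F * s[:, None, None], s             # Eq. (3): CF = F ⊙ s_c

rng = np.random.default_rng(1)
C, r = 16, 4                                   # toy reduction ratio
F = rng.normal(size=(C, 5, 5))
CF, s = channel_attention(F,
                          rng.normal(size=(C // r, C)),
                          rng.normal(size=(C, C // r)))
```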
+
+§ 5) FEATURE FUSION
+
+The outputs from the spatial and channel attention mechanisms are element-wise multiplied and summed to produce a new feature map ${FF}$ :
+
+$$
+{FF} = \mathop{\sum }\limits_{{i = 1}}^{3}S{F}_{i} \odot C{F}_{i}. \tag{4}
+$$
+
+where $\odot$ denotes the element-wise multiplication. This fusion step integrates spatial and channel attention, enhancing the feature representation by focusing on both important regions and informative channels. Finally, the fused feature map ${FF}$ is element-wise multiplied with the input feature map $X$ to produce the final output $Y$ of the module:
+
+$$
+Y = X \odot {FF}\text{ . } \tag{5}
+$$
+
+This final step ensures that the enhanced features are integrated with the original input, maintaining the integrity of the input information while incorporating the attention mechanisms' enhancements. The SCAF-Module, through its combination of multi-scale convolutions, spatial attention, and channel attention, effectively addresses the challenges of detecting objects with varying scales and orientations in remote sensing images.
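
Eqs. (4)-(5) reduce to simple broadcasting once the attention maps are available; a toy sketch with random stand-ins for the per-branch maps $SF_i$ and $CF_i$:

```python
import numpy as np

rng = np.random.default_rng(2)
C, H, W = 4, 6, 6
X = rng.normal(size=(C, H, W))    # module input
SF = rng.uniform(size=(3, H, W))  # spatial attention maps, one per branch
CF = rng.normal(size=(3, C, H, W))  # channel-weighted feature maps

# Eq. (4): FF = sum_i SF_i ⊙ CF_i  (each SF_i broadcasts over channels)
FF = sum(SF[i][None, :, :] * CF[i] for i in range(3))
# Eq. (5): Y = X ⊙ FF
Y = X * FF
```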
+
+§ B. BACKBONE STRUCTURE
+
+The backbone structure of the SCAF-Module is meticulously designed to effectively capture and process the diverse features present in remote sensing images. This section provides an in-depth look at the backbone architecture, consisting of multiple stages, each composed of several blocks that integrate the SCAF-Module to enhance feature representation.
+
+§ 1) BLOCK STRUCTURE
+
+Each block within the backbone is constructed to maintain the shape and channel dimensions of the input while enhancing the feature representation through a series of operations. The block structure is as follows:
+
+ * Normalization 1: The input feature map undergoes a normalization process to stabilize and accelerate the training process.
+
+ * Fully Connected Layer: A fully connected (FC) layer is applied to the normalized features, transforming them into a different feature space.
+
Fig. 3. Channel attention mechanism.
+
+ * GELU Activation: The output from the fully connected layer is passed through a GELU (Gaussian Error Linear Unit) activation function to introduce non-linearity.
+
+ * SCAF-Module: The activated features are then processed by the SCAF-Module, which applies multi-scale convolutions, spatial attention, and channel attention to enhance the feature representation.
+
+ * Fully Connected Layer: Another fully connected layer is applied to the features output by the SCAF-Module.
+
+ * Normalization 2: A second normalization layer is used to further stabilize the feature representations.
+
+ * MLP: Finally, the features pass through a multi-layer perceptron (MLP) for additional transformation and refinement.
+
+The block incorporates two residual connections to preserve the original input features and prevent the degradation problem commonly encountered in deep networks. The first residual connection adds the input to the feature map before the second normalization step, while the second residual connection adds the input to the final output of the block, ensuring that the input-output shape and channel dimensions remain unchanged. The block structure is illustrated in Fig. 4.
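
The block's forward pass, including both residual connections, can be sketched as follows. The learned sub-modules are replaced by identity stand-ins, so this only illustrates the wiring and shape preservation, not the actual layers:

```python
import numpy as np

def layer_norm(x):
    # normalise over the channel (last) axis
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + 1e-6)

def gelu(x):
    # tanh approximation of the Gaussian Error Linear Unit
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def block(x, fc1, scaf, fc2, mlp):
    """One backbone block: Norm -> FC -> GELU -> SCAF -> FC with a residual,
    then Norm -> MLP with a second residual. Shapes are preserved throughout."""
    h = fc2(scaf(gelu(fc1(layer_norm(x)))))
    x = x + h                   # first residual: input added before Norm 2
    x = x + mlp(layer_norm(x))  # second residual: input added to final output
    return x

# toy (tokens, channels) input; identity stand-ins for the learned sub-modules
x = np.random.default_rng(3).normal(size=(10, 8))
out = block(x, fc1=lambda t: t, scaf=lambda t: t, fc2=lambda t: t,
            mlp=lambda t: t)
```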
+
+§ 2) STAGE STRUCTURE
+
+The backbone is organized into multiple stages, each consisting of several blocks to progressively extract and refine features at different scales and resolutions. Each stage operates as follows:
+
+ * Shape and Channel Adjustment: At the beginning of each stage, a convolutional layer adjusts the shape and channel dimensions of the input feature map to prepare it for further processing.
+
+ * Repeated Blocks: The adjusted feature map is then passed through a series of blocks. Each block applies the operations described above, progressively enhancing the feature representation.
+
+ * Normalization: The output of the final block in each stage undergoes normalization to ensure stable feature distribution before passing to the next stage.
+
+The multi-stage structure allows the backbone to capture features at varying levels of abstraction, from low-level edges and textures to high-level semantic information.
+
§ 3) INTEGRATION WITH ORIENTED R-CNN
+
+The backbone is integrated into the Oriented R-CNN framework, replacing the original ResNet backbone. The Oriented R-CNN is specifically designed for object detection in remote sensing images, where objects often appear in arbitrary orientations. By incorporating the SCAF-Module-based backbone, the Oriented R-CNN benefits from improved feature extraction capabilities, particularly in handling the diverse scales and orientations of objects in remote sensing imagery.
+
+The backbone structure leveraging the SCAF-Module significantly enhances the capability of the Oriented R-CNN to accurately detect and classify objects in remote sensing images. The multi-scale convolutions, combined with spatial and channel attention mechanisms, ensure that the model captures a comprehensive set of features, leading to superior detection performance.
+
+§ IV. EXPERIMENTS
+
+In this section, we present the experiments conducted to evaluate the performance of the proposed SCAF-Module. We detail the datasets used, the evaluation metrics, and the experiment setup. Finally, we present and analyze the results, demonstrating the effectiveness of our approach.
+
+§ A. DATASETS AND EVALUATION
+
+We evaluate the SCAF-Module on two widely used remote sensing image datasets: DOTA-v1.0 and HRSC2016. These datasets are chosen for their diversity in object scales, orientations, and complexity of scenes, which pose significant challenges for object detection models.
+
+DOTA-v1.0 [26]: The following fifteen object classes are covered in this dataset: Plane (PL), Baseball diamond (BD), Bridge (BR), Ground track field (GTF), Small vehicle (SV), Large vehicle (LV), Ship (SH), Tennis court (TC), Basketball court (BC), Storage tank (ST), Soccer-ball field (SBF), Roundabout (RA), Harbor (HA), Swimming pool (SP), and Helicopter (HC). The dataset contains a wide variety of object scales and orientations, making it a suitable benchmark for evaluating our model's ability to handle diverse object characteristics. Due to the large size of images, offline data augmentation is typically used. For single-scale training and testing, images are cropped to ${1024} \times {1024}$ patches with 200 pixels overlap. For multi-scale training and testing, images are first resized to 0.5, 1.0, and 1.5 times their original size, and then cropped to ${1024} \times {1024}$ patches with 500 pixels overlap.
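
The cropping protocol above amounts to computing the top-left offsets of the sliding windows along each image dimension. A sketch, under the common convention (assumed here, not stated in the text) that the final window is shifted back to end exactly at the image border:

```python
def patch_starts(size: int, patch: int = 1024, overlap: int = 200):
    """Top-left offsets for cropping a `size`-pixel dimension into
    `patch`-pixel windows with the given overlap; the last window is
    shifted back so it ends at the image border."""
    if size <= patch:
        return [0]
    stride = patch - overlap
    starts = list(range(0, size - patch, stride))
    starts.append(size - patch)  # final, border-aligned window
    return starts

# e.g. one 4000-pixel dimension of a DOTA image, single-scale protocol
print(patch_starts(4000))  # stride 824, last window pinned to the border
```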
+
+HRSC2016 [27]: This dataset focuses on ship detection and includes images captured from various angles and distances, with ships annotated using oriented bounding boxes. The variability in ship sizes and orientations makes this dataset an excellent testbed for our model's robustness.
+
Fig. 4. Block structure.
+
The primary evaluation metric used in the experiments is the mean Average Precision (mAP), which measures the precision-recall performance across different object categories. We report the mAP scores to provide a comprehensive assessment of our model's detection capabilities.
+
+§ B. EXPERIMENT SETUP
+
The experiments are conducted using the MMRotate framework on an NVIDIA GeForce RTX 3090 GPU with a batch size of 2 for training and evaluation. The optimizer used is AdamW with a learning rate of $5 \times 10^{-5}$, ${\beta}_{1} = 0.9$, ${\beta}_{2} = 0.999$, and a weight decay of 0.05. The learning rate follows a step policy with an initial linear warmup for 500 iterations starting at a third of the base learning rate, then decaying at epochs 8 and 11. Image normalization is applied with mean values [123.675, 116.28, 103.53] and standard deviations [58.395, 57.12, 57.375]. The training pipeline includes resizing, random flipping (horizontal, vertical, diagonal), random rotation, normalization, padding, and data collection. The test pipeline involves multi-scale augmentation and normalization.
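
The step policy described here can be written as a small function. The decay factor of 0.1 used below is the MMRotate/MMDetection convention and is an assumption, since the text gives only the decay epochs:

```python
def learning_rate(it, epoch, base_lr=5e-5, warmup_iters=500,
                  warmup_start_frac=1 / 3, decay_epochs=(8, 11), gamma=0.1):
    """Warmup + step LR policy from the setup above. `gamma` = 0.1 is an
    assumed value (the conventional MMRotate step factor)."""
    # epoch-level step decay at epochs 8 and 11
    lr = base_lr * gamma ** sum(epoch >= e for e in decay_epochs)
    # linear warmup over the first 500 iterations, starting at base_lr / 3
    if it < warmup_iters:
        lr *= warmup_start_frac + (1 - warmup_start_frac) * it / warmup_iters
    return lr

print(learning_rate(0, 0))     # warmup start: base_lr / 3
print(learning_rate(5000, 8))  # after the first decay step
```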
+
+The SCAF-Module is integrated into the Oriented R-CNN framework, replacing the original ResNet backbone, to evaluate its performance in detecting objects with arbitrary orientations in remote sensing images. For the ablation experiments, the backbone is not pre-trained on ImageNet to enhance experimental efficiency. In contrast, for the comparative experiments, the backbone undergoes pre-training on ImageNet for 300 epochs before being fine-tuned on the DOTA-v1.0 and HRSC2016 datasets to achieve higher performance.
+
+§ C. COMPARATIVE EXPERIMENTS
+
+In this section, we evaluate the performance of the SCAF-Module against six advanced models, including the baseline Oriented R-CNN, using two widely adopted remote sensing image datasets: DOTA-v1.0 and HRSC2016. The comparison focuses on mean Average Precision (mAP) as the primary metric.
+
§ 1) DOTA-V1.0 DATASET
+
+The DOTA-v1.0 dataset is a standard benchmark for remote sensing object detection. We conducted experiments using both single-scale and multi-scale training and testing protocols to assess the robustness of our model.
+
For the single-scale evaluation, large images from the DOTA-v1.0 dataset were divided into $1024 \times 1024$ patches with a 200-pixel overlap. The results, summarized in Table I, show that the SCAF-Module achieved an mAP of 78.96%, outperforming all compared models. This demonstrates the module's ability to effectively capture fine-grained details and accurately detect objects at a fixed scale. In the multi-scale evaluation, images were rescaled to 0.5, 1.0, and 1.5 times their original sizes, with a 500-pixel overlap during patching. As shown in Table II, the SCAF-Module achieved an mAP of 80.94%, again surpassing the other models. This result highlights the module's robustness in adapting to various object scales, a critical requirement in remote sensing tasks.
+
Fig. 5. Detection Results on DOTA-V1.0 Dataset.
+
+These evaluations on the DOTA-v1.0 dataset confirm the superior performance of the SCAF-Module, particularly in its ability to enhance detection accuracy across both single and multi-scale scenarios. To visually represent the effectiveness of our model on the DOTA-v1.0 dataset, we present a series of detection result images in Fig. 5.
+
+§ 2) HRSC2016 DATASET
+
The HRSC2016 dataset focuses on ship detection, providing a rigorous test of model precision and robustness. We evaluated our model under the PASCAL VOC 2007 and VOC 2012 metrics to ensure a thorough assessment. As detailed in Table III, the SCAF-Module achieved mAP scores of 90.61% under the VOC 2007 metric and 98.23% under the VOC 2012 metric, marking a notable improvement over the other models. This performance can be attributed to the module's advanced attention mechanisms and multi-scale convolutional structure, which enhance its ability to detect ships with varying orientations, a frequent challenge in remote sensing imagery.
+
+Overall, the results from the HRSC2016 dataset further validate the effectiveness of the SCAF-Module in detecting objects with diverse scales and orientations, reinforcing its value as a robust tool for remote sensing object detection tasks.
+
+§ D. ABLATION STUDY
+
+To understand the contribution of each component in the SCAF-Module, we conduct ablation studies on the DOTA-v1.0 dataset. The goal is to analyze the effects of Spatial Attention, Channel Attention, their order, and the use of Rotated Convolutions on the overall performance of the model.
+
+§ CONTRIBUTION OF INDIVIDUAL COMPONENTS
+
We investigate the contributions of various components to the overall performance of our model. The baseline configuration employs multi-scale convolutional layers without any additional mechanisms. We then incrementally add the following components: spatial attention, channel attention, and rotated convolutions. All experiments are conducted using single-scale training and testing on the DOTA-v1.0 dataset.

The baseline consists of multi-scale convolutional layers without any attention mechanisms. This setup serves as the foundational model, capturing features at different scales. We first incorporate spatial attention into the baseline. The spatial attention mechanism enables the model to focus on important spatial regions, enhancing the detection of objects of varying shapes and scales within the image. Next, we add the channel attention mechanism to the model with spatial attention. This mechanism allows the model to emphasize important channels, improving the representation of semantic information critical for accurate object detection. Finally, we integrate rotated convolutions into the model. Rotated convolutions enhance the model's ability to capture features at various orientations, which is crucial for remote sensing images where objects often appear in arbitrary orientations.
+
+The results of these experiments are summarized in Table IV. Each row in the table shows the mAP achieved by the model with the addition of the respective component. The performance improvements with each added component demonstrate the effectiveness of both the rotated convolutions and the attention mechanisms in enhancing the detection capabilities of the model.
+
+§ 1) ORDER OF SPATIAL AND CHANNEL ATTENTION
+
+We also tested different sequences of applying Spatial and Channel Attention. The results of these experiments are presented in Table V.
+
+When comparing the sequences of applying Spatial and Channel Attention, it is observed that the parallel application of both attentions yields the best results. Sequential applications result in slightly lower mAP scores, indicating that the simultaneous focus on both spatial and channel aspects is more beneficial.
+
+§ 2) EFFECTS OF ROTATED CONVOLUTIONS
+
+Finally, we tested the impact of using Rotated Convolutions in different configurations. The results of these experiments are presented in Table VI.
+
TABLE I. AP FOR EACH CATEGORY AND OVERALL MAP ON DOTA-V1.0 (SINGLE-SCALE).

| Model | PL | BD | BR | GTF | SV | LV | SH | TC | BC | ST | SBF | RA | HA | SP | HC | mAP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GWD [18] | 88.92 | 77.08 | 45.91 | 69.30 | 72.52 | 64.05 | 76.33 | 90.87 | 79.18 | 80.45 | 57.67 | 64.36 | 63.60 | 64.75 | 48.24 | 69.55 |
| $\mathrm{R}^{3}$Det [9] | 89.30 | 73.36 | 45.10 | 71.21 | 76.51 | 74.01 | 81.03 | 90.89 | 79.01 | 83.54 | 59.37 | 63.47 | 63.04 | 65.93 | 37.02 | 70.19 |
| $\mathrm{S}^{2}$A-Net [10] | 88.70 | 81.41 | 54.28 | 69.75 | 78.04 | 78.23 | 80.54 | 90.69 | 84.75 | 86.22 | 65.03 | 65.81 | 76.16 | 73.37 | 58.86 | 76.11 |
| ReDet [28] | 88.79 | 82.64 | 53.97 | 74.00 | 78.13 | 84.06 | 88.04 | 90.89 | 87.78 | 85.75 | 61.76 | 60.39 | 75.96 | 68.07 | 63.59 | 76.25 |
| Oriented R-CNN [29] | 88.86 | 83.48 | 55.27 | 76.92 | 74.27 | 82.10 | 87.52 | 90.90 | 85.56 | 85.33 | 65.51 | 66.82 | 74.36 | 70.15 | 57.28 | 76.28 |
| LSKNet [3] | 89.78 | 81.24 | 54.09 | 75.96 | 79.31 | 85.13 | 88.49 | 90.90 | 87.41 | 84.87 | 64.12 | 64.31 | 77.03 | 78.22 | 67.02 | 77.86 |
| SCAF (ours) | 89.72 | 85.25 | 55.38 | 76.10 | 79.55 | 84.85 | 88.43 | 90.85 | 87.46 | 85.71 | 66.99 | 68.54 | 76.85 | 79.79 | 68.87 | 78.96 |
+
TABLE II. MAP ON DOTA-V1.0 (MULTI-SCALE).

| Model | mAP |
|---|---|
| $\mathrm{R}^{3}$Det | 76.47 |
| $\mathrm{S}^{2}$A-Net | 79.42 |
| ReDet | 79.87 |
| GWD | 80.23 |
| LSKNet | 80.32 |
| O-RCNN | 80.62 |
| SCAF (ours) | 80.94 |
+
TABLE III. MAP ON HRSC2016 (VOC 2007 AND VOC 2012).

| Model | mAP(07) | mAP(12) |
|---|---|---|
| $\mathrm{S}^{2}$A-Net | 90.17 | 95.01 |
| $\mathrm{R}^{3}$Det | 89.26 | 96.01 |
| GWD | 89.85 | 97.37 |
| O-RCNN | 90.50 | 97.60 |
| ReDet | 90.46 | 97.63 |
| LSKNet | 90.27 | 97.80 |
| SCAF (ours) | 90.61 | 98.23 |
+
TABLE IV. CONTRIBUTION OF INDIVIDUAL COMPONENTS.

| Configuration | mAP |
|---|---|
| Baseline | 67.62 |
| + Spatial Attention | 68.45 |
| + Channel Attention | 69.32 |
| + Rotated Convolution | 69.79 |
+
TABLE V. ORDER OF SPATIAL AND CHANNEL ATTENTION.

| Configuration | mAP |
|---|---|
| Spatial then Channel Attention | 67.77 |
| Channel then Spatial Attention | 68.44 |
| Parallel Spatial & Channel Attention | 69.32 |
+
TABLE VI. EFFECTS OF ROTATED CONVOLUTIONS.

| Configuration | mAP |
|---|---|
| All ordinary convolutions | 69.32 |
| The first convolution rotated | 69.59 |
| The first and second convolutions rotated | 69.79 |
| All convolutions rotated | 69.70 |
+
+In the experiments focusing on the Rotated Convolutions, replacing the first and second convolutions with rotated ones gives the best performance. Using only ordinary convolutions or replacing all convolutions with rotated ones results in lower mAP scores. This suggests that a balanced combination of ordinary and rotated convolutions is most effective for capturing diverse object orientations in remote sensing images.
+
+The ablation study demonstrates the effectiveness of each component within the SCAF-Module. Spatial Attention enhances the model's ability to focus on important regions within the image, Channel Attention dynamically adjusts the significance of different feature channels, and Rotated Convolutions improve the detection of objects with varying orientations. The combination of these components results in a significant performance boost, validating the design choices made in the development of the SCAF-Module.
+
+§ V. CONCLUSION
+
+In this paper, we introduced the Spatial Channel Attention Fusion Module (SCAF-Module), a novel approach designed to enhance remote sensing object detection. By integrating multi-scale convolutions, adaptive rotated convolutions, and parallel spatial channel attention mechanisms, our model effectively addresses the challenges posed by the diverse scales and orientations of objects in remote sensing images.
+
The experimental results on the DOTA-v1.0 and HRSC2016 datasets demonstrate the effectiveness of the SCAF-Module. Specifically, the module achieved mean Average Precision (mAP) scores of 80.94% and 98.23% on the DOTA-v1.0 and HRSC2016 datasets, respectively. These results underscore the adaptability and robustness of our approach in handling various object detection scenarios in remote sensing imagery. Furthermore, the ablation studies validate the individual contributions of the spatial and channel attention mechanisms, as well as the impact of rotated convolutions on improving detection accuracy. The comparative experiments show that the SCAF-Module outperforms several advanced models, including the baseline Oriented R-CNN, highlighting its superior performance.
+
+Overall, the SCAF-Module offers a significant advancement in remote sensing object detection by providing a more comprehensive and adaptable framework. Future work will focus on further optimizing the module and exploring its application in other challenging remote sensing tasks.
+
+§ ACKNOWLEDGMENT
+
+This work was supported by the National Natural Science Foundation of China under Grant No. 61702340.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/aBxc4ADTyx/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/aBxc4ADTyx/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..8991e10965d9263b66dcb9395be244045214c140
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/aBxc4ADTyx/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,205 @@
+# Design and Implementation of Telemedicine System Using Light Fidelity and PIC16F877A Microcontroller
+
M. R. Ezilarasan
Department of Electronics and Communication Engineering
Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology
Chennai, India
drezilarasan@veltech.edu.in

Man-Fai Leung
School of Computing and Information Science, Faculty of Science and Engineering
Anglia Ruskin University, Cambridge, United Kingdom
Man-fai.leung@aru.ac.uk

Xiangguang Dai*
Chongqing Engineering Research Center of Internet of Things and Intelligent Control Technology
Chongqing Three Gorges University, Chongqing, China
daiziangguang@163.com
+
Abstract- Medical body area networks currently use wireless communication technology to give patients and carers more freedom and convenience, and radio frequency (RF) is the prevailing medium in healthcare applications. However, the possibility of electromagnetic waves interfering with precision medical equipment still exists. This study uses the newly developed wireless visible light communication (VLC) technology, also referred to as light fidelity (Li-Fi), to present a novel design and implementation of a medical healthcare information system. VLC can use visible light-emitting diodes (LEDs), which are expected to overtake traditional incandescent and fluorescent lights as the dominant lighting source in the near future owing to their energy efficiency. In this research the patient's health is monitored using sensors, and an LCD displays the results. An LED light bulb transmits the data in the form of light. The photodetector at the receiving end gathers the transmitted data, shows the output on an LCD, and alerts the carers by beeping every 15 seconds for respiration and every 30 seconds for heartbeat. By utilising the fast switching capability of LEDs, these lighting fixtures can function as wireless data transmission devices in addition to sources of illumination. Hospital areas with restricted radio frequencies can benefit from the data services and monitoring provided by the prototype VLC-based medical healthcare system.
+
+Keywords- VLC, Li-Fi, Healthcare, Optical wireless communication, LED, Photodetector.
+
+## I. INTRODUCTION
+
Over the last ten years, optical wireless communication (OWC) has drawn a great deal of attention. OWC is viewed as an advantageous and complementary communication method to traditional radio frequency (RF) technology [1], which operates within a licensed and regulated electromagnetic spectrum band between 30 kHz and 300 GHz. The electromagnetic spectrum range is shown in Figure 1; its different frequency bands serve different applications. This spectrum, commonly used for wireless communication services such as mobile phones, Wi-Fi, and Bluetooth, is experiencing increasing challenges due to the exponential growth in wireless data traffic and the proliferation of sophisticated applications. The surge in demand for wireless data has led to spectral congestion, where the available bandwidth is insufficient to support the volume of data being transmitted. This congestion results in slower data rates, increased latency, and degraded overall performance of wireless networks. Traditional RF technologies, while effective, are struggling to keep up with the demand, leading to a so-called "spectrum crisis." This crisis is characterized by the saturation of the RF spectrum, making it difficult to achieve high data rates and maintain reliable communication.
+
+
+
+In contrast, OWC utilizes the optical spectrum, which is much broader than the RF spectrum and less congested. This allows OWC to offer higher data rates, lower latency, and improved performance in environments where RF communication may be less effective. For instance, OWC can be particularly advantageous in scenarios where RF interference is a problem or where high-density data transmission is required, such as in urban areas, office buildings, and industrial settings. Moreover, OWC can operate in unlicensed bands of the optical spectrum, providing greater flexibility and cost-effectiveness for deployment. It can be used in various applications, including indoor communication using visible light (often referred to as Li-Fi), outdoor communication for point-to-point links, and underwater communication where RF signals are highly attenuated.
+
+Today, there is significant focus on researching new wireless communication alternatives that can provide massive connectivity, diverse data rates, low latency, high capacity, efficiency, and enhanced security [2]. In this context, VLC has emerged as a promising wireless communication technique that could address key challenges within the wireless communications infrastructure [3]. VLC uses visible light as the transmission medium, leveraging LEDs to transmit data. This technique offers several advantages over traditional RF-based systems, including high-speed transmission, high data rates, and security [4].
+
+One of the most notable advantages of VLC is its potential to provide an alternative to the heavily congested RF spectrum, offering up to 10,000 times more capacity. Additionally, the VLC spectrum is unregulated and unlicensed, presenting a vast and untapped resource for data transmission. This makes VLC a user-friendly bandwidth solution that can help relieve the RF spectrum shortage. Moreover, VLC can be used alongside other essential communication systems and devices without interfering with the electromagnetic fields generated by RF devices. This characteristic is particularly valuable in sensitive environments such as airplanes and hospitals, where RF interference can pose significant risks. There are ongoing studies exploring the transmission of medical data, such as photoplethysmography, electrocardiography, and body temperature, using VLC applications [5, 6].
+
+Another key advantage of VLC is its inherent security feature. Unlike RF signals, visible light cannot pass through walls and does not propagate uncontrollably, making it suitable for highly secure connections where wireless data transfer is meant to stay within range of an access point. This characteristic makes VLC an ideal solution for environments where data security is paramount. For example, in corporate offices and government facilities, where sensitive information is frequently transmitted, VLC can ensure that data remains confined within the physical boundaries of a room or building. This physical limitation drastically reduces the risk of eavesdropping and unauthorized access compared to traditional RF-based systems, where signals can be intercepted from a distance. In residential settings, VLC can offer secure communication for smart home devices, preventing potential hackers from accessing personal data transmitted over wireless networks.
+
+Furthermore, VLC can be seamlessly integrated into existing lighting infrastructure, offering dual functionality in the same hardware. This dual-purpose capability means that buildings can have efficient lighting and secure data transmission without the need for additional installations, reducing costs and complexity. For instance, LED lights in a smart home could provide both illumination and secure network connectivity to various IoT devices, such as security cameras, smart locks, and home automation systems.
+
+For all these reasons, VLC can be applied to most IoT-based smart systems [7]. In the context of smart cities, VLC can be used for vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication, enhancing traffic management and safety. Intelligent transportation systems can leverage VLC to provide real-time data exchange between traffic lights and autonomous vehicles, reducing the risk of accidents and improving traffic flow. Additionally, VLC can be employed in smart grids for secure and efficient communication between components of the electrical grid, facilitating better energy management and reducing the risk of cyberattacks on critical infrastructure.
+
+This paper is organized as follows: Section II reviews the existing literature and previous studies relevant to our research, providing background and context while highlighting gaps and opportunities that our study aims to address. Section III delves into the concept of VLC and presents our proposed system design, detailing the transmitter and receiver architecture. Finally, Section IV summarizes the key findings, discusses the implications and contributions to the field, and suggests potential directions for future work.
+
+## II. RELATED WORKS
+
+In recent years, a revolutionary approach to wireless communication has emerged, known as the Internet of LED. This paradigm shift integrates the Internet of Things (IoT) with VLC using LED technology, opening up a realm of possibilities across various industries and applications. One of the most intriguing applications of this technology, indoor navigation and art gallery monitoring, is given in [8]. Imagine a prestigious museum adorned with an array of LED lights embedded in the ceiling. These LEDs serve a dual purpose, providing ambient illumination and acting as data transmitters. Through sophisticated algorithms and protocols, these LED arrays communicate with users' mobile devices, offering precise positioning information about products, exhibits, or artworks. This not only enhances the overall experience for visitors but also streamlines operations and enhances security in such environments.
+
+The automotive sector has also embraced the Internet of LED with enthusiasm. Modern vehicles are equipped with advanced LED headlights and taillights, which are not only efficient at lighting up the road but also serve as integral components in vehicular VLC systems [9]. These systems utilize the rapid flickering capabilities of LEDs to transmit data, enabling real-time communication for collision prevention systems and enhancing overall road safety. While traditional wireless technologies heavily rely on RF systems such as Bluetooth, ZigBee, WLAN, and WPAN, there is a growing realization of the limitations and potential risks associated with RF in certain environments, especially healthcare settings.
+
+Concerns about electromagnetic interference and its impact on sensitive medical equipment and patient safety have spurred interest in VLC using energy-efficient LEDs as a viable alternative [10, 11]. LEDs offer a multitude of advantages over conventional light sources. Their long operating lifetimes, minimal power consumption, and exceptional reliability make them ideal candidates for applications requiring continuous and efficient data transmission [12].
+
+Researchers are actively exploring how VLC using LEDs can not only address concerns about RF-related interference but also revolutionize wireless communication in critical environments like hospitals and clinics. The ongoing research and development in the Internet of LED are poised to reshape the wireless communication landscape. From enhancing indoor navigation experiences to improving road safety and revolutionizing healthcare communication, the fusion of IoT with VLC using LED technology promises efficient, secure, and reliable connectivity across diverse sectors. As this technology continues to evolve, we can expect even more innovative applications and transformative impacts on how we communicate and interact in the digital age.
+
+## III. VISIBLE LIGHT COMMUNICATION
+
+Based on a numerical estimate of optical transmission needs, a university in Japan was the first to propose VLC, which uses white LEDs to transport data [13]. The evolution of lighting is shown in Figure 2.
+
+
+
+Figure 2. Evolution of light [14]
+
+Since then, a great deal of research has been done on the use of commercial white LEDs for short-range indoor communications. Owing to the characteristic of short-range communication restricted by the light beam's range, [15] suggested that VLC is more suited for location-based service applications, including giving consumers access to location-specific information. Additionally, new developments for VLC applications are always being made.
+
+The authors of [16] suggested an information system that would allow medical facilities to use the hospital illumination network to offer private or public data services. Nevertheless, that study's primary focus was on the information system's architectural concepts; the actual design and implementation of medical data transfer via VLC had not yet been documented. In this work, we provide a safe and readily available substitute for RF wireless technology: a wireless VLC-based medical healthcare information system. VLC is a great option for wireless access in the medical and healthcare sectors, where electromagnetic interference (EMI) and RF pollution are major concerns.
+
+The bandwidth of VLC is roughly 10,000 times greater than the highest frequency utilized in RF technology [17]. Unlike the RF band, which is crowded and has issues with frequency allocation, the VLC band is currently unlicensed. Since communication takes place within the visible light spectrum, the Federal Communications Commission (FCC) has not established any regulations for it. Because LED devices are preinstalled indoors and are compatible with inexpensive electronic drive circuits, VLC-based systems also offer cost-effective installation.
+
+Numerous problems with wireless RF networking methods were examined in [18]. Studies were conducted with an eye towards resolving these problems with VLC systems, together with a discussion of applications, ways to solve current VLC problems, and upcoming advancements. The authors of [19] designed and implemented heterogeneous systems combining wireless RF and VLC. One system is a hybrid Wi-Fi-VLC network, while the other aggregates Wi-Fi and VLC in parallel utilising the Linux operating system's bonding approach. The downlink for the hybrid network is a VLC channel, which is only intended to be used in one direction.
+
+## A. Visible Light Communication in Telemedicine
+
+Telemedicine refers to the use of telecommunications to transfer medical information and provide healthcare to patients remotely. The goal is to provide evidence-based medical treatment to anybody, anywhere, at any time. Research investigations and real-world deployments are increasingly using wireless technologies for telemedicine. It enhances the quality of care by facilitating the flexible collection of medical records and by giving users convenience and mobility. Numerous primary uses, including vital sign monitoring, electronic medical record maintenance, orders to carers, and nonmedical services like entertainment, can be realised with the use of wireless technologies.
+
+### 3.1 Proposed system design
+
+Fig. 3 shows the entire system architecture for transferring multiple medical data. The transmitter module, the receiver module, the processing module, and the monitoring system are the four components that make up this system.
+
+
+
+Figure 3. Proposed block diagram.
+
+First, patients' biomedical signals, such as temperature and heartbeat, are converted and sent via LEDs. Here, it is assumed that every transmitter and receiver pair is perfectly synchronised. At the optical receiver, the data from the optical channel pass through a photodiode and are transformed into a voltage signal. The voltage signal is then demodulated to a digital signal in the processing module and displayed in the monitoring system. Each step is detailed below.
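A rough end-to-end sketch of this four-module pipeline can help fix ideas. The toy simulation below is purely illustrative; the real system uses a PIC16F877A and an analog receiver chain, and the sampling factor and noise level here are assumptions. It serializes one sensor byte, transmits it as on/off light-intensity samples, adds mild channel noise, and thresholds the result back into bits:

```python
import random

SAMPLES_PER_BIT = 8   # assumed oversampling factor, not from the paper
ON, OFF = 1.0, 0.0    # normalized LED intensity levels

def to_bits(value: int) -> list:
    """Serialize one sensor byte MSB-first."""
    return [(value >> i) & 1 for i in range(7, -1, -1)]

def modulate(bits):
    """On-off keying: hold the LED on or off for each bit period."""
    return [ON if b else OFF for b in bits for _ in range(SAMPLES_PER_BIT)]

def channel(samples, noise=0.05):
    """Add mild uniform noise, standing in for the optical channel."""
    return [s + random.uniform(-noise, noise) for s in samples]

def demodulate(samples):
    """Average each bit period and compare against a mid-level threshold."""
    bits = []
    for i in range(0, len(samples), SAMPLES_PER_BIT):
        period = samples[i:i + SAMPLES_PER_BIT]
        bits.append(1 if sum(period) / len(period) > 0.5 else 0)
    return bits

def from_bits(bits):
    """Reassemble the byte at the receiver."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

heartbeat_bpm = 72  # example sensor reading
received = from_bits(demodulate(channel(modulate(to_bits(heartbeat_bpm)))))
print(received)  # 72: the reading survives the noisy optical hop
```

Under this mild noise, the per-period averaging in `demodulate` always recovers the transmitted byte; in the actual hardware, the analog comparator in the receiver section plays this thresholding role.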
+
+## B. Transmitter Section:
+
+The transmitter section consists of several key components designed to ensure proper voltage regulation and sensor interfacing. Initially, a step-down transformer converts the standard 230 V AC mains supply to a low AC voltage; this step-down stage is crucial for safely powering the subsequent components. Following the transformer, a bridge rectifier is employed to convert the alternating current (AC) from the transformer into direct current (DC), which is necessary for the operation of the electronic components. To further refine the DC output and ensure a stable voltage, a voltage regulator, specifically the LM7805, is used.
+
+The LM7805 maintains a consistent output of 5 V DC, protecting the circuitry from voltage fluctuations that could otherwise cause damage or unreliable operation. Additionally, a filter capacitor with a capacitance of 1000 microfarads (µF) is incorporated; it smooths out any remaining ripple in the DC output, providing a clean and steady voltage for the transmitter section. Various sensors, including heartbeat, temperature, and sound sensors, are connected to the PIC16F877A microcontroller. The PIC16F877A is a low-power, high-performance microcontroller featuring 8 KB of in-system programmable memory, allowing flexibility in programming and updating. One of its standout features is an inbuilt Universal Asynchronous Receiver/Transmitter (UART), which facilitates serial communication and enables efficient data transmission between the microcontroller and other components or systems. Figure 4 illustrates the proposed block diagram for the transmitter section of the Healthcare Monitoring System (HMS) utilizing VLC. This diagram provides a visual representation of how the components are interconnected and function together to monitor various health parameters.
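The paper does not specify how the microcontroller lays out sensor readings on the UART, so the frame below (start byte, three one-byte readings, XOR checksum) is a hypothetical format chosen only to illustrate the serial link between the PIC and the Li-Fi transmitter:

```python
START = 0x7E  # assumed frame delimiter, not from the paper

def checksum(payload: bytes) -> int:
    """XOR all payload bytes into a one-byte check value."""
    c = 0
    for b in payload:
        c ^= b
    return c

def build_frame(heartbeat: int, temperature: int, sound: int) -> bytes:
    """Pack the three sensor readings for transmission over the UART."""
    payload = bytes([heartbeat, temperature, sound])
    return bytes([START]) + payload + bytes([checksum(payload)])

def parse_frame(frame: bytes) -> tuple:
    """Validate the delimiter and checksum, then unpack the readings."""
    if frame[0] != START or checksum(frame[1:4]) != frame[4]:
        raise ValueError("corrupted frame")
    return tuple(frame[1:4])

print(parse_frame(build_frame(72, 37, 40)))  # (72, 37, 40)
```

A checksum of this kind would let the receiver drop frames garbled by flicker or ambient light rather than display bad vital signs.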
+
+
+
+Figure 4. Block diagram of transmitter
+
+Each sensor in the system generates its own analog or digital values, depending on the type of sensor. These values are fed into the PIC16F877A microcontroller, which analyzes the data received from the different sensors, processes it accordingly, and displays the processed information on an LCD screen for easy monitoring and interpretation. In addition to handling sensor data, the PIC16F877A microcontroller is interfaced with a Li-Fi transmitter, in this case an LED bulb. Once the sensor data are processed, the PIC establishes serial communication with the Li-Fi transmitter, enabling data transmission through VLC. The Li-Fi transmitter uses the LED bulb to transmit the data by modulating the light at very high speeds. The switching frequency of the LED must be high enough to avoid any flickering, which is crucial for the safety and comfort of human eyes. The rapid modulation allows the LED to transmit data without perceptible changes in light intensity, maintaining consistent illumination while still enabling data communication. Figure 5 illustrates the working model of the transmitter for the HMS utilizing VLC, showcasing the integration of the sensors, the PIC microcontroller, and the Li-Fi transmitter in the overall design of the transmitter section.
+
+
+
+Figure 5. Working model of transmitter for HMS using VLC
+
+## C. Receiver Section:
+
+This section is also known as the monitoring section because the patient's results are monitored continuously through an LCD display. The monitoring section consists of several key components: a Li-Fi receiver (photo-detector), an Arduino UNO328P, an LCD, and a buzzer. A photodiode is used as the Li-Fi receiver. The photodiode functions as a light-to-electricity converter, detecting the light signals transmitted by the Li-Fi transmitter and converting them into corresponding electrical signals.
+
+However, the initial electrical signal generated by the photodiode tends to be weak and noisy. To address this, the signal undergoes processing through several stages. First, it passes through signal processing and amplification units to strengthen the signal and reduce noise. Following amplification, an envelope detector demodulates the signal, extracting the original data signal from the modulated carrier wave. Next, a low-pass filter removes high-frequency noise, ensuring that the resulting signal is clean and suitable for further processing. Once the signal has been filtered, it is fed into a voltage comparator, which transforms the analog signal into a digital format compatible with digital processing systems.
+
+The digital signal is then passed to the Arduino UNO328P for further processing. The Arduino microcontroller analyzes the incoming data and displays the relevant information on the LCD screen, providing real-time monitoring of the patient's health parameters. Additionally, the system can trigger a buzzer to alert healthcare personnel in case of critical readings or emergencies. Figure 6 illustrates the proposed block diagram for the receiver section of the HMS utilizing VLC, representing the flow of data from the Li-Fi receiver to the Arduino and highlighting the key components involved in signal processing and data presentation.
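The analog stages just described (amplification, envelope detection, low-pass filtering, and the voltage comparator) can be mimicked in software to show how a weak photodiode signal becomes clean bits. The gains, window size, and sample values below are assumptions for illustration, not measurements from the prototype:

```python
def amplify(samples, gain=20.0):
    """Strengthen the weak photodiode signal (assumed gain)."""
    return [gain * s for s in samples]

def envelope(samples):
    """Crude envelope detector: full-wave rectification."""
    return [abs(s) for s in samples]

def low_pass(samples, window=4):
    """Moving-average filter to suppress high-frequency noise."""
    out = []
    for i in range(len(samples)):
        span = samples[max(0, i - window + 1):i + 1]
        out.append(sum(span) / len(span))
    return out

def comparator(samples, threshold):
    """Voltage comparator: analog level to digital 0/1."""
    return [1 if s > threshold else 0 for s in samples]

# Weak, slightly noisy photodiode output for the bit pattern 1,0,1
# (four samples per bit; the voltage levels are made up).
photodiode = [0.051, 0.049, 0.052, 0.050,
              0.001, 0.002, 0.000, 0.001,
              0.050, 0.048, 0.051, 0.049]
digital = comparator(low_pass(envelope(amplify(photodiode))), threshold=0.5)
bits = [digital[i * 4 + 3] for i in range(3)]  # sample once per bit period
print(bits)  # [1, 0, 1]
```

Sampling once at the end of each bit period sidesteps the smearing the moving-average filter introduces at bit edges, loosely standing in for the periodic read the Arduino would perform.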
+
+
+
+Figure 6. Block diagram of receiver
+
+For the system to function effectively, the transmitter and receiver must be in a line-of-sight (LOS) position. This requirement ensures that the light signals can be transmitted and received without obstruction. The received information can be presented in digital form, allowing detailed analysis of the patient's health on an LCD screen. Additionally, the buzzer alerts caregivers about the patient's health condition, with alerts issued every 15 seconds for respiration values and every 30 seconds for heartbeat values.
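The alert cadence described above reduces to a simple modulo check on elapsed time. The prototype's firmware is not published, so the logic below is only an illustrative sketch of that behaviour:

```python
RESP_PERIOD_S = 15    # respiration alert period, per the system description
HEART_PERIOD_S = 30   # heartbeat alert period, per the system description

def due_alerts(elapsed_s: int) -> list:
    """Return which buzzer alerts fire at a given whole-second tick."""
    alerts = []
    if elapsed_s > 0 and elapsed_s % RESP_PERIOD_S == 0:
        alerts.append("respiration")
    if elapsed_s > 0 and elapsed_s % HEART_PERIOD_S == 0:
        alerts.append("heartbeat")
    return alerts

print(due_alerts(15))  # ['respiration']
print(due_alerts(30))  # ['respiration', 'heartbeat']
```

On every one-second tick, the microcontroller would run such a check and drive the buzzer once for each alert returned.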
+
+Figure 7 shows the working model of the receiver section, detailing the flow and processing of data from the Li-Fi receiver to the LCD display and the buzzer.
+
+
+
+Figure 7. Working model of receiver
+
+Figure 8 shows the transmitter and receiver sections, with an LED as the transmitting medium and a photo-detector as the receiving medium. The sensors connected to the transmitter provide the details of the patient's health condition, and these are transmitted through the light. This approach can be applied across medical fields without causing interference.
+
+
+
+Figure 8. Hardware setup of HMS using VLC
+
+TABLE 1. COMPARISON BETWEEN EXISTING RADIO FREQUENCY AND VISIBLE LIGHT COMMUNICATION
+
+| Parameters | Visible light communication | Radio frequency |
+| --- | --- | --- |
+| Distance coverage | Narrow | Wide |
+| Medium | Light illumination | Access point |
+| Power | Depends on LED | Low/medium |
+| Security | Enhanced (confined by walls) | Lower (can be intercepted) |
+| Electromagnetic interference | No | Yes |
+| Multipath | Low (line of sight) | High |
+
+TABLE 2: COMPARISON OF OTHER WIRELESS COMMUNICATION TECHNOLOGIES WITH VLC.
+
+| Wireless technology | Bandwidth | Line of sight | Power consumption | Coverage | Standard |
+| --- | --- | --- | --- | --- | --- |
+| VLC | 380-750 nm, unlicensed | Yes | Low (LED provides illumination and data) | Limited | IEEE 802.15.7 |
+| Infrared | Regulated RF band / limited | Yes | Low | Short range | - |
+| Bluetooth | Regulated RF band / limited | No | Low | Short range | IEEE 802.15.1 |
+| Wi-Fi | Regulated RF band / limited | No | Average | Limited | IEEE 802.11 |
+| ZigBee | Regulated RF band / limited | No | Low | Short range | IEEE 802.15.4 |
+
+Tables 1 and 2 compare the existing technologies used for signal transmission, each with its own characteristics. Most of these technologies use radio frequency signals, which can lead to RF congestion. The proposed VLC system offers an alternative means of data transmission, serving both as a communication medium and as a source of illumination.
+
+## Limitations:
+
+- Line of sight: The transmitter and receiver must maintain a perfect line of sight to effectively transmit and receive data. Any obstruction can disrupt the communication.
+
+- Limited range: The range of the light beams used in Li-Fi technology is relatively short, typically about 5 to 10 meters. This limits the distance over which data can be transmitted effectively.
+
+- Device compatibility: Li-Fi technology only works on devices that are equipped with a Li-Fi receptor. This means that not all tablets, smartphones, and other devices can utilize Li-Fi without the necessary hardware.
+
+- Infrastructure Requirement: Implementing Li-Fi technology requires the construction of a whole new infrastructure. This includes installing LED bulbs capable of transmitting data and ensuring all relevant devices have the necessary receptors.
+
+- Limited Penetration: Light signals used in Li-Fi cannot penetrate through bricks or walls. As a result, Li-Fi can only be used within a single room, limiting its application in larger or multi-room environments.
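Taken together, the line-of-sight, range, and receptor limitations amount to a simple link-feasibility test. A toy check, assuming the 10-meter upper range quoted above and a boolean obstruction flag:

```python
MAX_RANGE_M = 10.0  # upper end of the 5-10 m Li-Fi range quoted above

def link_feasible(distance_m: float, obstructed: bool,
                  has_receptor: bool) -> bool:
    """A Li-Fi link needs an unobstructed, in-range path and a device
    fitted with a Li-Fi receptor."""
    return (not obstructed) and distance_m <= MAX_RANGE_M and has_receptor

print(link_feasible(4.0, False, True))   # True
print(link_feasible(4.0, True, True))    # False: a wall blocks the light
print(link_feasible(12.0, False, True))  # False: beyond the usable range
```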
+
+## IV. CONCLUSION
+
+Li-Fi is emerging as a more suitable network for next-generation healthcare services in hospitals. Patient monitoring can be done efficiently using Li-Fi technology, which offers several advantages over traditional communication methods. In this paper, we demonstrated the application of Visible Light Communication (VLC) in a Health Monitoring System using a prototype model. We showed that a Li-Fi network can be successfully utilized as a high-speed, secure, and safe method for data communication, providing real-time monitoring of vital signs such as heartbeat, respiration, and temperature. Li-Fi technology significantly reduces radio interference in the human body, an important consideration in sensitive healthcare environments. The system measures the patient's data automatically and continuously, ensuring constant monitoring without manual intervention.
+
+In the future, this system can be expanded to monitor multiple patients simultaneously: each LED bulb in the hospital can serve as a monitoring point for a patient, leveraging the widespread presence of lighting infrastructure. Using this technology in the medical field offers several benefits, including faster diagnosis and the ability to access the internet alongside devices that use radio waves. The proposed system is fully automated, meaning it operates without the need for continuous human oversight, enhancing efficiency and reliability. If successfully implemented, this system could represent a significant milestone in the medical field, revolutionizing the way patient monitoring is conducted and improving overall healthcare delivery.
+
+## REFERENCES
+
+[1] M. A. Khalighi and M. Uysal, "Survey on free space optical communication: a communication theory perspective," IEEE Communications Surveys & Tutorials, vol. 16, no. 4, pp. 2231-2258, 2014.
+
+[2] Ismail, Saif & Salih, Muataz. (2020). A review of visible light communication (VLC) technology. AIP Conference Proceedings. 2213. 020289. 10.1063/5.0000109.
+
+[3] Y. Zhuang, L. Hua, L. Qi et al., "A survey of positioning systems using visible LED lights," IEEE Communications Surveys & Tutorials, vol. 20, no. 3, pp. 1963-1988, 2018.
+
+[4] Yu, T.C., Huang, W.T., Lee, W.B., Chow, C.W., Chang, S.W. and Kuo, H.C., 2021. Visible light communication system technology review: Devices, architectures, and applications. Crystals, 11(9), p. 1098.
+
+[5] An. J., & Chung, W.-Y. (2016). A novel indoor healthcare with time hopping-based visible light communication. In 2016 IEEE 3rd World Forum on Internet of Things (WF-IoT). 2016 IEEE 3rd World Forum on Internet of Things (WF-IoT). IEEE. https://doi.org/10.1109/wf-iot.2016.7845438
+
+[6] Almadani, Y.; Plets, D.; Bastiaens, S.; Joseph, W.; Ijaz, M.; Ghassemlooy, Z.; Rajbhandari, S. Visible Light Communications for Industrial Applications-Challenges and Potentials. Electronics 2020, 9, 2157.
+
+[7] Chen, C.-W.; Wang, W.-C.; Wu, J.-T.; Chen, H.-Y.; Liang, K.; Wei, L.- Y.; Hsu, Y.; Hsu, C.-W.; Chow, C.-W.; Yeh, C.-H.; et al. Visible light communications for the implementation of internet-of-things. Opt. Eng. 2016, 55, 060501.
+
+[8] Meucci, M., Seminara, M., Tarani, F., Riminesi, C. and Catani, J., 2021. Visible light communications through diffusive illumination of sculptures in a real museum. Journal of Sensor and Actuator Networks, 10(3), p.45.
+
+[9] Meucci, M.; Seminara, M.; Nawaz, T.; Caputo, S.; Mucchi, L.; Catani, J. Bidirectional Vehicle-to-Vehicle Communication System Based on VLC: Outdoor Tests and Performance Analysis. IEEE Trans. Intell. Transp. Syst. 2021.
+
+[10] Tan, Y.Y., Jung, S.J. and Chung, W.Y., 2013, July. Real time biomedical signal transmission of mixed ECG Signal and patient information using visible light communication. In 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (pp. 4791-4794). IEEE.
+
+[11] Galisteo, A.; Juara, D.; Giustiniano, D. Research in visible light communication systems with OpenVLC1.3. In Proceedings of the 2019 IEEE 5th World Forum on Internet of Things (WF-IoT), Limerick, Ireland, 15-18 April 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 539-544.
+
+[12] Ezilarasan M.R., N. Vignesh Prasanna.,(2018) International Journal of Innovative Technology and Exploring Engineering (IJITEE). (n.d.). H6645068819. [online] Available at: https://ijitee.org/portfolio-item/h6645068819/ [Accessed 30 May 2024].
+
+[13] Ibhaze, A.E., Orukpe, P.E. and Edeko, F.O., 2020. High capacity data rate system: Review of visible light communications technology. Journal of Electronic Science and Technology, 18(3), p. 100055.
+
+[14] "How lighting evolved over the years?" @zodhyatech, Medium, 2017.
+
+[15] Ho, S.W., Duan, J. and Chen, C.S., 2017. Location-based information transmission systems using visible light communications. Transactions on Emerging Telecommunications Technologies, 28(1), p.e2922.
+
+[16] Ng, X.W. and Chung, W.Y., 2012. VLC-based medical healthcare information system. Biomedical Engineering: Applications, Basis and Communications, 24(02), pp.155-163.
+
+[17] Abuella, H., Elamassie, M., Uysal, M., Xu, Z., Serpedin, E., Qaraqe, K.A. and Ekin, S., 2021. Hybrid RF/VLC systems: A comprehensive survey on network topologies, performance analyses, applications, and future directions. IEEE Access, 9, pp. 160402-160436.
+
+[18] Chowdhury, M.Z., Hossan, M.T., Islam, A. and Jang, Y.M., 2018. A comparative survey of optical wireless technologies: Architectures and applications. ieee Access, 6, pp. 9819-9840.
+
+[19] Shao, S. and Khreishah, A., 2016. Delay analysis of unsaturated heterogeneous omnidirectional-directional small cell wireless networks: The case of RF-VLC coexistence. IEEE Transactions on Wireless Communications, 15(12), pp.8406-8421.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/aBxc4ADTyx/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/aBxc4ADTyx/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..20a69411048d3b7c8af56ab856f277f28c28ab1a
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/aBxc4ADTyx/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,208 @@
+§ DESIGN AND IMPLEMENTATION OF TELEMEDICINE SYSTEM USING LIGHT FIDELITY AND PIC16F877A MICROCONTROLLER
+
+M.R. Ezilarasan, Department of Electronics and Communication Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, India. drezilarasan@veltech.edu.in
+
+Man-Fai Leung, School of Computing and Information Science, Faculty of Science and Engineering, Anglia Ruskin University, Cambridge, United Kingdom. Man-fai.leung@aru.ac.uk
+
+Xiangguang Dai*, Chongqing Engineering Research Center of Internet of Things and Intelligent Control Technology, Chongqing Three Gorges University, Chongqing, China. daiziangguang@163.com
+
+${Abstract}$ — Medical body area networks currently use wireless communication technology to give patients and carers more freedom and convenience and radio frequency (RF) is the existing medium in healthcare applications. In this the possibility of electromagnetic waves interfering with precision medical equipment still exists, though. This study uses the newly developed wireless visible light communication (VLC) technology to provide a novel design and implementation of a medical healthcare information system. VLC is also referred as light fidelity (Li-Fi). Visible light-emitting diodes (LEDs), which are expected to overtake traditional incandescent and fluorescent lights as the dominant lighting source in the near future owing to their energy efficiency, can be used with VLC. In this research the patient's health is monitored using sensors, and an LCD will display the results. An LED lightbulb will communicate the data in the form of light. The photo-detector at the receiving end gathers the transmitted data, shows the output on an LCD, and informs the carers by beeping every 15 seconds for respiration and 30 seconds for heartbeat. These lighting fixtures can function as wireless data transmission devices in addition to sources of illumination by utilising the fast switching power of LEDs. Hospital regions with restricted radio frequencies can benefit from data services and monitoring provided by the prototype VLC-based medical healthcare system.
+
+Keywords- VLC, Li-Fi, Healthcare, Optical wireless communication, LED, Photodetector.
+
+§ I. INTRODUCTION
+
+Over the last ten years, optical wireless communication (OWC) has drawn a great deal of attention. OWC is viewed as an advantageous and complementary communication method to traditional radio frequency (RF) technology [1], which operates within a licensed and regulated electromagnetic spectrum band between ${30}\mathrm{{kHz}}$ and 300 GHz. Electromagnetic spectrum range is shown in below figure 1. Electromagnetic spectrum has different frequency ranges and for applications. This spectrum, commonly used for various wireless communication services such as mobile phones, Wi-Fi, and Bluetooth, based on their frequency ranges and it is experiencing increasing challenges due to the exponential growth in wireless data traffic and the proliferation of sophisticated applications. The surge in demand for wireless data has led to spectral congestion, where the available bandwidth is insufficient to support the volume of data being transmitted. This congestion results in slower data rates, increased latency, and degraded overall performance of wireless networks. Traditional RF technologies, while effective, are struggling to keep up with the demand, leading to a so-called "spectrum crisis." This crisis is characterized by the saturation of the RF spectrum, making it difficult to achieve high maximum data rates and maintain reliable communication.
+
+ < g r a p h i c s >
+
+In contrast, OWC utilizes the optical spectrum, which is much broader than the RF spectrum and less congested. This allows OWC to offer higher data rates, lower latency, and improved performance in environments where RF communication may be less effective. For instance, OWC can be particularly advantageous in scenarios where RF interference is a problem or where high-density data transmission is required, such as in urban areas, office buildings, and industrial settings. Moreover, OWC can operate in unlicensed bands of the optical spectrum, providing greater flexibility and cost-effectiveness for deployment. It can be used in various applications, including indoor communication using visible light (often referred to as Li-Fi), outdoor communication for point-to-point links, and underwater communication where RF signals are highly attenuated.
+
+Today, there is significant focus on researching new wireless communication alternatives that can provide massive connectivity, diverse data rates, low latency, high capacity, efficiency, and enhanced security [2]. In this context, VLC has emerged as a promising wireless communication technique that could address key challenges within the wireless communications infrastructure [3]. VLC uses visible light as the transmission medium, leveraging LEDs to transmit data. This technique offers several advantages over traditional RF-based systems, including high-speed transmission, high data rates, and improved security [4].
+
+One of the most notable advantages of VLC is its potential to provide an alternative to the heavily congested RF spectrum, offering up to 10,000 times more capacity. Additionally, the VLC spectrum is unregulated and unlicensed, presenting a vast and untapped resource for data transmission. This makes VLC a readily usable bandwidth solution that can help alleviate the RF spectrum shortage. Moreover, VLC can be used in conjunction with other essential communication systems and devices without interfering with the electromagnetic fields generated by RF devices. This characteristic is particularly valuable in sensitive environments such as airplanes and hospitals, where RF interference can pose significant risks. There are ongoing studies exploring the transmission of medical data, such as photoplethysmography, electrocardiography, and body temperature, using VLC applications [5, 6].
+
+Another key advantage of VLC is its inherent security. Unlike RF signals, visible light cannot pass through walls and does not propagate uncontrollably, making it suitable for highly secure connections where wireless data transfer is meant to stay within range of an access point. This characteristic makes VLC an ideal solution for environments where data security is paramount. For example, in corporate offices and government facilities, where sensitive information is frequently transmitted, VLC can ensure that data remains confined within the physical boundaries of a room or building. This physical limitation drastically reduces the risk of eavesdropping and unauthorized access compared to traditional RF-based systems, where signals can be intercepted from a distance. In residential settings, VLC can offer secure communication for smart home devices, preventing potential hackers from accessing personal data transmitted over wireless networks.
+
+Furthermore, VLC can be seamlessly integrated into existing lighting infrastructure, offering dual functionality in the same hardware. This dual-purpose capability means that buildings can have efficient lighting and secure data transmission without the need for additional installations, reducing costs and complexity. For instance, LED lights in a smart home could provide both illumination and secure network connectivity to various IoT devices, such as security cameras, smart locks, and home automation systems.
+
+For all these reasons, VLC can be applied to most IoT-based smart systems [7]. In the context of smart cities, VLC can be used for vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication, enhancing traffic management and safety. Intelligent transportation systems can leverage VLC to provide real-time data exchange between traffic lights and autonomous vehicles, reducing the risk of accidents and improving traffic flow. Additionally, VLC can be employed in smart grids for secure and efficient communication between various components of the electrical grid, facilitating better energy management and reducing the risk of cyberattacks on critical infrastructure.
+
+This paper is organized as follows: Section 2 reviews the existing literature and previous studies relevant to our research, providing background and context while highlighting the gaps and opportunities our study aims to address. Section 3 delves into the concept of VLC, explaining its fundamental principles, advantages, applications, current state of development, and technical challenges. Section 4 presents our proposed methodology and approach, detailing the experimental setup, data collection methods, and analysis procedures, and outlining the innovative aspects of our work. Finally, Section 5 summarizes the key findings, discusses the implications and contributions to the field, and suggests potential areas for future research.
+
+§ II. RELATED WORKS
+
+In recent years, a revolutionary approach to wireless communication has emerged, known as the Internet of LED. This paradigm shift integrates the Internet of Things (IoT) with VLC using LED technology, opening up a realm of possibilities across various industries and applications. One of the most intriguing applications of this technology, indoor navigation and art gallery monitoring, is presented in [8]. There, a prestigious museum is adorned with an array of LED lights embedded in the ceiling. These LEDs serve a dual purpose: providing ambient illumination and acting as data transmitters. Through sophisticated algorithms and protocols, the LED arrays communicate with users' mobile devices, offering precise positioning information about products, exhibits, or artworks. This not only enhances the overall experience for visitors but also streamlines operations and enhances security in such environments.
+
+The automotive sector has also embraced the Internet of LED with enthusiasm. Modern vehicles are equipped with advanced LED headlights and taillights, which are not just efficient in lighting up the road but also serve as integral components in vehicular VLC systems [9]. These systems utilize the rapid flickering capabilities of LEDs to transmit data, enabling real-time communication for collision prevention systems and enhancing overall road safety. While traditional wireless technologies rely heavily on RF systems such as Bluetooth, ZigBee, WLAN, and WPAN, there is a growing realization of the limitations and potential risks associated with RF in certain environments, especially healthcare settings.
+
+Concerns about electromagnetic interference and its impact on sensitive medical equipment and patient safety have spurred interest in VLC using energy-efficient LEDs as a viable alternative [10, 11]. LEDs offer a multitude of advantages over conventional light sources. Their long operating lifetimes, minimal power consumption, and exceptional reliability make them ideal candidates for applications requiring continuous and efficient data transmission [12].
+
+Researchers are actively exploring how VLC using LEDs can not only address concerns about RF-related interference but also revolutionize wireless communication in critical environments like hospitals and clinics. The ongoing research and development in the Internet of LED are poised to reshape the wireless communication landscape. From enhancing indoor navigation experiences to improving road safety and revolutionizing healthcare communication, the fusion of IoT with VLC using LED technology promises efficient, secure, and reliable connectivity across diverse sectors. As this technology continues to evolve, we can expect even more innovative applications and transformative impacts on how we communicate and interact in the digital age.
+
+§ III. VISIBLE LIGHT COMMUNICATION
+
+Based on a numerical estimate of optical transmission needs, researchers in Japan were the first to propose VLC, using white LEDs to transport data [13]. The evolution of lighting technology is shown in Figure 2.
+
+
+Figure 2. Evolution of Light [14]
+
+Since then, a great deal of research has been done on the use of commercial white LEDs for short-range indoor communications. Owing to the characteristic of short-range communication restricted by the light beam's range, [15] suggested that VLC is more suited for location-based service applications, including giving consumers access to location-specific information. Additionally, new developments for VLC applications are always being made.
+
+The authors of [16] suggested an information system that would allow medical facilities to use the hospital illumination network to offer private or public data services. Nevertheless, that study focused primarily on the information system's architectural concepts; the actual design and implementation of medical data transfer via VLC had not yet been documented. In this work, we provide a safe and readily available substitute for RF wireless technology: a wireless VLC-based medical healthcare information system. VLC is a great option for wireless access in the medical and healthcare sectors, where electromagnetic interference (EMI) and RF pollution are major concerns.
+
+The bandwidth of VLC is roughly 10,000 times greater than the highest frequency utilized in RF technology [17]. Unlike the RF band, which is crowded and has frequency-allocation issues, the VLC band is currently unlicensed. Since communication takes place within the visible light spectrum, the Federal Communications Commission (FCC) has not established any regulations. Because LED devices are preinstalled indoors and are compatible with inexpensive electronic drive circuits, VLC-based systems also offer cost-effective installation.
+
+Numerous problems with wireless RF networking methods were examined in [18]. Studies were conducted with a view towards resolving these problems with VLC systems, together with a discussion of applications, solutions to current VLC problems, and upcoming advancements. In [19], the authors designed and implemented heterogeneous systems combining wireless RF and VLC. One system is a hybrid WiFi-VLC network, while the other aggregates WiFi and VLC in parallel using the Linux operating system's bonding approach. The downlink of the hybrid network is a VLC channel, which is unidirectional.
+
+§ A. VISIBLE LIGHT COMMUNICATION IN TELEMEDICINE.
+
+Telemedicine refers to the use of telecommunications to transfer medical information and provide healthcare to patients remotely. The goal is to provide evidence-based medical treatment to anybody, anywhere, at any time. Research investigations and real-world deployments are increasingly using wireless technologies for telemedicine. It enhances the quality of care by facilitating the flexible collection of medical records and by giving users convenience and mobility. Numerous primary uses, including vital sign monitoring, electronic medical record maintenance, orders to carers, and nonmedical services like entertainment, can be realised with the use of wireless technologies.
+
+§ 3.1 PROPOSED SYSTEM DESIGN
+
+Fig. 3 shows the entire system architecture for transferring multiple medical data. The transmitter module, the receiver module, the processing module, and the monitoring system are the four components that make up this system.
+
+
+Figure 3. Proposed block diagram.
+
+First, patients' biomedical signals, such as their temperature and heartbeat, are converted and sent via LEDs. In this case, it is assumed that every transmitter and receiver are perfectly synchronised. At the optical receiver, the data from the optical channel pass through a photodiode and are transformed into a voltage signal. The voltage signal is then demodulated into a digital signal in the processing module and presented in the monitoring system. The specifics of each step are described below.
+
+§ B. TRANSMITTER SECTION:
+
+The transmitter section consists of several key components designed to ensure proper voltage regulation and sensor interfacing. Initially, a step-down transformer converts the standard 230 V AC supply to a lower AC voltage, safely powering the subsequent components. Following the transformer, a bridge rectifier is employed to convert the alternating current (AC) from the transformer into direct current (DC), which is necessary for the operation of electronic components. To further refine the DC output and ensure a stable voltage, a voltage regulator, specifically the LM7805, is used.
+
+The LM7805 maintains a consistent output of 5 V DC, protecting the circuitry from voltage fluctuations that could cause damage or unreliable operation. Additionally, a filter capacitor with a capacitance of 1000 microfarads (μF) is incorporated. This capacitor smooths out any remaining ripples in the DC output, providing a clean and steady voltage for the transmitter section. Various sensors, including heartbeat, temperature, and sound sensors, are connected to the PIC16F877A microcontroller. The PIC16F877A is a low-power, high-performance microcontroller that features 8 KB of in-system programmable memory, allowing flexibility in programming and updating. One of the standout features of the PIC16F877A is its built-in Universal Asynchronous Receiver/Transmitter (UART). This UART module facilitates serial communication, enabling efficient data transmission between the microcontroller and other components or systems. Figure 4 illustrates the proposed block diagram for the transmitter section of the Healthcare Monitoring System (HMS) utilizing VLC. This diagram provides a visual representation of how the components are interconnected and function together to monitor various health parameters.
+
+
+Figure 4. Block diagram of transmitter
+
+Each sensor in the system generates its own analog or digital values, depending on the type of sensor. These values are then fed into the PIC16F877A microcontroller, which analyzes the data received from the different sensors, processes it accordingly, and displays the processed information on an LCD screen for easy monitoring and interpretation. In addition to handling sensor data, the PIC16F877A is interfaced with a Li-Fi transmitter, in this case an LED bulb. Once the data from the sensors is processed, the PIC establishes serial communication with the Li-Fi transmitter, enabling the transmission of data through VLC. The Li-Fi transmitter uses the LED bulb to transmit the data by modulating the light at very high speeds. The switching frequency of the LED must be high enough to avoid any flickering, which is crucial for the safety and comfort of human eyes. The rapid modulation allows the LED to transmit data without perceptible changes in light intensity, maintaining consistent illumination while still enabling data communication. Figure 5 illustrates the working model of the transmitter for the HMS utilizing VLC, showcasing the integration of the sensors, the PIC microcontroller, and the Li-Fi transmitter in the overall design of the transmitter section.
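+The serial framing and LED on-off keying just described can be illustrated with a short Python sketch. This is a hypothetical model, not the PIC16F877A firmware: the UART-style frame (one start bit, eight data bits LSB-first, one stop bit) and all function names are assumptions made for illustration.
+
+```python
+# Hypothetical sketch of the transmitter idea: each sensor byte is framed
+# UART-style (start bit, 8 data bits LSB-first, stop bit), and each bit
+# switches the LED on (1) or off (0) at a fixed symbol rate.
+
+def uart_frame(byte):
+    """Return the LED on/off symbols for one byte: start, data, stop."""
+    bits = [0]                                   # start bit (line low)
+    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
+    bits.append(1)                               # stop bit (line high)
+    return bits
+
+def modulate(readings):
+    """Concatenate framed bytes into one LED on/off symbol stream."""
+    stream = []
+    for value in readings:
+        stream.extend(uart_frame(value & 0xFF))
+    return stream
+
+symbols = modulate([37])  # e.g., a temperature reading of 37
+```
+
+At a sufficiently high symbol rate, the human eye averages these on/off symbols into constant illumination, which is the flicker-free property noted above.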
+
+
+Figure 5. Working model of transmitter for HMS using VLC
+
+§ C. RECEIVER SECTION:
+
+This section is also known as the monitoring section because the patient's results are monitored continuously through an LCD display. The monitoring section consists of several key components, including a Li-Fi receiver (photodetector), an Arduino UNO328P, an LCD, and a buzzer. A photodiode is used as the Li-Fi receiver. The photodiode functions as a light-to-electricity converter, detecting the light signals transmitted by the Li-Fi transmitter and converting them into corresponding electrical signals. However, the initial electrical signal generated by the photodiode tends to be weak and noisy. To address these issues, the signal undergoes several processing stages. First, it passes through signal processing and amplification units to strengthen the signal and reduce noise. Following amplification, an envelope detector demodulates the signal, extracting the original data signal from the modulated carrier wave. Next, a low-pass filter removes high-frequency noise, ensuring that the resulting signal is clean and suitable for further processing. Once the signal has been filtered, it is fed into a voltage comparator, which transforms the analog signal into a digital format compatible with digital processing systems. The digital signal is then passed to the Arduino UNO328P for further processing. The Arduino microcontroller analyzes the incoming data and displays the relevant information on the LCD screen, providing real-time monitoring of the patient's health parameters. Additionally, the system can trigger a buzzer to alert healthcare personnel in case of critical readings or emergencies. Figure 6 illustrates the proposed block diagram for the receiver section of the HMS utilizing VLC, visually representing the flow of data from the Li-Fi receiver to the Arduino and highlighting the key components involved in signal processing and data presentation.
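+As a rough software analogue of this chain, the sketch below (hypothetical; the real filter and comparator stages are analog hardware) smooths noisy photodiode samples with a moving-average low-pass filter and then applies a comparator threshold to recover digital bits. The window size and threshold are arbitrary assumptions.
+
+```python
+# Hypothetical software analogue of the receiver chain: low-pass filter
+# (moving average) followed by a voltage comparator (fixed threshold).
+
+def low_pass(samples, window=3):
+    """Moving-average filter to suppress high-frequency noise."""
+    out = []
+    for i in range(len(samples)):
+        lo = max(0, i - window + 1)
+        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
+    return out
+
+def comparator(samples, threshold=0.5):
+    """Threshold the filtered signal into a digital bit stream."""
+    return [1 if s >= threshold else 0 for s in samples]
+
+noisy = [0.9, 1.1, 0.95, 0.1, -0.05, 0.05, 1.0, 0.9, 1.05]
+bits = comparator(low_pass(noisy))  # recovered digital levels
+```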
+
+
+Figure 6. Block diagram of receiver
+
+For the system to function effectively, the transmitter and receiver must be in a line-of-sight (LOS) position. This requirement ensures that the light signals can be transmitted and received without obstruction. The information received can be presented in digital form on an LCD screen, allowing detailed analysis of the patient's health. Additionally, the buzzer alerts caregivers to the patient's health condition, with alerts issued every 15 seconds for respiration values and every 30 seconds for heartbeat values.
+
+Figure 7 shows the working model of the receiver section, detailing the flow and processing of data from the Li-Fi receiver to the LCD display and the buzzer.
+
+
+Figure 7. Working model of receiver
+
+Figure 8 shows the transmitter and receiver sections, which use an LED as the transmitting medium and a photodetector as the receiving medium. The sensors connected to the transmitter provide details of the patient's health condition, which are transmitted through the light. This approach can be applied in all medical fields without causing any interference.
+
+
+Figure 8. Hardware setup of HMS using VLC
+
+TABLE 1. COMPARISON BETWEEN EXISTING RADIO FREQUENCY AND VISIBLE LIGHT COMMUNICATION
+
+| Parameters | Visible light communication | Radio frequency |
+| --- | --- | --- |
+| Distance coverage | Narrow | Wide |
+| Medium | Light illumination | Access point |
+| Power | Based on LED | Low/medium |
+| Security | Enhanced | Low |
+| Electromagnetic interference | No | Yes |
+| Multipath | Low (line of sight) | High |
+
+TABLE 2: COMPARISON OF OTHER WIRELESS COMMUNICATION TECHNOLOGIES WITH VLC.
+
+| Wireless technology | Bandwidth | Line of sight | Power consumption | Coverage | Standards |
+| --- | --- | --- | --- | --- | --- |
+| VLC | 380-750 nm | Yes | Low (LED with illumination) | Limited | IEEE 802.15.7 |
+| Infrared | Regulated/limited | Yes | Low | Short range | - |
+| Bluetooth | Regulated/limited | No | Low | Short range | IEEE 802.15.1 |
+| WiFi | Regulated/limited | No | Average | Limited | IEEE 802.11 |
+| ZigBee | Regulated/limited | No | Low | Short range | IEEE 802.15.4 |
+Tables 1 and 2 list the existing technologies used for signal transmission, each with its own characteristics. Most of these technologies use radio frequency signals, which can lead to RF congestion. The proposed VLC system can be used as an alternative means of data transmission, serving both as a communication medium and as a source of illumination.
+
+§ LIMITATIONS:
+
+ * Line of sight: The transmitter and receiver must maintain a perfect line of sight to effectively transmit and receive data. Any obstruction can disrupt the communication.
+
+ * Limited range: The range of the light beams used in Li-Fi technology is relatively short, typically about 5 to 10 meters. This limits the distance over which data can be transmitted effectively.
+
+ * Device compatibility: Li-Fi technology only works on devices that are equipped with a Li-Fi receptor. This means that not all tablets, smartphones, and other devices can utilize Li-Fi without the necessary hardware.
+
+ * Infrastructure Requirement: Implementing Li-Fi technology requires the construction of a whole new infrastructure. This includes installing LED bulbs capable of transmitting data and ensuring all relevant devices have the necessary receptors.
+
+ * Limited Penetration: Light signals used in Li-Fi cannot penetrate through bricks or walls. As a result, Li-Fi can only be used within a single room, limiting its application in larger or multi-room environments.
+
+§ IV. CONCLUSION
+
+Li-Fi is emerging as a more suitable network for next-generation healthcare services in hospitals. Patient monitoring can be done efficiently using Li-Fi technology, which offers several advantages over traditional communication methods. In this paper, we demonstrated the application of Visible Light Communication (VLC) in a Health Monitoring System using a prototype model. It is shown that a Li-Fi network can be successfully utilized as a high-speed, secure, and safe method for data communication, providing real-time monitoring of vital signs such as heartbeats, respiration, and temperature. Li-Fi technology significantly reduces radio interference in the human body, an important consideration in sensitive healthcare environments. The system measures the patient's data automatically and continuously, ensuring constant monitoring without manual intervention. In the future, this system can be expanded to monitor multiple patients simultaneously. Each LED bulb in the hospital can serve as a monitoring point for a patient, leveraging the widespread presence of lighting infrastructure. Using this technology in the medical field offers several benefits, including faster diagnosis and the ability to access the internet alongside devices that use radio waves. The proposed system is fully automated, which means it operates without the need for continuous human oversight, enhancing efficiency and reliability. If successfully implemented, this system could represent a significant milestone in the medical field, revolutionizing the way patient monitoring is conducted and improving overall healthcare delivery.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/aSYzSmasZz/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/aSYzSmasZz/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..e92726a546de11b301a5f78c77d88ad69e607e77
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/aSYzSmasZz/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,403 @@
+# Path Planning of USV Based on the Improved Differential Evolution Algorithm
+
+Zhongming Xiao
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+xiaozhongming@dlmu.edu.cn
+
+Baoyi Hou
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+houbaoyi@dlmu.edu.cn
+
+Jun Ning
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+junning@dlmu.edu.cn
+
+Bin Lin
+
+Information Science and Technology College
+
+Dalian Maritime University
+
+Dalian, China
+
+binlin@dlmu.edu.cn
+
+Zhengjiang Liu
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+liuzhengjiang@dlmu.edu.cn
+
+Abstract—Planning a reasonable path and avoiding collisions with surrounding obstacles are among the most critical aspects of Unmanned Surface Vehicle (USV) navigation. These problems have drawn considerable attention from researchers in recent years, with various heuristic and intelligent optimization algorithms being applied to path planning. However, most existing algorithms have not sufficiently integrated safety and economy, leading to planned paths that may not align with maritime practice. To tackle these issues, this paper introduces a differential evolution (DE) algorithm with an adaptive crossover factor for path planning and collision avoidance in USVs. The collision risk index (CRI) is integrated with the DE, and the CRI is improved by introducing a restriction factor when selecting the degree of membership for the distance to closest point of approach (DCPA). The experimental results demonstrate that, compared with three other algorithms, the improved DE exhibits greater advantages in terms of minimum distance to the target ship, minimum distance to obstacles, and total yaw distance, thereby validating the effectiveness of the algorithm.
+
+Index Terms-path planning, collision avoidance, collision risk index, differential evolution algorithm.
+
+## I. INTRODUCTION
+
+Unmanned surface vehicles (USVs) are intelligent control systems that integrate path planning, communications, autonomous decision-making, and automatic target recognition, as well as a range of other advanced technologies. USVs utilize radar and AIS to continuously monitor their surroundings, enabling dynamic adjustments in course and speed to effectively avoid collisions with other ships or unknown obstacles at sea. With the continuous development of USV technology, the operational capabilities of USVs in various complex marine environments have steadily improved. Consequently, USVs are being increasingly used in diverse domains, such as waterway patrol and safety monitoring, ocean exploration and geological surveys, and marine biodiversity conservation.
+
+Path planning and collision avoidance technologies, as the core technologies of USV, have played a crucial role in their development. In light of this, scholars have conducted extensive research on the technologies. In past studies, many researchers have applied various heuristic algorithms to USV path planning, such as the A* algorithm[1] and the Dijkstra algorithm. With continuous development, many intelligent optimization algorithms have gradually been applied to the problem of path planning, such as the Ant Colony Optimization (ACO)[2] algorithm, Particle Swarm Optimization (PSO), Genetic Algorithm (GA)[3], Rapidly-exploring Random Tree (RRT) algorithm[4], Velocity Obstacle method (VO)[5], and Dynamic Window Approach (DWA). These algorithms derive feasible paths through specific operational strategies. However, during path planning, they often encounter issues such as falling into local optima or planning paths that are too close to obstacles, resulting in suboptimal solutions. Therefore, many researchers have improved various algorithms, such as the improved RRT algorithm[6], which introduces adaptive step size and target attraction mechanisms, allowing the USV to adaptively adjust its step size based on different waters and to adjust its direction of movement accordingly. The improved DWA[7] introduces the concept of obstacle search angle, enhancing the USV's obstacle avoidance capabilities in different scenarios.
+
+To fully utilize the advantages of various algorithms, scholars have combined different algorithms. For example, the combination of the PSO and Artificial Potential Field (APF) method [8] first plans a global path using the improved PSO, and the improved APF method is used for local path planning when dynamic obstacles are detected during navigation, which effectively reduces the collision risk. The combination of the GA and the ACO[9] uses the solution from the ACO as the initial population for the GA, thereby accelerating the convergence speed. However, most existing algorithms have not sufficiently integrated safety and economy, leading to paths that may not align with maritime practice.
+
+---
+
+The work was supported by the National Natural Science Foundation of China (No. 51939001, No. 62371085) and Fundamental Research Funds for the Central Universities (No.3132023514).
+
+*Corresponding author: Jun Ning.
+
+---
+
+To address the various issues associated with the aforementioned algorithms, this paper proposes an Improved Differential Evolution algorithm (I-DE) and integrates it with the Collision Risk Index (CRI). Simulation experiments demonstrate that the I-DE, compared with the other three algorithms, can more effectively avoid collisions with target ships and obstacles while reducing deviation distance, ensuring both safety and economy. The primary contributions of this paper are outlined as follows:
+
+(1) The crossover factor $\mathrm{{CR}}$ in the Differential Evolution algorithm (DE) is adaptively improved, enhancing population diversity while maintaining the relative independence of individuals. This allows the algorithm to search the solution space appropriately according to the different iteration stages.
+
+(2) The CRI is integrated with the DE, and a restriction factor is added when selecting the degree of membership for the Distance to the Closest Point of Approach (DCPA). This makes the calculation of collision risk more aligned with maritime practice.
+
+## II. SYSTEM MODEL
+
+## A. Differential evolution algorithm
+
+Differential Evolution(DE)[10] is an algorithm used to solve continuous optimization problems. It primarily involves five steps: population initialization, fitness evaluation, differential mutation, crossover operation, and selection of new individuals.
+
+1) Population initialization: Initially, a population of size M is formed by randomly generating M individuals, where each individual is an n-dimensional vector. The size of the population affects the search capabilities of the algorithm and the use of computational resources. Generally, a larger population enhances the algorithm's global search capabilities but also increases computational costs.
+
+$$
+{X}_{i}\left( 0\right) = \left( {{x}_{i,1}\left( 0\right) ,{x}_{i,2}\left( 0\right) ,{x}_{i,3}\left( 0\right) ,\ldots ,{x}_{i, n}\left( 0\right) }\right) \tag{1}
+$$
+
+$$
+{X}_{i, j}\left( 0\right) = {X}_{i\min } + \operatorname{rand}\left( {0,1}\right) \left( {{X}_{i\max } - {X}_{i\min }}\right) \tag{2}
+$$
+
+$$
+i = 1,2,3,\ldots , M, j = 1,2,3,\ldots n \tag{3}
+$$
+
+Here, ${X}_{i}\left( 0\right)$ denotes an individual, ${X}_{i, j}\left( 0\right)$ denotes the j-th component of that individual, and ${X}_{i\min }$ and ${X}_{i\max }$ specify the lower and upper bounds of this component, respectively.
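+As a minimal sketch of Eqs. (1)-(2) (in Python, which the paper does not prescribe), each component of each individual is drawn uniformly between its per-dimension bounds:
+
+```python
+import random
+
+# Sketch of population initialization, Eqs. (1)-(2): M individuals, each
+# an n-dimensional vector sampled uniformly between x_min and x_max.
+
+def init_population(M, n, x_min, x_max):
+    return [[x_min[j] + random.random() * (x_max[j] - x_min[j])
+             for j in range(n)]
+            for _ in range(M)]
+
+pop = init_population(M=20, n=5, x_min=[0.0] * 5, x_max=[10.0] * 5)
+```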
+
+2) Fitness evaluation: When calculating the fitness of the population individuals (the objective function value), it is necessary to define the objective function based on the specific problem. By designing appropriate objective functions, the algorithm can adapt to various optimization needs and complex problem environments, demonstrating high flexibility and adaptability. In this paper, the fitness is employed to assess the quality of the path points.
+
+3) Differential mutation: Below are descriptions of several mutation strategies that have been extensively researched: DE/rand/1:
+
+$$
+{V}_{i}\left( G\right) = {X}_{r1}\left( G\right) + F \times \left( {{X}_{r2}\left( G\right) - {X}_{r3}\left( G\right) }\right) \tag{4}
+$$
+
+DE/best/1:
+
+$$
+{V}_{i}\left( G\right) = {X}_{\text{best }}\left( G\right) + F \times \left( {{X}_{r1}\left( G\right) - {X}_{r2}\left( G\right) }\right) \tag{5}
+$$
+
+Using DE/rand/1 as an illustration, ${X}_{r1}\left( G\right)$, ${X}_{r2}\left( G\right)$, and ${X}_{r3}\left( G\right)$ are three different vectors randomly selected from the parent generation, with $r1 \neq r2 \neq r3 \neq i \in \{ 1,2,3,\ldots , M\}$. F is the scaling factor, ranging from 0 to 2 and typically set to 0.5. ${V}_{i}\left( G\right)$ is the new vector generated by the mutation strategy. Different mutation strategies have different population optimization abilities. To better understand the common properties of the various mutation strategies, Feoktistov summarized them in the general form ${V}_{i} = {\beta }_{i} + F \times {\delta }_{i}$, where ${\beta }_{i}$ serves as the base vector and ${\delta }_{i}$ acts as the differential vector.
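+The two strategies above can be sketched directly from the formulas (Python is an assumption; the paper gives only the mathematics):
+
+```python
+import random
+
+# Sketch of the mutation strategies in Eqs. (4)-(5). pop is a list of
+# n-dimensional vectors, i is the current individual's index, and F is
+# the scaling factor (0.5, as the text suggests).
+
+def mutate_rand_1(pop, i, F=0.5):
+    """DE/rand/1: V_i = X_r1 + F * (X_r2 - X_r3), with r1, r2, r3 != i."""
+    r1, r2, r3 = random.sample([k for k in range(len(pop)) if k != i], 3)
+    return [pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
+            for j in range(len(pop[i]))]
+
+def mutate_best_1(pop, i, best, F=0.5):
+    """DE/best/1: V_i = X_best + F * (X_r1 - X_r2), with r1, r2 != i."""
+    r1, r2 = random.sample([k for k in range(len(pop)) if k != i], 2)
+    return [best[j] + F * (pop[r1][j] - pop[r2][j])
+            for j in range(len(best))]
+```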
+
+4) Crossover operation:
+
+$$
+{U}_{i, j}\left( G\right) = \left\{ \begin{array}{ll} {V}_{i, j}\left( G\right) , & \text{ rand }\lbrack 0,1) < {CR}\text{ or }j = \text{ jrand } \\ {X}_{i, j}\left( G\right) , & \text{ otherwise } \end{array}\right. \tag{6}
+$$
+
+The crossover factor $CR$ ranges from 0 to 1. $j$ is the current vector's dimension, and $jrand$ is a dimension randomly selected from the range 1 to $n$. The condition $j = {jrand}$ guarantees that at least one dimension of the new individual comes from the mutant individual, thereby preventing it from being identical to the initial individual. The crossover process is illustrated in Fig. 1.
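Binomial crossover as in Eq. (6) can be sketched as follows (illustrative names; NumPy assumed):

```python
import numpy as np

def binomial_crossover(x, v, CR, seed=None):
    """Eq. (6): take the mutant component v_j when rand[0,1) < CR or j == jrand, else keep x_j."""
    rng = np.random.default_rng(seed)
    n = len(x)
    mask = rng.random(n) < CR
    mask[rng.integers(n)] = True  # j == jrand: at least one dimension comes from the mutant
    return np.where(mask, v, x)
```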
+
+5) Selection of new individuals: The selection operation evaluates the fitness values of individuals to steer the population toward a better direction. The direction of population evolution is determined by the following formula:
+
+$$
+{X}_{i}\left( {G + 1}\right) = \left\{ \begin{array}{ll} {U}_{i}\left( G\right) , & f\left( {{U}_{i}\left( G\right) }\right) \leq f\left( {{X}_{i}\left( G\right) }\right) \\ {X}_{i}\left( G\right) , & \text{ otherwise } \end{array}\right. \tag{7}
+$$
+
+Here, $f\left( {{U}_{i}\left( G\right) }\right)$ and $f\left( {{X}_{i}\left( G\right) }\right)$ are the fitness of the new individual and the initial individual, respectively.
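Putting steps 3)-5) together, one DE generation can be sketched as below; this is a minimal illustration assuming DE/rand/1 and a fitness function `f` to be minimized.

```python
import numpy as np

def de_generation(pop, f, F=0.5, CR=0.9, seed=None):
    """One generation: DE/rand/1 mutation (Eq. 4), binomial crossover (Eq. 6),
    and greedy selection (Eq. 7) applied to every individual."""
    rng = np.random.default_rng(seed)
    M, n = pop.shape
    new_pop = pop.copy()
    for i in range(M):
        r1, r2, r3 = rng.choice([k for k in range(M) if k != i], size=3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])      # mutation
        mask = rng.random(n) < CR
        mask[rng.integers(n)] = True               # j == jrand
        u = np.where(mask, v, pop[i])              # crossover
        if f(u) <= f(pop[i]):                      # selection: keep the no-worse vector
            new_pop[i] = u
    return new_pop
```

Because selection is greedy, the best fitness in the population can never increase from one generation to the next.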
+
+## B. Ship encounter situations and responsibility allocation
+
+In areas with good visibility, collision avoidance behavior should comply with Rules 8, 13, 14, and 15 of the International Regulations for Preventing Collisions at Sea (COLREGs). Rule 8 explicitly stipulates the actions to be taken to avoid collisions, while Rules 13 to 15 define the different encounter situations: overtaking, head-on, and crossing encounters. Therefore, this paper incorporates the COLREGs and fully considers the implications of the ship encounter situations on collision avoidance behavior. Based on the course angles and positions of the two ships, the encounter between ships is classified into four scenarios. The classification is detailed in Table I.
+
+When encountering another ship head-on, both ships share equal responsibility to give way. In overtaking situations, the overtaking ship has the responsibility to give way, while the ship being overtaken should preserve its original state. In a left-crossing scenario, the own ship should preserve the original state with the other ship bearing the responsibility to give way. Conversely, in a right crossing situation, the own ship has the duty to give way, while the other ship should preserve the original state.
+
+
+
+Fig. 1 Crossover operation
+
+TABLE I
+
+SHIP ENCOUNTER SITUATION CLASSIFICATION
+
+| True bearing of TS to OS/° | Course difference/° | Encounter |
+| --- | --- | --- |
+| ${354} \leq {\theta }_{r} \leq 6$ | ${174} \leq {\Delta C} \leq {186}$ | Head-on |
+| ${247.5} \leq {\theta }_{r} < {354}$ | ${67.5} \leq {\Delta C} < {174}$ | Left-Crossing |
+| $6 < {\theta }_{r} \leq {112.5}$ | ${186} < {\Delta C} \leq {292.5}$ | Right-Crossing |
+| ${112.5} < {\theta }_{r} < {247.5}$ | ${\Delta C} < {67.5}$ or ${\Delta C} > {292.5}$ | Overtaking |
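The classification of Table I can be sketched as a lookup function; angles are in degrees, and the wrap-around of the head-on sector through 0° is handled explicitly.

```python
def classify_encounter(theta_r, delta_c):
    """Classify the encounter per Table I from the true bearing theta_r of the
    TS relative to OS and the course difference delta_c (both in degrees)."""
    theta_r %= 360.0
    delta_c %= 360.0
    if (theta_r >= 354.0 or theta_r <= 6.0) and 174.0 <= delta_c <= 186.0:
        return "Head-on"
    if 247.5 <= theta_r < 354.0 and 67.5 <= delta_c < 174.0:
        return "Left-Crossing"
    if 6.0 < theta_r <= 112.5 and 186.0 < delta_c <= 292.5:
        return "Right-Crossing"
    if 112.5 < theta_r < 247.5 and (delta_c < 67.5 or delta_c > 292.5):
        return "Overtaking"
    return "None"
```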
+
+## III. ALGORITHM IMPROVEMENTS
+
+This section introduces the I-DE. Firstly, the crossover factor CR in the crossover operation is adaptively improved[11]. Concurrently, enhancements are made to the traditional CRI model by incorporating a restriction factor when selecting the membership function for DCPA, thus aligning the calculation of CRI more closely with maritime practices. Finally, the COLREGs and CRI are incorporated into the fitness function evaluation, forming an evaluation set based on safety, economy, compliance with COLREGs, and optimal collision avoidance timing. The safety factor is determined by the CRI.
+
+## A. Adaptive cross-factor ${CR}$
+
+The crossover factor CR determines the likelihood of each dimension of an individual being altered. A larger CR value facilitates the more effective transfer of information from the mutant individual to the initial individual, while a smaller CR value, although reducing the transfer of information, enhances the independence between individuals. Therefore, an adaptive CR mechanism is proposed to balance the above two effects, with the following improvements:
+
+$$
+{C}_{{R}_{n}} = \left\{ \begin{array}{ll} {C}_{{R}_{1}}, & f\left( {x}_{n}^{G}\right) > f\left( {x}_{\text{avg }}^{G}\right) \\ {C}_{{R}_{0}} + \frac{\left( {{C}_{{R}_{1}} - {C}_{{R}_{0}}}\right) \left( {f\left( {x}_{\text{avg }}^{G}\right) - f\left( {x}_{n}^{G}\right) }\right) }{f\left( {x}_{\text{avg }}^{G}\right) - f\left( {x}_{\text{min }}^{G}\right) }, & f\left( {x}_{n}^{G}\right) \leq f\left( {x}_{\text{avg }}^{G}\right) \end{array}\right. \tag{8}
+$$
+
+$f\left( {x}_{n}^{G}\right)$ and $f\left( {x}_{\text{avg }}^{G}\right)$ denote the fitness of the n-th individual and the average fitness of all individuals, respectively. $f\left( {x}_{\min }^{G}\right)$ denotes the lowest fitness across all individuals.
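A sketch of Eq. (8), reading the second branch as ${C}_{{R}_{0}}$ plus the scaled term so that CR varies between ${C}_{{R}_{0}}$ and ${C}_{{R}_{1}}$; the default bounds 0.1 and 0.9 are illustrative, not from the paper.

```python
def adaptive_cr(f_n, f_avg, f_min, cr0=0.1, cr1=0.9):
    """Adaptive crossover factor (Eq. 8): worse-than-average individuals take
    the large cr1 to absorb more mutant information; better individuals scale
    CR between cr0 (at average fitness) and cr1 (at the population minimum)."""
    if f_n > f_avg or f_avg == f_min:  # second condition guards a degenerate population
        return cr1
    return cr0 + (cr1 - cr0) * (f_avg - f_n) / (f_avg - f_min)
```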
+
+## B. Collision risk index (CRI)
+
+The CRI[12] is a fuzzy index used to assess collision risk, representing the likelihood of a collision occurring between ships. It is affected by external factors like the speed and course of the ship, along with the subjective factors of the operator. This paper constructs a collision risk model using three factors: DCPA, TCPA, and the inter-ship distance D. Additionally, a restriction factor is added when selecting the membership function of DCPA to improve the rationality of the selection. The set of factors for the CRI is established as follows:
+
+$$
+U = \{ {DCPA},{TCPA}, D\} \tag{9}
+$$
+
+Define the membership functions for each factor:
+
+1) Membership function of DCPA
+
+Take the own ship (OS)'s position as the origin to establish a spatial right-angled coordinate system; the OS's coordinates are set at $\left( {{x}_{O},{y}_{O}}\right)$, with speed and heading ${v}_{O}$ and ${\varphi }_{O}$, respectively. Similarly, the target ship (TS)'s position, speed, and heading are set to $\left( {{x}_{T},{y}_{T}}\right)$, ${v}_{T}$, and ${\varphi }_{T}$, respectively. The true bearings of OS to the TS and of the TS to OS are ${a}_{OT}$ and ${a}_{TO}$, respectively. The relative speed between the two ships is ${v}_{R}$.
+
+In previous studies, the selection of the membership function for DCPA only considered the safe distance of approach (SDA) ${r}_{1}$ and the absolute safe distance of approach ${r}_{2}$ , without considering whether the ship domains of OS and the TS were infringed upon. Fig. 2 illustrates various situations where the ship domains of OS and the TS are infringed upon. Therefore, the membership function for DCPA is improved to address this issue.
+
+Establish a coordinate system with the TS as the origin, the direction of the bow as the positive y-axis, and the direction perpendicular to the bow to the right as the positive $\mathrm{x}$ -axis. Perform a coordinate transformation for the position of OS:
+
+$$
+{x}_{O1} = D\sin {\beta }_{0},{y}_{O1} = D\cos {\beta }_{0},{\beta }_{0} = {a}_{OT} - {\varphi }_{T} + {\gamma }_{1}\text{,} \tag{10}
+$$
+
+$$
+{\gamma }_{1} = \left\{ \begin{array}{ll} {360}, & {a}_{OT} - {\varphi }_{T} \leq 0 \\ 0, & {a}_{OT} - {\varphi }_{T} > 0 \end{array}\right. \tag{11}
+$$
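Eqs. (10)-(11) can be sketched as follows (angles in degrees; names are illustrative):

```python
import math

def os_in_ts_frame(D, a_OT, phi_T):
    """Eqs. (10)-(11): place OS in the TS-centred frame whose y-axis points
    along the TS's bow and whose x-axis points to starboard."""
    beta0 = a_OT - phi_T
    if beta0 <= 0.0:        # gamma_1 = 360 wraps the relative bearing into (0, 360]
        beta0 += 360.0
    rad = math.radians(beta0)
    return D * math.sin(rad), D * math.cos(rad)  # (x_O1, y_O1)
```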
+
+Based on the transformed coordinates $\left( {{x}_{O1},{y}_{O1}}\right)$ , the relative motion line equation of OS relative to the TS is obtained:
+
+$$
+y = \cot \left( {{\varphi }_{R} - {\varphi }_{T}}\right) x + \left( {{y}_{O1} - {x}_{O1}\cot \left( {{\varphi }_{R} - {\varphi }_{T}}\right) }\right) \tag{12}
+$$
+
+$$
+{\varphi }_{R} = \left\{ \begin{array}{ll} \arctan \frac{{v}_{OTx}}{{v}_{OTy}} + \theta , & \text{otherwise} \\ {90}, & {v}_{OTx} \geq 0,{v}_{OTy} = 0 \\ {270}, & {v}_{OTx} < 0,{v}_{OTy} = 0 \end{array}\right. \tag{13}
+$$
+
+$$
+\theta = \left\{ \begin{array}{ll} 0 & {v}_{OTx} \geq 0,{v}_{OTy} > 0 \\ {180} & {v}_{OTx} \geq 0,{v}_{OTy} < 0\text{ or }{v}_{OTx} < 0,{v}_{OTy} < 0 \\ {360} & {v}_{OTx} < 0,{v}_{OTy} > 0 \end{array}\right. \tag{14}
+$$
+
+Due to the change in the coordinate system, the relative motion line equation also needs to be transformed:
+
+$$
+x = \cot \left( {{\varphi }_{R} - {\varphi }_{T}}\right) y + \left( {{y}_{O1} - {x}_{O1}\cot \left( {{\varphi }_{R} - {\varphi }_{T}}\right) }\right) \tag{15}
+$$
+
+
+
+Fig. 2 Various situations of OS's and the TS's domains being intruded upon: (a) the TS does not intrude into OS's domain, but OS intrudes into the TS's domain; (b) OS does not intrude into the TS's domain, but the TS intrudes into OS's domain; (c) both ships intrude into each other's domains.
+
+As shown in Fig. 3, when OS intrudes into the TS's domain, the relative motion line of OS to the TS will intersect with the boundary of the TS's domain. Therefore, by checking whether such an intersection point exists, one can determine whether OS has intruded into the TS's domain.
+
+Similarly, by analyzing whether the relative motion line of the TS to OS intersects with the boundary of OS's domain, one can determine whether the TS has intruded into OS's domain. The improved membership function is as follows:
+
+$$
+{k}_{DCPA} = \left\{ \begin{array}{ll} 1, & {DCPA} < {r}_{1}\text{ or }{p}_{1} > 0\text{ or }{p}_{2} > 0 \\ \frac{1}{2} - \frac{1}{2}\sin \left( {\frac{{180}^{ \circ }}{{r}_{2} - {r}_{1}}\left( {{DCPA} - \frac{{r}_{2} + {r}_{1}}{2}}\right) }\right) , & {r}_{1} < {DCPA} < {r}_{2} \\ 0, & {DCPA} \geq {r}_{2} \end{array}\right. \tag{16}
+$$
+
+Here, ${p}_{1}$ is the number of intersection points between OS's relative motion line to the TS and the boundary of the TS's domain, and ${p}_{2}$ is the number of intersection points between the TS's relative motion line to OS and the boundary of OS's domain.
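The improved membership function can be sketched as below, reading the first branch of Eq. (16) as full risk when $DCPA < r_1$ or when either domain is intruded, i.e. $p_1 > 0$ or $p_2 > 0$.

```python
import math

def k_dcpa(dcpa, r1, r2, p1=0, p2=0):
    """DCPA membership (Eq. 16) with the restriction factor: p1 and p2 count
    intersections between each relative motion line and the other ship's domain."""
    if dcpa < r1 or p1 > 0 or p2 > 0:
        return 1.0
    if dcpa >= r2:
        return 0.0
    return 0.5 - 0.5 * math.sin(math.radians(180.0 / (r2 - r1) * (dcpa - (r1 + r2) / 2.0)))
```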
+
+2) Membership function of TCPA
+
+$$
+{k}_{TCPA} = \left\{ \begin{array}{ll} 1, & {TCPA} \leq {T}_{1} \\ {\left( \frac{{T}_{2} - {TCPA}}{{T}_{2} - {T}_{1}}\right) }^{2}, & {T}_{1} < {TCPA} \leq {T}_{2} \\ 0, & {TCPA} > {T}_{2}\text{ or }{DCPA} > {d}_{4} \end{array}\right. \tag{17}
+$$
+
+$$
+{T}_{1} = \left\{ \begin{array}{ll} \frac{\sqrt{{d}_{3}^{2} - {DCP}{A}^{2}}}{{v}_{R}}, & {DCPA} \leq {d}_{3} \\ \frac{{DCPA} - {d}_{3}}{{v}_{R}}, & {DCPA} > {d}_{3} \end{array}\right. \tag{18}
+$$
+
+$$
+{T}_{2} = \frac{\sqrt{{d}_{4}^{2} - {DCP}{A}^{2}}}{{v}_{R}} \tag{19}
+$$
+
+Here, ${d}_{3}$ represents the latest avoidance distance for the burdened ship, and ${d}_{4}$ represents the distance over which the ship is capable of taking evasive action.
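Eqs. (17)-(19) can be sketched as follows (${v}_{R}$, ${d}_{3}$, ${d}_{4}$ as defined above; the square root in Eq. (19) is clamped at zero for $DCPA > d_4$):

```python
import math

def k_tcpa(tcpa, dcpa, v_r, d3, d4):
    """TCPA membership (Eqs. 17-19); risk decays quadratically between T1 and T2."""
    if dcpa <= d3:
        t1 = math.sqrt(d3 ** 2 - dcpa ** 2) / v_r
    else:
        t1 = (dcpa - d3) / v_r
    t2 = math.sqrt(max(d4 ** 2 - dcpa ** 2, 0.0)) / v_r
    if tcpa <= t1:
        return 1.0
    if tcpa <= t2:
        return ((t2 - tcpa) / (t2 - t1)) ** 2
    return 0.0
```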
+
+3) Membership function of the distance between two ships (D)
+
+$$
+{k}_{D} = \left\{ \begin{matrix} 1, & 0 \leq D \leq {d}_{3} \\ {\left( \frac{{d}_{4} - D}{{d}_{4} - {d}_{3}}\right) }^{2}, & {d}_{3} < D \leq {d}_{4} \\ 0, & D > {d}_{4} \end{matrix}\right. \tag{20}
+$$
+
+
+
+Fig. 3 The relative motion lines of the two ships intersect at the boundary of the ship domain.
+
+Establish the weight set $W$ based on the importance of each factor in calculating the ${CRI}$ .
+
+$$
+W = \left\{ {{W}_{DCPA},{W}_{TCPA},{W}_{D}}\right\} \tag{21}
+$$
+
+where ${W}_{DCPA} > {W}_{TCPA} > {W}_{D}$ and ${W}_{DCPA} + {W}_{TCPA} + {W}_{D} = 1$ .
+
+$$
+{CRI} = W \times k = {W}_{DCPA}{k}_{DCPA} + {W}_{TCPA}{k}_{TCPA} + {W}_{D}{k}_{D} \tag{22}
+$$
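Eqs. (20)-(22) combine into the weighted CRI; the sketch below uses the experimental weights from Section IV (${W}_{DCPA} = 0.4$, ${W}_{TCPA} = 0.4$, ${W}_{D} = 0.2$) as illustrative defaults.

```python
def k_d(D, d3, d4):
    """Distance membership (Eq. 20): quadratic decay between d3 and d4."""
    if D <= d3:
        return 1.0
    if D <= d4:
        return ((d4 - D) / (d4 - d3)) ** 2
    return 0.0

def cri(k_dcpa_val, k_tcpa_val, k_d_val, w=(0.4, 0.4, 0.2)):
    """CRI (Eq. 22): weighted sum of the three memberships; weights sum to 1 (Eq. 21)."""
    return sum(wi * ki for wi, ki in zip(w, (k_dcpa_val, k_tcpa_val, k_d_val)))
```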
+
+## C. Fitness function value (Fitness)
+
+Incorporate the COLREGs and the CRI into the evaluation of the fitness function, forming an evaluation set $\mathrm{F}$ based on factors of safety, economy, compliance with COLREGs, and optimal collision avoidance timing. The CRI determines the safety factor in the fitness function, while the voyage distance and the degree of turning together determine the economic factor in the fitness function.
+
+Constructing CRI-based objective function:
+
+$$
+{F}_{1} = {CRI} \tag{23}
+$$
+
+In path planning problems, the total voyage determines the cost incurred during navigation and serves as an important economic assessment index. During navigation, $\left( {{x}_{i},{y}_{i}}\right)$ represents the current point, and $\left( {{x}_{i - 1},{y}_{i - 1}}\right)$ is the previous point adjacent to $\left( {{x}_{i},{y}_{i}}\right)$, with the total number of path points being $\mathrm{n}$ and the destination point being $\left( {{x}_{n},{y}_{n}}\right)$. Constructing the total voyage-based objective function:
+
+$$
+{F}_{2} = \frac{\sqrt{{\left( {x}_{i} - {x}_{i - 1}\right) }^{2} + {\left( {y}_{i} - {y}_{i - 1}\right) }^{2}}}{\sqrt{{\left( {x}_{n} - {x}_{i - 1}\right) }^{2} + {\left( {y}_{n} - {y}_{i - 1}\right) }^{2}}} + \frac{\sqrt{{\left( {x}_{n} - {x}_{i}\right) }^{2} + {\left( {y}_{n} - {y}_{i}\right) }^{2}}}{\sqrt{{\left( {x}_{n} - {x}_{i - 1}\right) }^{2} + {\left( {y}_{n} - {y}_{i - 1}\right) }^{2}}} \tag{24}
+$$
+
+Constructing degree of turning-based objective function:
+
+$$
+{F}_{3} = \arccos \left( \frac{\left( {{x}_{i} - {x}_{i - 1},{y}_{i} - {y}_{i - 1}}\right) \cdot {\left( {x}_{n} - {x}_{i},{y}_{n} - {y}_{i}\right) }^{T}}{\begin{Vmatrix}\left( {x}_{i} - {x}_{i - 1},{y}_{i} - {y}_{i - 1}\right) \end{Vmatrix}\begin{Vmatrix}\left( {x}_{n} - {x}_{i},{y}_{n} - {y}_{i}\right) \end{Vmatrix}}\right) \tag{25}
+$$
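The two economic objectives can be sketched as follows (points as `(x, y)` tuples; in Eq. (25) the denominator is taken as the product of the two segment norms):

```python
import math

def f2_voyage(p_prev, p_cur, p_goal):
    """Voyage objective (Eq. 24): step length plus remaining distance,
    each normalized by the previous point's distance to the goal."""
    d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return (d(p_cur, p_prev) + d(p_goal, p_cur)) / d(p_goal, p_prev)

def f3_turning(p_prev, p_cur, p_goal):
    """Turning objective (Eq. 25): angle (radians) between the incoming segment
    and the direction to the goal; 0 means no course change is needed."""
    ax, ay = p_cur[0] - p_prev[0], p_cur[1] - p_prev[1]
    bx, by = p_goal[0] - p_cur[0], p_goal[1] - p_cur[1]
    cosang = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.acos(max(-1.0, min(1.0, cosang)))
```

On a straight run toward the goal, $F_2 = 1$ and $F_3 = 0$; any detour or turn raises both terms.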
+
+Based on the encounter situations and responsibility allocation described earlier, the objective function constructed according to the COLREGs is:
+
+$$
+{F}_{4} = \left\{ \begin{matrix} 1 & {000}^{ \circ } \leq {\theta }_{r} \leq {112.5}^{ \circ }\text{ or }{355}^{ \circ } \leq {\theta }_{r} \leq {005}^{ \circ } \\ 0 & \text{ otherwise } \end{matrix}\right. \tag{26}
+$$
+
+
+
+Fig. 4 Simulation results of a two-ship encounter
+
+TABLE II.
+
+SHIP PARAMETERS.
+
+| Parameter | OS | TS |
+| --- | --- | --- |
+| Length Overall/m | 67.80 | 146.00 |
+| Beam/m | 16.00 | 36.00 |
+| Draft/m | 2.633 | 3.500 |
+| Displacement/t | 1850 | 6530.5 |
+| Water density/(t/m³) | 1.025 | 1.025 |
+
+TABLE III.
+
+INITIAL STATES OF THE EXPERIMENTAL OBJECTS IN TWO SHIP ENCOUNTER
+
+| Ship | Initial heading/° | Initial speed/kn | Distance from OS/n mile |
+| --- | --- | --- | --- |
+| OS | 225 | 12 | 0 |
+| TS | 0 | 12 | 3.51 |
+| Obstacle | none | none | 2.14 |
+
+The timing of collision avoidance depends on the CRI value. Therefore, based on previous research, an objective function for optimal collision avoidance timing is constructed with a threshold value of 0.3:
+
+$$
+{F}_{5} = \left\{ \begin{array}{ll} 1 & {CRI} \geq {0.3} \\ 0 & {CRI} < {0.3} \end{array}\right. \tag{27}
+$$
+
+$$
+\text{ Fitness } = {W}_{1}{F}_{1} + {W}_{2}{F}_{2} + {W}_{3}{F}_{3} + {W}_{4}{F}_{4} + {W}_{5}{F}_{5} \tag{28}
+$$
+
+where ${W}_{1}$, ${W}_{2}$, ${W}_{3}$, ${W}_{4}$, and ${W}_{5}$ are the weights for safety, total voyage, degree of turning, COLREGs compliance, and optimal collision avoidance timing, respectively.
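Eqs. (26)-(28) aggregate into the overall fitness; the sketch below uses the Section IV weights as defaults and takes $F_1$ (the CRI), $F_2$, and $F_3$ as already computed.

```python
def f4_colregs(theta_r):
    """COLREGs term (Eq. 26): 1 in the give-way sector (355 deg through
    112.5 deg, wrapping through 0), else 0; theta_r in degrees."""
    theta_r %= 360.0
    return 1 if theta_r <= 112.5 or theta_r >= 355.0 else 0

def f5_timing(cri_val, threshold=0.3):
    """Timing term (Eq. 27): active once the CRI reaches the 0.3 threshold."""
    return 1 if cri_val >= threshold else 0

def total_fitness(f1, f2, f3, f4, f5, w=(0.4, 0.2, 0.2, 0.1, 0.1)):
    """Overall fitness (Eq. 28): weighted sum of safety (F1 = CRI), voyage (F2),
    turning (F3), COLREGs compliance (F4), and avoidance timing (F5)."""
    return sum(wi * fi for wi, fi in zip(w, (f1, f2, f3, f4, f5)))
```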
+
+## IV. EXPERIMENT
+
+This paper validates the effectiveness of the proposed Improved Differential Evolution (I-DE) algorithm through simulation experiments on the Matlab platform. The algorithm is compared with the traditional DE, PSO, and GA, using safety and economy as evaluation criteria.
+
+The simulation experiments are set in open waters with good visibility, ignoring external factors such as wind, waves, and currents. The experiments simulated scenarios involving two-ship and three-ship encounters, incorporating static obstacles in the two-ship encounters to more comprehensively assess the autonomous obstacle avoidance capability of the algorithm.
+
+
+
+Fig. 5 The states of the ships at specific time intervals in two ship encounter
+
+TABLE IV.
+
+INITIAL STATES OF THE EXPERIMENTAL OBJECTS IN THREE SHIP ENCOUNTER
+
+| Ship | Initial heading/° | Initial speed/kn | Distance from OS/n mile |
+| --- | --- | --- | --- |
+| OS | 50 | 12 | 0 |
+| TS1 | 180 | 12 | 6.45 |
+| TS2 | 280 | 12 | 6.64 |
+
+TABLE V.
+
+VARIOUS DATA OF SIMULATION RESULTS IN TWO SHIP ENCOUNTER
+
+| Algorithms | I-DE | PSO | GA | DE |
+| --- | --- | --- | --- | --- |
+| Min Dis to obstacle/n mile | 0.499917 | 0.404968 | 0.421491 | 0.464535 |
+| Min Dis to TS/n mile | 1.241208 | 0.639673 | 1.327054 | 0.561472 |
+| Sum deviation Dis/n mile | 8.960443 | 9.832715 | 22.624526 | 10.548143 |
+| Runtime/s | 6.7743 | 5.1392 | 12.1593 | 5.407 |
+
+TABLE VI.
+
+VARIOUS DATA OF SIMULATION RESULTS IN THREE SHIP ENCOUNTER
+
+| Algorithms | I-DE | PSO | GA | DE |
+| --- | --- | --- | --- | --- |
+| Min Dis to TS1/n mile | 2.409358 | 2.343583 | 2.593572 | 2.426915 |
+| Min Dis to TS2/n mile | 1.010833 | 0.916672 | 1.176653 | 1.083573 |
+| Sum deviation Dis/n mile | 22.762838 | 47.657674 | 31.220230 | 28.404747 |
+| Runtime/s | 10.0148 | 8.352 | 18.726 | 9.7261 |
+
+
+
+Fig. 6 Simulation results of a three-ship encounter
+
+According to Rule 8 of the COLREGs, in open waters with ample space, changing course is the most effective measure to prevent collisions at close range, provided that a timely and effective evasive maneuver does not bring the two ships within the safe distance again. Therefore, to facilitate the study, and in line with the collision avoidance rules and the usual practices of seafarers at sea, the USV, when acting as the give-way ship, avoids collision by changing course rather than by reducing speed or stopping. The parameters and initial states of the experimental objects are displayed in Table II to Table IV.
+
+To ensure that the algorithms are tested under fair conditions, the parameters used in the experiments are uniformly set as follows: total number of iterations $K = {50}$, initial population size $M = {40}$, dimension of each individual $n = 5$, ${W}_{DCPA} = {0.4}$, ${W}_{TCPA} = {0.4}$, ${W}_{D} = {0.2}$, ${W}_{1} = {0.4}$, ${W}_{2} = {0.2}$, ${W}_{3} = {0.2}$, ${W}_{4} = {0.1}$, and ${W}_{5} = {0.1}$.
+
+
+
+Fig. 7 The states of the ships at specific time intervals in three ship encounter
+
+## A. Two-ship encounters
+
+The outcomes of the simulation are illustrated in Figs. 4 and 5. Fig. 4 displays the complete paths planned by the various algorithms, while Fig. 5 displays the states of the ships at specific time intervals, reflecting the real-time distance between OS and the TS and providing data support for evaluating safety. In these figures, the blue ship represents OS, the black ship represents the TS, and the black hexagons represent the static obstacles. The simulation results are detailed in Table V. By comparing the minimum distances (MD) between OS and the TS and between OS and the obstacle, the safety of the paths can be evaluated. The total deviation distance between the planned path and the initial path can be used to assess the economy of the path. Additionally, the table shows the runtime of each algorithm.
+
+As evidenced by the experimental data, the PSO algorithm and the traditional DE algorithm perform poorly in terms of navigational safety. The paths planned by these two algorithms result in OS maintaining a relatively close distance to the TS during navigation, posing a higher collision risk. In contrast, the paths planned by the I-DE algorithm and the GA maintain a greater distance between the ships, better ensuring navigational safety. By comparing the MD between OS and the obstacle, it is evident that the I-DE algorithm also performs best in avoiding collisions with an obstacle, presenting the lowest collision risk.
+
+As shown in Fig. 4, in the initial stage, there is a considerable distance between the OS and the obstacle, and no evasive action is needed. However, the path planned by the GA deviates from the original route at the outset. This premature maneuver increases the deviation distance of OS, thereby reducing the path's economic efficiency. In contrast, the paths planned by the I-DE algorithm, the PSO algorithm, and the traditional DE algorithm closely adhere to the original path when far from the obstacle. These algorithms start taking evasive actions at approximately 1 nautical mile from the obstacle, adjusting the course to avoid collisions and passing the obstacle from a greater distance. Once the obstacle is safely passed, the ship gradually returns to the original path and eventually reaches the target point. According to the data in Table V, the total deviation distance for the GA is the largest, while the other three algorithms have relatively smaller deviation distances, indicating better economic efficiency.
+
+## B. Three-ship encounters
+
+Similar to the two-ship encounter scenario, the simulation results are presented in Fig. 6 and Fig. 7. The experimental findings indicate that the I-DE algorithm, along with the other three algorithms, ensures that the ship safely navigates through encounters with target ships, successfully avoiding collisions and demonstrating good safety performance.
+
+However, there are significant differences in economic performance among the four algorithms. The PSO algorithm results in the ship deviating significantly from its original course after avoiding TS2, with the total deviation distance reported in Table VI being 47.65 nautical miles, which compromises the economic efficiency of the path. In contrast, the paths planned by the I-DE, DE, and GA algorithms show a tendency to approach the original course after the avoidance maneuver with TS2. As shown in Fig. 6, the I-DE algorithm enables the ship to smoothly return to its original course after avoidance. According to the data in Table VI, the total deviation distances for the I-DE, DE, and GA algorithms are 22.76 nautical miles, 28.40 nautical miles, and 31.22 nautical miles, respectively, indicating that the path planned by the I-DE algorithm exhibits better economic efficiency.
+
+Since the runtime is of lower importance in path evaluation and the differences in runtime among the algorithms are minimal, the impact of runtime is not considered. Considering both safety and economy, the I-DE algorithm demonstrates better performance in planning safe, economical paths, significantly outperforming the other algorithms. Therefore, the simulation experiments confirm the effectiveness of the improved differential evolution algorithm.
+
+## V. CONCLUSION
+
+To address the issue of path planning and collision avoidance for USVs in open waters, this paper proposes an improved adaptive differential evolution algorithm. This algorithm adaptively adjusts the crossover factor (CR) in the crossover operation, enhancing population diversity while increasing the independence among individuals. Additionally, the CRI is incorporated into the fitness function evaluation, and a restriction factor is added when selecting the membership function of DCPA, making the calculation of the collision risk more consistent with maritime practices. Simulations were conducted on the Matlab platform, comparing the I-DE algorithm with the PSO algorithm, the GA, and the traditional DE algorithm under two-ship and three-ship encounter scenarios. Safety is assessed by the minimum distances between the own ship and the obstacle and between the own ship and the target ships, and economy is assessed by the total deviation distance. The experimental outcomes indicate that the I-DE algorithm outperforms the others in terms of safety and economy, thereby validating its effectiveness.
+
+In future research on path planning and collision avoidance for USVs, it is essential to fully consider the interference of external factors as well as the maneuverability of the USV. Furthermore, attention should be given to aspects such as communication delays, human factors in decision-making, and integration with existing maritime traffic management systems, thereby ensuring closer alignment with maritime practices.
+
+## REFERENCES
+
+[1] Y. Singh, S. Sharma, R. Sutton, D. Hatton, and A. Khan, "A constrained A* approach towards optimal path planning for an unmanned surface vehicle in a maritime environment containing dynamic obstacles and ocean currents," Ocean Eng., vol. 169, pp. 187-201, Dec 2018.
+
+[2] C. Ntakolia and D. Lyridis, "A comparative study on Ant Colony Optimization algorithm approaches for solving multi-objective path planning problems in case of unmanned surface vehicles," Ocean Eng., vol. 255, Jul 2022, Art. no. 111418.
+
+[3] H. Guo, Z. Y. Mao, W. J. Ding, and P. L. Liu, "Optimal search path planning for unmanned surface vehicle based on an improved genetic algorithm," Comput. Electr. Eng., vol. 79, Oct 2019, Art. no. 106467.
+
+[4] J. F. Zhang, H. Zhang, J. J. Liu, D. Wu, and C. G. Soares, "A Two-Stage Path Planning Algorithm Based on Rapid-Exploring Random Tree for Ships Navigating in Multi-Obstacle Water Areas Considering COLREGs," J. Mar. Sci. Eng., vol. 10, no. 10, Oct 2022, Art. no. 1441.
+
+[5] J. Ren, J. Zhang, and Y. N. Cui, "Autonomous Obstacle Avoidance Algorithm for Unmanned Surface Vehicles Based on an Improved Velocity Obstacle Method," ISPRS Int. J. Geo-Inf., vol. 10, no. 9, Sep 2021, Art. no. 618.
+
+[6] Z. Zhang, D. F. Wu, J. D. Gu, and F. S. Li, "A Path-Planning Strategy for Unmanned Surface Vehicles Based on an Adaptive Hybrid Dynamic Stepsize and Target Attractive Force-RRT Algorithm," J. Mar. Sci. Eng., Article vol. 7, no. 5, p. 14, May 2019, Art. no. 132.
+
+[7] Z. Wang, Y. Liang, C. Gong, Y. Zhou, C. Zeng, and S. Zhu, "Improved Dynamic Window Approach for Unmanned Surface Vehicles' Local Path Planning Considering the Impact of Environmental Factors," Sensors, vol. 22, no. 14, Jul 11, 2022.
+
+[8] Z. Wang, G. F. Li, and J. Ren, "Dynamic path planning for unmanned surface vehicle in complex offshore areas based on hybrid algorithm," Comput. Commun., Article vol. 166, pp. 49-56, Jan 2021.
+
+[9] Y. Liang and L. Wang, "Applying genetic algorithm and ant colony optimization algorithm into marine investigation path planning model," Soft Comput., Article vol. 24, no. 11, pp. 8199-8210, Jun 2020.
+
+[10] R. Storn and K. Price, "Differential evolution - A simple and efficient heuristic for global optimization over continuous spaces," J. Glob. Optim., Article vol. 11, no. 4, pp. 341-359, Dec 1997.
+
+[11] B. Zhang, X. Sun, S. Liu, and X. Deng, "Adaptive Differential Evolution-Based Distributed Model Predictive Control for Multi-UAV Formation Flight," Int. J. Aeronaut. Space Sci., vol. 21, no. 2, pp. 538-548, Jun 2020.
+
+[12] M. C. Tsou, "Multi-target collision avoidance route planning under an ECDIS framework," Ocean Eng., Article vol. 121, pp. 268-278, Jul 2016.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/aSYzSmasZz/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/aSYzSmasZz/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..c295d78c3e9c8f4d52fa7abfd4422087984c639d
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/aSYzSmasZz/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,466 @@
+§ PATH PLANNING OF USV BASED ON THE IMPROVED DIFFERENTIAL EVOLUTION ALGORITHM
+
+Zhongming Xiao
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+xiaozhongming@dlmu.edu.cn
+
+Baoyi Hou
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+houbaoyi@dlmu.edu.cn
+
+Jun Ning
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+junning@dlmu.edu.cn
+
+Bin Lin
+
+Information Science and Technology College
+
+Dalian Maritime University
+
+Dalian, China
+
+binlin@dlmu.edu.cn
+
+Zhengjiang Liu
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+liuzhengjiang@dlmu.edu.cn
+
+Abstract- Planning a reasonable path and avoiding collisions with surrounding obstacles are among the most critical aspects of Unmanned Surface Vehicle (USV) navigation, which has drawn considerable attention from researchers in recent years, with various heuristic and intelligent optimization algorithms being applied to path planning. However, most existing algorithms have not sufficiently integrated safety and economy, leading to planned paths that may not align with maritime practice. Therefore, to tackle the aforementioned issues, this paper introduces a differential evolution algorithm (DE) with an adaptive crossover factor for path planning and collision avoidance of USVs. The collision risk index (CRI) is integrated with the DE, and the CRI is improved by introducing a restriction factor when selecting the degree of membership for the distance to the closest point of approach (DCPA). The experimental results demonstrate that, compared with the other three algorithms, the improved DE exhibits greater advantages in terms of minimum distance to the target ship, minimum distance to obstacles, and total yaw distance, thereby validating the effectiveness of the algorithm.
+
+Index Terms-path planning, collision avoidance, collision risk index, differential evolution algorithm.
+
+§ I. INTRODUCTION
+
+Unmanned surface vehicles (USVs) are intelligent control systems that integrate path planning, communications, autonomous decision-making, and automatic target recognition, along with a range of other advanced technologies. USVs utilize radar and AIS to continuously monitor their surroundings, enabling dynamic adjustments in course and speed to effectively avoid collisions with other ships or unknown obstacles at sea. With the continuous development of USV technology, the operational capabilities of USVs in various complex marine environments have steadily improved. Consequently, USVs are being increasingly used in diverse domains of daily life, such as waterway patrol and safety monitoring, ocean exploration and geological surveys, and marine biodiversity conservation.
+
+Path planning and collision avoidance technologies, as the core technologies of USVs, have played a crucial role in their development. In light of this, scholars have conducted extensive research on these technologies. In past studies, many researchers have applied various heuristic algorithms to USV path planning, such as the A* algorithm[1] and the Dijkstra algorithm. With continuous development, many intelligent optimization algorithms have gradually been applied to the problem of path planning, such as the Ant Colony Optimization (ACO)[2] algorithm, Particle Swarm Optimization (PSO), Genetic Algorithm (GA)[3], Rapidly-exploring Random Tree (RRT) algorithm[4], Velocity Obstacle method (VO)[5], and Dynamic Window Approach (DWA). These algorithms derive feasible paths through specific operational strategies. However, during path planning, they often encounter issues such as falling into local optima or planning paths that are too close to obstacles, resulting in suboptimal solutions. Therefore, many researchers have improved various algorithms, such as the improved RRT algorithm[6], which introduces adaptive step size and target attraction mechanisms, allowing the USV to adaptively adjust its step size based on different waters and to adjust its direction of movement accordingly. The improved DWA[7] introduces the concept of obstacle search angle, enhancing the USV's obstacle avoidance capabilities in different scenarios.
+
+To fully utilize the advantages of various algorithms, scholars have combined different algorithms. For example, the combination of the PSO and Artificial Potential Field (APF) method [8] first plans a global path using the improved PSO, and the improved APF method is used for local path planning when dynamic obstacles are detected during navigation, which effectively reduces the collision risk. The combination of the GA and the ACO[9] uses the solution from the ACO as the initial population for the GA, thereby accelerating the convergence speed. However, most existing algorithms have not sufficiently integrated safety and economy, leading to paths that may not align with maritime practice.
+
+The work was supported by the National Natural Science Foundation of China (No. 51939001, No. 62371085) and Fundamental Research Funds for the Central Universities (No.3132023514).
+
+*Corresponding author: Jun Ning.
+
+To address the various issues associated with the aforementioned algorithms, this paper proposes an Improved Differential Evolution algorithm (I-DE) and integrates it with the Collision Risk Index (CRI). Simulation experiments demonstrate that the I-DE, compared with the other three algorithms, can more effectively avoid collisions with target ships and obstacles while reducing deviation distance, ensuring both safety and economy. The primary contributions of this paper are outlined as follows:
+
+(1) The crossover factor $\mathrm{{CR}}$ in the Differential Evolution algorithm (DE) is adaptively improved, enhancing population diversity while maintaining the relative independence of individuals. This allows the algorithm to search the solution space appropriately according to the different iteration stages.
+
+(2) The CRI is integrated with the DE, and a restriction factor is added when selecting the degree of membership for the Distance to the Closest Point of Approach (DCPA). This makes the calculation of collision risk more aligned with maritime practice.
+
+§ II. SYSTEM MODEL
+
+§ A. DIFFERENTIAL EVOLUTION ALGORITHM
+
+Differential Evolution (DE) [10] is an algorithm for solving continuous optimization problems. It primarily involves five steps: population initialization, fitness evaluation, differential mutation, crossover operation, and selection of new individuals.
+
+1) Population initialization: Initially, a population of size $M$ is formed by randomly generating $M$ individuals, where each individual is an n-dimensional vector. The size of the population affects the search capabilities of the algorithm and the use of computational resources. Generally, a larger population enhances the algorithm's global search capability but also increases computational cost.
+
+$$
+{X}_{i}\left( 0\right) = \left( {{x}_{i,1}\left( 0\right) ,{x}_{i,2}\left( 0\right) ,{x}_{i,3}\left( 0\right) ,\ldots ,{x}_{i,n}\left( 0\right) }\right) \tag{1}
+$$
+
+$$
+{X}_{i,j}\left( 0\right) = {X}_{i\min } + \operatorname{rand}\left( {0,1}\right) \left( {{X}_{i\max } - {X}_{i\min }}\right) \tag{2}
+$$
+
+$$
+i = 1,2,3,\ldots ,M,j = 1,2,3,\ldots n \tag{3}
+$$
+
+Here, ${X}_{i}\left( 0\right)$ denotes an individual, ${X}_{i,j}\left( 0\right)$ denotes the $\mathrm{j}$ -th dimensional vector of the individual, with ${X}_{i\min }$ and ${X}_{i\max }$ specifying the respective lower and upper bounds of this vector.
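As a concrete illustration, the initialization of Eqs. (1)-(3) can be sketched in Python. This is a hypothetical helper (not from the paper), assuming NumPy and bounds that may be scalars or per-dimension arrays:

```python
import numpy as np

def init_population(M, n, x_min, x_max, rng=None):
    """Randomly initialize M individuals of dimension n within [x_min, x_max] (Eq. 2)."""
    rng = np.random.default_rng(rng)
    x_min = np.asarray(x_min, dtype=float)
    x_max = np.asarray(x_max, dtype=float)
    # X_{i,j}(0) = X_min + rand(0,1) * (X_max - X_min), broadcast over all i and j
    return x_min + rng.random((M, n)) * (x_max - x_min)
```

With the experimental settings of Section IV (M = 40, n = 5), `init_population(40, 5, 0.0, 10.0)` returns a 40 x 5 matrix of candidate path points.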
+
+2) Fitness evaluation: When calculating the fitness of the population individuals (the objective function value), it is necessary to define the objective function based on the specific problem. By designing appropriate objective functions, the algorithm can adapt to various optimization needs and complex problem environments, demonstrating high flexibility and adaptability. In this paper, the fitness is employed to assess the quality of the path points.
+
+3) Differential mutation: Below are descriptions of several mutation strategies that have been extensively researched: DE/rand/1:
+
+$$
+{V}_{i}\left( G\right) = {X}_{r1}\left( G\right) + F \times \left( {{X}_{r2}\left( G\right) - {X}_{r3}\left( G\right) }\right) \tag{4}
+$$
+
+DE/best/1:
+
+$$
+{V}_{i}\left( G\right) = {X}_{\text{ best }}\left( G\right) + F \times \left( {{X}_{r1}\left( G\right) - {X}_{r2}\left( G\right) }\right) \tag{5}
+$$
+
+Using DE/rand/1 as an illustration, ${X}_{r1}\left( G\right)$ , ${X}_{r2}\left( G\right)$ , and ${X}_{r3}\left( G\right)$ are three distinct vectors randomly selected from the parent generation, with ${r1} \neq {r2} \neq {r3} \neq i \in \{ 1,2,3,\ldots ,M\}$ . $F$ is the scaling factor, ranging from 0 to 2 and typically set to 0.5. ${V}_{i}\left( G\right)$ is the new vector generated by the mutation strategy. Different mutation strategies have different population optimization abilities. To capture the common structure of the various mutation strategies, Feoktistov summarized them in the general form ${V}_{i} = {\beta }_{i} + F \times {\delta }_{i}$ , where ${\beta }_{i}$ serves as the base vector and ${\delta }_{i}$ acts as the differential vector.
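The two mutation strategies above can be sketched as follows. This is a minimal illustration (not the paper's implementation), assuming a minimization problem so that the lowest fitness identifies the best individual:

```python
import numpy as np

def mutate_rand1(pop, i, F=0.5, rng=None):
    """DE/rand/1 (Eq. 4): V_i = X_r1 + F * (X_r2 - X_r3), with r1, r2, r3, i distinct."""
    rng = np.random.default_rng(rng)
    candidates = [k for k in range(len(pop)) if k != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def mutate_best1(pop, fitness, i, F=0.5, rng=None):
    """DE/best/1 (Eq. 5): V_i = X_best + F * (X_r1 - X_r2)."""
    rng = np.random.default_rng(rng)
    best = int(np.argmin(fitness))          # minimization: lowest fitness is best
    candidates = [k for k in range(len(pop)) if k != i]
    r1, r2 = rng.choice(candidates, size=2, replace=False)
    return pop[best] + F * (pop[r1] - pop[r2])
```

Both fit Feoktistov's general form: the base vector is random in DE/rand/1 and the current best in DE/best/1.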
+
+4) Crossover operation:
+
+$$
+{U}_{i,j}\left( G\right) = \left\{ \begin{array}{ll} {V}_{i,j}\left( G\right) , & \text{ rand }\lbrack 0,1) < {CR}\text{ or }j = \text{ jrand } \\ {X}_{i,j}\left( G\right) , & \text{ otherwise } \end{array}\right. \tag{6}
+$$
+
+The crossover factor ${CR}$ ranges from 0 to 1. $j$ is the current vector's dimension index, and ${jrand}$ is a dimension randomly selected from the range 1 to $n$ . The condition $j = {jrand}$ guarantees that at least one dimension of the new individual comes from the mutant individual, so the trial vector cannot be identical to the initial individual. The crossover process is illustrated in Fig. 1.
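Eq. (6) amounts to a per-dimension coin flip with one forced dimension; a minimal sketch (hypothetical helper, 0-based indices):

```python
import numpy as np

def crossover(x, v, CR, rng=None):
    """Binomial crossover (Eq. 6): take v_j with probability CR, forcing dimension jrand."""
    rng = np.random.default_rng(rng)
    n = len(x)
    jrand = rng.integers(n)          # guaranteed dimension taken from the mutant
    mask = rng.random(n) < CR
    mask[jrand] = True
    return np.where(mask, v, x)
```

With CR = 0 exactly one dimension comes from the mutant; with CR = 1 the trial vector equals the mutant.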
+
+5) Selection of new individuals: The selection operation evaluates the fitness values of individuals to steer the population toward a better direction. The direction of population evolution is determined by the following formula:
+
+$$
+{X}_{i}\left( {G + 1}\right) = \left\{ \begin{array}{ll} {U}_{i}\left( G\right) , & f\left( {{U}_{i}\left( G\right) }\right) \leq f\left( {{X}_{i}\left( G\right) }\right) \\ {X}_{i}\left( G\right) , & \text{ otherwise } \end{array}\right. \tag{7}
+$$
+
+Here, $f\left( {{U}_{i}\left( G\right) }\right)$ and $f\left( {{X}_{i}\left( G\right) }\right)$ are the fitness of the new individual and the initial individual, respectively.
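Putting the five steps together, a minimal DE/rand/1/bin loop might look like the sketch below. This illustrates the generic algorithm of Eqs. (1)-(7), not the I-DE of Section III, and assumes a minimization objective with box bounds:

```python
import numpy as np

def de_minimize(f, bounds, M=40, K=50, F=0.5, CR=0.9, rng=0):
    """Minimal DE/rand/1/bin loop combining Eqs. (1)-(7)."""
    rng = np.random.default_rng(rng)
    lo, hi = map(np.asarray, bounds)
    n = lo.size
    pop = lo + rng.random((M, n)) * (hi - lo)            # Eq. (2): initialization
    fit = np.array([f(x) for x in pop])                  # fitness evaluation
    for _ in range(K):
        for i in range(M):
            r1, r2, r3 = rng.choice([k for k in range(M) if k != i], 3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])        # Eq. (4): DE/rand/1 mutation
            jrand = rng.integers(n)
            mask = rng.random(n) < CR
            mask[jrand] = True
            u = np.clip(np.where(mask, v, pop[i]), lo, hi)  # Eq. (6): crossover
            fu = f(u)
            if fu <= fit[i]:                             # Eq. (7): greedy selection
                pop[i], fit[i] = u, fu
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```

On a 3-dimensional sphere function with the experimental settings M = 40, K = 50, this loop converges close to the origin.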
+
+§ B. SHIP ENCOUNTER SITUATIONS AND RESPONSIBILITY ALLOCATION.
+
+In areas with good visibility, collision avoidance behavior should comply with Rules 8, 13, 14, and 15 of the International Regulations for Preventing Collisions at Sea (COLREGs). Rule 8 explicitly stipulates the actions to be taken to avoid collisions, while Rules 13 to 15 define the different encounter situations: overtaking, head-on, and crossing encounters. Therefore, this paper incorporates the COLREGs and fully considers the implications of the ship encounter situations on collision avoidance behavior. Based on the course angles and positions of the two ships, the encounter between ships is classified into four scenarios. The classification is detailed in Table I.
+
+When encountering another ship head-on, both ships share equal responsibility to give way. In overtaking situations, the overtaking ship has the responsibility to give way, while the ship being overtaken should preserve its original state. In a left-crossing scenario, the own ship should preserve the original state with the other ship bearing the responsibility to give way. Conversely, in a right crossing situation, the own ship has the duty to give way, while the other ship should preserve the original state.
+
+
+Fig. 1 Crossover operation
+
+TABLE I
+
+SHIP ENCOUNTER SITUATION CLASSIFICATION
+
+| True bearing of TS to OS /° | Course difference ${\Delta C}$ /° | Encounter |
+| --- | --- | --- |
+| ${354} \leq {\theta }_{r} \leq 6$ | ${174} \leq {\Delta C} \leq {186}$ | Head-on |
+| ${247.5} \leq {\theta }_{r} < {354}$ | ${67.5} \leq {\Delta C} < {174}$ | Left-Crossing |
+| $6 < {\theta }_{r} \leq {112.5}$ | ${186} < {\Delta C} \leq {292.5}$ | Right-Crossing |
+| ${112.5} < {\theta }_{r} < {247.5}$ | ${\Delta C} < {67.5}$ or ${\Delta C} > {292.5}$ | Overtaking |
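The classification in Table I can be expressed directly as a rule lookup. This is a sketch (hypothetical helper); the head-on bearing interval 354°-6° is treated as wrapping through 0°:

```python
def classify_encounter(theta_r, delta_c):
    """Classify the encounter (Table I) from the true bearing of TS to OS
    (theta_r, degrees in [0, 360)) and the course difference delta_c (degrees)."""
    if (theta_r >= 354 or theta_r <= 6) and 174 <= delta_c <= 186:
        return "Head-on"
    if 247.5 <= theta_r < 354 and 67.5 <= delta_c < 174:
        return "Left-Crossing"
    if 6 < theta_r <= 112.5 and 186 < delta_c <= 292.5:
        return "Right-Crossing"
    if 112.5 < theta_r < 247.5 and (delta_c < 67.5 or delta_c > 292.5):
        return "Overtaking"
    return "Unclassified"
```

For example, a target ship bearing 0° with a 180° course difference is a head-on encounter, while one abaft the beam (e.g. 180°) with a small course difference is an overtaking situation.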
+
+§ III. ALGORITHM IMPROVEMENTS
+
+This section introduces the I-DE. Firstly, the crossover factor CR in the crossover operation is adaptively improved[11]. Concurrently, enhancements are made to the traditional CRI model by incorporating a restriction factor when selecting the membership function for DCPA, thus aligning the calculation of CRI more closely with maritime practices. Finally, the COLREGs and CRI are incorporated into the fitness function evaluation, forming an evaluation set based on safety, economy, compliance with COLREGs, and optimal collision avoidance timing. The safety factor is determined by the CRI.
+
+§ A. ADAPTIVE CROSS-FACTOR ${CR}$
+
+The crossover factor CR determines the likelihood of each dimension of an individual being altered. A larger CR value facilitates the more effective transfer of information from the mutant individual to the initial individual, while a smaller CR value, although reducing the transfer of information, enhances the independence between individuals. Therefore, an adaptive CR mechanism is proposed to balance the above two effects, with the following improvements:
+
+$$
+{C}_{{R}_{n}} = \left\{ \begin{array}{ll} {C}_{{R}_{1}}, & f\left( {x}_{n}^{G}\right) > f\left( {x}_{\text{ avg }}^{G}\right) \\ {C}_{{R}_{0}} + \frac{\left( {{C}_{{R}_{1}} - {C}_{{R}_{0}}}\right) \left( {f\left( {x}_{\text{ avg }}^{G}\right) - f\left( {x}_{n}^{G}\right) }\right) }{f\left( {x}_{\text{ avg }}^{G}\right) - f\left( {x}_{\text{ min }}^{G}\right) }, & f\left( {x}_{n}^{G}\right) \leq f\left( {x}_{\text{ avg }}^{G}\right) \end{array}\right. \tag{8}
+$$
+
+$f\left( {x}_{n}^{G}\right)$ and $f\left( {x}_{\text{ avg }}^{G}\right)$ denote the fitness of the n-th individual and the average fitness of all individuals, respectively. $f\left( {x}_{\min }^{G}\right)$ denotes the lowest fitness across all individuals.
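A direct reading of Eq. (8) can be sketched as follows. This is an illustrative helper, assuming minimization (lower fitness is better), the additive form ${C}_{{R}_{0}} + ({C}_{{R}_{1}} - {C}_{{R}_{0}})(\cdot)$, and hypothetical defaults ${C}_{{R}_{0}} = 0.1$, ${C}_{{R}_{1}} = 0.9$ that are not specified in the text:

```python
def adaptive_cr(f_n, f_avg, f_min, CR0=0.1, CR1=0.9):
    """Adaptive crossover factor (Eq. 8): worse-than-average individuals get the
    large CR1; the rest interpolate between CR0 (average) and CR1 (best)."""
    if f_n > f_avg:                    # worse than average: transfer more mutant information
        return CR1
    if f_avg == f_min:                 # degenerate population: all fitness values equal
        return CR0
    return CR0 + (CR1 - CR0) * (f_avg - f_n) / (f_avg - f_min)
```

The factor thus adapts per individual and per generation, balancing information transfer against individual independence as described above.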
+
+§ B. COLLISION RISK INDEX (CRI)
+
+The CRI[12] is a fuzzy index used to assess collision risk, representing the likelihood of a collision occurring between ships. It is affected by external factors like the speed and course of the ship, along with the subjective factors of the operator. This paper constructs a collision risk model using three factors: DCPA, TCPA, and the inter-ship distance D. Additionally, a restriction factor is added when selecting the membership function of DCPA to improve the rationality of the selection. The set of factors for the CRI is established as follows:
+
+$$
+U = \{ {DCPA},{TCPA},D\} \tag{9}
+$$
+
+Define the membership functions for each factor:
+
+1) Membership function of DCPA
+
+Take the own ship (OS)'s position as the origin to establish a spatial right-angled coordinate system. The OS's coordinates are $\left( {{x}_{O},{y}_{O}}\right)$ , with speed ${v}_{O}$ and heading ${\varphi }_{O}$ ; similarly, the target ship (TS)'s position, speed, and heading are $\left( {{x}_{T},{y}_{T}}\right)$ , ${v}_{T}$ , and ${\varphi }_{T}$ , respectively. The true bearings of OS to the TS and of the TS to OS are ${a}_{OT}$ and ${a}_{TO}$ , respectively. The relative speed between the two ships is ${v}_{R}$ .
+
+In previous studies, the selection of the membership function for DCPA only considered the safe distance of approach (SDA) ${r}_{1}$ and the absolute safe distance of approach ${r}_{2}$ , without considering whether the ship domains of OS and the TS were infringed upon. Fig. 2 illustrates various situations where the ship domains of OS and the TS are infringed upon. Therefore, the membership function for DCPA is improved to address this issue.
+
+Establish a coordinate system with the TS as the origin, the direction of the bow as the positive y-axis, and the direction perpendicular to the bow to the right as the positive $\mathrm{x}$ -axis. Perform a coordinate transformation for the position of OS:
+
+$$
+{x}_{O1} = D\sin {\beta }_{0},{y}_{O1} = D\cos {\beta }_{0},{\beta }_{0} = {a}_{OT} - {\varphi }_{T} + {\gamma }_{1}\text{ , } \tag{10}
+$$
+
+$$
+{\gamma }_{1} = \left\{ \begin{array}{ll} {360}, & {a}_{OT} - {\varphi }_{T} \leq 0 \\ 0, & {a}_{OT} - {\varphi }_{T} > 0 \end{array}\right. \tag{11}
+$$
+
+Based on the transformed coordinates $\left( {{x}_{O1},{y}_{O1}}\right)$ , the relative motion line equation of OS relative to the TS is obtained:
+
+$$
+y = \cot \left( {{\varphi }_{R} - {\varphi }_{T}}\right) x + \left( {{y}_{O1} - {x}_{O1}\cot \left( {{\varphi }_{R} - {\varphi }_{T}}\right) }\right) \tag{12}
+$$
+
+$$
+{\varphi }_{R} = \left\{ \begin{array}{l} \arctan \frac{{v}_{OTx}}{{v}_{OTy}} + \theta \;\text{ otherwise } \\ {90}\;{v}_{OTx} \geq 0,{v}_{OTy} = 0 \\ {270}\;{v}_{OTx} < 0,{v}_{OTy} = 0 \end{array}\right. \tag{13}
+$$
+
+$$
+\theta = \left\{ \begin{array}{ll} 0 & {v}_{OTx} \geq 0,{v}_{OTy} > 0 \\ {180} & {v}_{OTx} \geq 0,{v}_{OTy} < 0\text{ or }{v}_{OTx} < 0,{v}_{OTy} < 0 \\ {360} & {v}_{OTx} < 0,{v}_{OTy} > 0 \end{array}\right. \tag{14}
+$$
+
+Due to the change in the coordinate system, the relative motion line equation also needs to be transformed:
+
+$$
+x = \cot \left( {{\varphi }_{R} - {\varphi }_{T}}\right) y + \left( {{y}_{O1} - {x}_{O1}\cot \left( {{\varphi }_{R} - {\varphi }_{T}}\right) }\right) \tag{15}
+$$
+
+
+Fig. 2 Various situations of OS and the TS's domains being intruded upon: (a) the TS does not intrude into OS's domain, but OS intrudes into the TS's domain; (b) OS does not intrude into the TS's domain, but the TS intrudes into OS's domain; (c) both ships intrude into each other's domains.
+
+As shown in Fig. 3, when OS intrudes into the TS's domain, the relative motion line of OS to the TS intersects the boundary of the TS's domain. Therefore, by checking for the existence of such an intersection point, one can determine whether OS has intruded into the TS's domain.
+
+Similarly, by checking whether the relative motion line of the TS to OS intersects the boundary of OS's domain, one can determine whether the TS has intruded into OS's domain. The improved membership function is as follows:
+
+$$
+{k}_{DCPA} = \left\{ \begin{array}{ll} 1, & {DCPA} < {r}_{1}\text{ or }{p}_{1} > 0\text{ or }{p}_{2} > 0 \\ \frac{1}{2} - \frac{1}{2}\sin \left( {\frac{{180}^{ \circ }}{{r}_{2} - {r}_{1}}\left( {{DCPA} - \frac{{r}_{2} + {r}_{1}}{2}}\right) }\right) , & {r}_{1} < {DCPA} < {r}_{2} \\ 0, & {DCPA} \geq {r}_{2} \end{array}\right. \tag{16}
+$$
+
+Here, ${p}_{1}$ is the number of intersection points between OS's relative motion line to the TS and the boundary of the TS's domain, and ${p}_{2}$ is the number of intersection points between the TS's relative motion line to OS and the boundary of OS's domain.
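The improved membership function of Eq. (16) can be sketched as below. This is an illustrative helper that treats any domain intrusion (${p}_{1} > 0$ or ${p}_{2} > 0$) as full risk, with the 180° factor expressed in radians:

```python
import math

def k_dcpa(dcpa, r1, r2, p1=0, p2=0):
    """Improved DCPA membership (Eq. 16). p1/p2 count intersections between each
    ship's relative motion line and the other ship's domain boundary; r1 is the
    safe distance of approach, r2 the absolute safe distance of approach."""
    if dcpa < r1 or p1 > 0 or p2 > 0:
        return 1.0
    if dcpa >= r2:
        return 0.0
    # 180 degrees == pi radians; membership falls from 1 to 0 across (r1, r2)
    return 0.5 - 0.5 * math.sin(math.pi / (r2 - r1) * (dcpa - (r2 + r1) / 2))
```

At the midpoint of the transition band the membership is exactly 0.5, and an intersection count overrides the distance-based value.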
+
+2) Membership function of TCPA
+
+$$
+{k}_{TCPA} = \left\{ \begin{array}{ll} 1, & {TCPA} \leq {T}_{1} \\ {\left( \frac{{T}_{2} - {TCPA}}{{T}_{2} - {T}_{1}}\right) }^{2}, & {T}_{1} < {TCPA} \leq {T}_{2} \\ 0, & {TCPA} > {T}_{2},{DCPA} > {d}_{4} \end{array}\right. \tag{17}
+$$
+
+$$
+{T}_{1} = \left\{ \begin{array}{ll} \frac{\sqrt{{d}_{3}^{2} - {DCP}{A}^{2}}}{{v}_{R}}, & {DCPA} \leq {d}_{3} \\ \frac{{DCPA} - {d}_{3}}{{v}_{R}}, & {DCPA} > {d}_{3} \end{array}\right. \tag{18}
+$$
+
+$$
+{T}_{2} = \frac{\sqrt{{d}_{4}^{2} - {DCP}{A}^{2}}}{{v}_{R}} \tag{19}
+$$
+
+Here, ${d}_{3}$ represents the latest avoidance distance for the burdened ship, and ${d}_{4}$ represents the distance within which the ship is still capable of taking evasive action.
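Eqs. (17)-(19) can be sketched as follows (an illustrative helper; the branch for ${DCPA} > {d}_{4}$ is handled first, since ${T}_{2}$ in Eq. (19) is only defined for ${DCPA} \leq {d}_{4}$):

```python
import math

def k_tcpa(tcpa, dcpa, d3, d4, v_r):
    """TCPA membership (Eqs. 17-19); d3 is the latest avoidance distance,
    d4 the distance within which evasive action is still possible."""
    if dcpa > d4:                               # Eq. (17), last branch: no risk assigned
        return 0.0
    # Eq. (18): boundary T1 derived from DCPA and the relative speed v_r
    if dcpa <= d3:
        t1 = math.sqrt(d3**2 - dcpa**2) / v_r
    else:
        t1 = (dcpa - d3) / v_r
    t2 = math.sqrt(d4**2 - dcpa**2) / v_r       # Eq. (19)
    if tcpa <= t1:
        return 1.0
    if tcpa <= t2:
        return ((t2 - tcpa) / (t2 - t1)) ** 2
    return 0.0
```

For instance, with ${d}_{3} = 1$, ${d}_{4} = 2$, ${v}_{R} = 1$ and ${DCPA} = 0$, the membership is 1 up to TCPA = 1, decays quadratically to 0 at TCPA = 2, and is 0 afterwards.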
+
+3) Membership function of the distance between two ships (D)
+
+$$
+{k}_{D} = \left\{ \begin{matrix} 1, & 0 \leq D \leq {d}_{3} \\ {\left( \frac{{d}_{4} - D}{{d}_{4} - {d}_{3}}\right) }^{2}, & {d}_{3} < D \leq {d}_{4} \\ 0, & D > {d}_{4} \end{matrix}\right. \tag{20}
+$$
+
+
+Fig. 3 The relative motion lines of the two ships intersect at the boundary of the ship domain.
+
+Establish the weight set $W$ based on the importance of each factor in calculating the ${CRI}$ .
+
+$$
+W = \left\{ {{W}_{DCPA},{W}_{TCPA},{W}_{D}}\right\} \tag{21}
+$$
+
+where ${W}_{DCPA} > {W}_{TCPA} > {W}_{D}$ and ${W}_{DCPA} + {W}_{TCPA} + {W}_{D} = 1$ .
+
+$$
+{CRI} = W \times k = {W}_{DCPA}{k}_{DCPA} + {W}_{TCPA}{k}_{TCPA} + {W}_{D}{k}_{D} \tag{22}
+$$
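Combining the three memberships with the weight set of Eqs. (21)-(22) gives the CRI. The sketch below uses the experimental weights of Section IV (${W}_{DCPA} = 0.4$, ${W}_{TCPA} = 0.4$, ${W}_{D} = 0.2$) as illustrative defaults:

```python
def k_d(d, d3, d4):
    """Distance membership (Eq. 20)."""
    if d <= d3:
        return 1.0
    if d <= d4:
        return ((d4 - d) / (d4 - d3)) ** 2
    return 0.0

def cri(k_dcpa_val, k_tcpa_val, k_d_val, w=(0.4, 0.4, 0.2)):
    """Weighted collision risk index (Eq. 22); the weights must sum to 1."""
    w_dcpa, w_tcpa, w_d = w
    return w_dcpa * k_dcpa_val + w_tcpa * k_tcpa_val + w_d * k_d_val
```

When all three memberships equal 1 (imminent collision), the CRI is 1; it decays toward 0 as the geometry becomes safe.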
+
+§ C. FITNESS FUNCTION VALUE (FITNESS)
+
+Incorporate the COLREGs and the CRI into the evaluation of the fitness function, forming an evaluation set $\mathrm{F}$ based on factors of safety, economy, compliance with COLREGs, and optimal collision avoidance timing. The CRI determines the safety factor in the fitness function, while the voyage distance and the degree of turning together determine the economic factor in the fitness function.
+
+Constructing CRI-based objective function:
+
+$$
+{F}_{1} = {CRI} \tag{23}
+$$
+
+In path planning problems, the total voyage length determines the cost incurred during navigation and serves as an important economic assessment index. During navigation, $\left( {{x}_{i},{y}_{i}}\right)$ denotes the current point, $\left( {{x}_{i - 1},{y}_{i - 1}}\right)$ the previous point adjacent to it, $n$ the total number of path points, and $\left( {{x}_{n},{y}_{n}}\right)$ the destination point. Constructing the total voyage-based objective function:
+
+$$
+{F}_{2} = \frac{\sqrt{{\left( {x}_{i} - {x}_{i - 1}\right) }^{2} + {\left( {y}_{i} - {y}_{i - 1}\right) }^{2}}}{\sqrt{{\left( {x}_{n} - {x}_{i - 1}\right) }^{2} + {\left( {y}_{n} - {y}_{i - 1}\right) }^{2}}} + \frac{\sqrt{{\left( {x}_{n} - {x}_{i}\right) }^{2} + {\left( {y}_{n} - {y}_{i}\right) }^{2}}}{\sqrt{{\left( {x}_{n} - {x}_{i - 1}\right) }^{2} + {\left( {y}_{n} - {y}_{i - 1}\right) }^{2}}} \tag{24}
+$$
+
+Constructing degree of turning-based objective function:
+
+$$
+{F}_{3} = \arccos \left( \frac{\left( {{x}_{i} - {x}_{i - 1},{y}_{i} - {y}_{i - 1}}\right) \cdot {\left( {x}_{n} - {x}_{i},{y}_{n} - {y}_{i}\right) }^{T}}{\begin{Vmatrix}\left( {x}_{i} - {x}_{i - 1},{y}_{i} - {y}_{i - 1}\right) \end{Vmatrix}\begin{Vmatrix}\left( {x}_{n} - {x}_{i},{y}_{n} - {y}_{i}\right) \end{Vmatrix}}\right) \tag{25}
+$$
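The two economy objectives reduce to a normalized detour length (Eq. 24) and the turning angle between the incoming and outgoing legs (Eq. 25); a sketch for 2-D path points (hypothetical helpers):

```python
import math

def f2_voyage(p_prev, p_cur, p_dest):
    """Total-voyage objective (Eq. 24): detour through the current point,
    normalized by the direct distance from the previous point to the destination."""
    base = math.dist(p_prev, p_dest)
    return (math.dist(p_prev, p_cur) + math.dist(p_cur, p_dest)) / base

def f3_turning(p_prev, p_cur, p_dest):
    """Degree-of-turning objective (Eq. 25): angle (radians) between the
    incoming leg and the leg toward the destination."""
    ax, ay = p_cur[0] - p_prev[0], p_cur[1] - p_prev[1]
    bx, by = p_dest[0] - p_cur[0], p_dest[1] - p_cur[1]
    cos_t = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.acos(max(-1.0, min(1.0, cos_t)))  # clamp against rounding error
```

A point on the straight line gives the minimum values F2 = 1 and F3 = 0; any detour or turn increases them.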
+
+Based on the encounter situations and responsibility allocation described earlier, the objective function constructed according to the COLREGs is:
+
+$$
+{F}_{4} = \left\{ \begin{array}{ll} 1, & {0}^{ \circ } \leq {\theta }_{r} \leq {112.5}^{ \circ }\text{ or }{355}^{ \circ } \leq {\theta }_{r} \leq {360}^{ \circ } \\ 0, & \text{ otherwise } \end{array}\right. \tag{26}
+$$
+
+
+Fig. 4 Simulation results of a two-ship encounter
+
+TABLE II.
+
+SHIP PARAMETERS.
+
+| Parameter | OS | TS |
+| --- | --- | --- |
+| Length overall /m | 67.80 | 146.00 |
+| Beam /m | 16.00 | 36.00 |
+| Draft /m | 2.633 | 3.500 |
+| Displacement /t | 1850 | 6530.5 |
+| Water density /(t/m³) | 1.025 | 1.025 |
+
+TABLE III.
+
+INITIAL STATES OF THE EXPERIMENTAL OBJECTS IN TWO SHIP ENCOUNTER
+
+| Ship | Initial heading /° | Initial speed /kn | Distance from OS /n mile |
+| --- | --- | --- | --- |
+| OS | 225 | 12 | 0 |
+| TS | 0 | 12 | 3.51 |
+| Obstacle | none | none | 2.14 |
+
+The timing of collision avoidance depends on the CRI value. Therefore, based on previous research, an objective function for optimal collision avoidance timing is constructed with a threshold value of 0.3:
+
+$$
+{F}_{5} = \left\{ \begin{array}{ll} 1 & {CRI} \geq {0.3} \\ 0 & {CRI} < {0.3} \end{array}\right. \tag{27}
+$$
+
+$$
+\text{ Fitness } = {W}_{1}{F}_{1} + {W}_{2}{F}_{2} + {W}_{3}{F}_{3} + {W}_{4}{F}_{4} + {W}_{5}{F}_{5} \tag{28}
+$$
+
+where ${W}_{1},{W}_{2},{W}_{3},{W}_{4},{W}_{5}$ are the weights for safety, total voyage, degree of turning, COLREGs compliance, and optimal collision avoidance timing, respectively.
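Eqs. (27)-(28) combine into a single weighted score. The sketch below is illustrative, using the experimental weights ${W}_{1},\ldots ,{W}_{5}$ of Section IV as defaults:

```python
def f5_timing(cri_val, threshold=0.3):
    """Avoidance-timing objective (Eq. 27): active once CRI reaches the threshold."""
    return 1.0 if cri_val >= threshold else 0.0

def fitness(f1, f2, f3, f4, f5, w=(0.4, 0.2, 0.2, 0.1, 0.1)):
    """Weighted fitness (Eq. 28) over safety (F1 = CRI), total voyage (F2),
    degree of turning (F3), COLREGs compliance (F4) and timing (F5)."""
    return sum(wi * fi for wi, fi in zip(w, (f1, f2, f3, f4, f5)))
```

Since the weights sum to 1, a candidate path point that is worst in every factor scores 1, and the DE selection step of Eq. (7) keeps the lower-scoring candidate.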
+
+§ IV. EXPERIMENT
+
+This paper validates the effectiveness of the proposed Improved Differential Evolution (I-DE) algorithm through simulation experiments on the MATLAB platform. The algorithm is compared with the traditional DE, PSO, and GA, using safety and economy as the evaluation criteria.
+
+The simulation experiments are set in open waters with good visibility, ignoring external factors such as wind, waves, and currents. The experiments simulated scenarios involving two-ship and three-ship encounters, incorporating static obstacles in the two-ship encounters to more comprehensively assess the autonomous obstacle avoidance capability of the algorithm.
+
+
+Fig. 5 The states of the ships at specific time intervals in two ship encounter
+
+TABLE IV
+
+INITIAL STATES OF THE EXPERIMENTAL OBJECTS IN THREE SHIP ENCOUNTER
+
+| Ship | Initial heading /° | Initial speed /kn | Distance from OS /n mile |
+| --- | --- | --- | --- |
+| OS | 50 | 12 | 0 |
+| TS1 | 180 | 12 | 6.45 |
+| TS2 | 280 | 12 | 6.64 |
+
+TABLE V.
+
+VARIOUS DATA OF SIMULATION RESULTS IN TWO SHIP ENCOUNTER
+
+| Algorithm | I-DE | PSO | GA | DE |
+| --- | --- | --- | --- | --- |
+| Min. distance to obstacle /n mile | 0.499917 | 0.404968 | 0.421491 | 0.464535 |
+| Min. distance to TS /n mile | 1.241208 | 0.639673 | 1.327054 | 0.561472 |
+| Total deviation distance /n mile | 8.960443 | 9.832715 | 22.624526 | 10.548143 |
+| Runtime /s | 6.7743 | 5.1392 | 12.1593 | 5.407 |
+
+TABLE VI.
+
+VARIOUS DATA OF SIMULATION RESULTS IN THREE SHIP ENCOUNTER
+
+| Algorithm | I-DE | PSO | GA | DE |
+| --- | --- | --- | --- | --- |
+| Min. distance to TS1 /n mile | 2.409358 | 2.343583 | 2.593572 | 2.426915 |
+| Min. distance to TS2 /n mile | 1.010833 | 0.916672 | 1.176653 | 1.083573 |
+| Total deviation distance /n mile | 22.762838 | 47.657674 | 31.220230 | 28.404747 |
+| Runtime /s | 10.0148 | 8.352 | 18.726 | 9.7261 |
+
+
+Fig. 6 Simulation results of a three-ship encounter
+
+According to Rule 8 of the COLREGs, in open waters with ample space, changing course is the most effective measure to prevent close-range collisions, provided that the evasive maneuver is timely and effective and does not bring the two ships within the safe distance again. Therefore, to facilitate the study, and in keeping with the collision avoidance rules and the usual practices of seafarers, the USV, when acting as the give-way ship, avoids collision by changing course rather than by reducing speed or stopping. The parameters and initial states of the experimental objects are displayed in Tables II to IV.
+
+To ensure that the algorithms are tested under fair conditions, the parameters used in the experiments are uniformly set as follows: total number of iterations $K = {50}$ , initial population size $M = {40}$ , dimension of each individual $n = 5$ , ${W}_{DCPA} = {0.4}$ , ${W}_{TCPA} = {0.4}$ , ${W}_{D} = {0.2}$ , ${W}_{1} = {0.4}$ , ${W}_{2} = {0.2}$ , ${W}_{3} = {0.2}$ , ${W}_{4} = {0.1}$ , ${W}_{5} = {0.1}$ .
+
+
+Fig. 7 The states of the ships at specific time intervals in three ship encounter
+
+§ A. TWO-SHIP ENCOUNTERS
+
+The outcomes of the simulation are illustrated in Figs. 4 and 5. Fig. 4 displays the complete paths planned by the various algorithms, while Fig. 5 displays the states of the ships at specific time intervals, reflecting the real-time distance between OS and the TS and providing data support for evaluating safety. In these figures, the blue ship represents OS, the black ship represents the TS, and the black hexagons represent the static obstacles. The simulation results are detailed in Table V. By comparing the minimum distances (MD) between OS and the TS and between OS and the obstacle, the safety of the paths can be evaluated. The total deviation distance between the planned path and the initial path can be used to assess the economy of the path. The table also reports the runtime of each algorithm.
+
+As evidenced by the experimental data, the PSO algorithm and the traditional DE algorithm perform poorly in terms of navigational safety. The paths planned by these two algorithms result in OS maintaining a relatively close distance to the TS during navigation, posing a higher collision risk. In contrast, the paths planned by the I-DE algorithm and the GA maintain a greater distance between the ships, better ensuring navigational safety. By comparing the MD between OS and the obstacle, it is evident that the I-DE algorithm also performs best in avoiding collisions with an obstacle, presenting the lowest collision risk.
+
+As shown in Fig. 4, in the initial stage, there is a considerable distance between the OS and the obstacle, and no evasive action is needed. However, the path planned by the GA deviates from the original route at the outset. This premature maneuver increases the deviation distance of OS, thereby reducing the path's economic efficiency. In contrast, the paths planned by the I-DE algorithm, the PSO algorithm, and the traditional DE algorithm closely adhere to the original path when far from the obstacle. These algorithms start taking evasive actions at approximately 1 nautical mile from the obstacle, adjusting the course to avoid collisions and passing the obstacle from a greater distance. Once the obstacle is safely passed, the ship gradually returns to the original path and eventually reaches the target point. According to the data in Table V, the total deviation distance for the GA is the largest, while the other three algorithms have relatively smaller deviation distances, indicating better economic efficiency.
+
+§ B. THREE-SHIP ENCOUNTERS
+
+Similar to the two-ship encounter scenario, the simulation results are presented in Fig. 6 and Fig. 7. The experimental findings indicate that the I-DE algorithm, along with the other three algorithms, ensures that the ship safely navigates through encounters with target ships, successfully avoiding collisions and demonstrating good safety performance.
+
+However, there are significant differences in economic performance among the four algorithms. The PSO algorithm results in the ship deviating significantly from its original course after avoiding TS2, with the total deviation distance reported in Table VI being 47.65 nautical miles, which compromises the economic efficiency of the path. In contrast, the paths planned by the I-DE, DE, and GA algorithms show a tendency to approach the original course after the avoidance maneuver with TS2. As shown in Fig. 6, the I-DE algorithm enables the ship to smoothly return to its original course after avoidance. According to the data in Table VI, the total deviation distances for the I-DE, DE, and GA algorithms are 22.76 nautical miles, 28.40 nautical miles, and 31.22 nautical miles, respectively, indicating that the path planned by the I-DE algorithm exhibits better economic efficiency.
+
+Since the runtime is of lower importance in path evaluation and the differences in runtime among the algorithms are minimal, the impact of runtime is not considered. Considering both safety and economy, the I-DE algorithm demonstrates better performance in planning safe, economical paths, significantly outperforming the other algorithms. Therefore, the simulation experiments confirm the effectiveness of the improved differential evolution algorithm.
+
+§ V. CONCLUSION
+
+To address the problem of path planning and collision avoidance for USVs in open waters, this paper proposes an improved adaptive differential evolution algorithm. The algorithm adaptively adjusts the crossover factor (CR) in the crossover operation, enhancing population diversity while increasing the independence among individuals. Additionally, the CRI is incorporated into the fitness function evaluation, and a restriction factor is added when selecting the membership function of DCPA, making the calculated collision risk more consistent with maritime practice. Simulations were conducted on the MATLAB platform, comparing the I-DE algorithm with the PSO algorithm, the GA, and the traditional DE algorithm under two-ship and three-ship encounter scenarios. Safety is assessed by the minimum distances between the own ship and the obstacle and the target ships, and economy by the total deviation distance. The experimental outcomes indicate that the I-DE algorithm outperforms the others in terms of safety and economy, validating its effectiveness.
+
+In future research on path planning and collision avoidance for USVs, it is essential to fully consider the interference of external factors as well as the maneuverability of the USV. Furthermore, attention should be given to aspects such as communication delays, human factors in decision-making, and integration with existing maritime traffic management systems, thereby ensuring closer alignment with maritime practices.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/alucFTO60T/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/alucFTO60T/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..49f137514144b5352fb4012783305f2157abbef0
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/alucFTO60T/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,505 @@
+# ${H}_{\infty }$ State Feedback Controller Based on Dynamic Observer Design for Singular Fractional-Order Systems
+
+1st Minghui Wei
+
+Shenyang Aerospace University
+
+School of Automation
+
+Shenyang, China
+
+2271700918@qq.com
+
+2nd He Li
+
+Shenyang Aerospace University
+
+School of Automation
+
+Shenyang, China
+
+lihe_good@126.com
+
+3rd Shuo Liu
+
+Shenyang Aerospace University
+
+School of Automation
+
+Shenyang, China
+
+2922601793@qq.com
+
+Abstract-This paper focuses on the problem of ${H}_{\infty }$ state feedback controller design based on a dynamic observer for singular fractional-order systems (FOS), where the fractional derivative order $\alpha$ lies between 0 and 1. First, a new form of dynamic observer with a non-singular structure is proposed, which is easier to implement physically. Secondly, the bounded real lemma corresponding to the ${H}_{\infty }$ norm of FOS is formulated via a set of linear matrix inequalities (LMIs). Compared to existing methods, the lemma employs real variables, which are easier to solve for. Building upon the new lemma, conditions for designing an ${H}_{\infty }$ state feedback controller based on a dynamic observer for FOS are derived. Finally, a numerical example is presented to validate the effectiveness of the proposed method.
+
+Index Terms-singular fractional-order systems (singular FOS), dynamic observer, ${H}_{\infty }$ control, state feedback control
+
+## I. INTRODUCTION
+
+In the past decade, fractional-order calculus has garnered considerable attention from physicists and engineers [1]. It has been observed that many systems across various interdisciplinary fields can be effectively described using fractional derivatives because these derivatives capture the historical evolution of functions and demonstrate stronger global correlations compared to integer derivatives. Numerous systems exhibit fractional-order dynamics, including viscoelastic systems [2], dielectric polarization [3], electrode-electrolyte polarization [4], electromagnetic waves [5], quantitative finance [6], and the quantum evolution of complex systems [7].
+
+Singular systems, also known as generalized systems, encompass both differential and algebraic equations [8]. This model accounts for physical constraints, static relationships, and broader impulsive behaviors due to improper transfer matrices. In contrast to non-singular systems, singular FOS provide a more precise representation of the physical properties of systems, offering direct and comprehensive descriptions [9]. Since their introduction in many fields of system design and control, singular FOS have received considerable attention. They have diverse applications in electrical systems, large-scale interconnected networks, power grids, constrained mechanical systems, and chemical processes [10]-[12].
+
+In control system design, state feedback controllers are typically designed to meet specific performance criteria. However, access to all states of the considered system is often unavailable, and system output measurements cannot provide complete information about the internal system states, which poses challenges for the design of state feedback controllers [13]. Therefore, the theory of observer design has attracted widespread attention. Based on state estimates obtained from the observer, the observer-based controller generates control laws to stabilize unstable systems or ensure desired performance [14], [15]. Recently, research on observer-based control for FOS has developed. In [22], a novel observer-free synchronization method is introduced for a specific category of incommensurate fractional-order systems. [23] studied the robust ${H}_{\infty }$ observer control of linear time-invariant perturbed uncertain FOS. By analyzing the ${H}_{\infty }$ norm of the FOS and considering the fractional derivative order $\alpha$ , a new sufficient condition is proposed to ensure the stability of the estimation error system.
+
+${H}_{\infty }$ control plays a crucial role in control systems. The ${H}_{\infty }$ optimization used under the presence of disturbances with bounded energy allows guaranteeing levels of disturbance attenuation. However, it is typically limited to integer-order systems [17]. In recent years, there have been developments extending the computation of ${H}_{2}$ and ${H}_{\infty }$ norms to FOS. The ${H}_{2}$ norm of fractional transfer functions of implicit type is studied in [16]. [18] employs two methods based on a binary algorithm and LMI condition to compute the ${H}_{\infty }$ norm of FOS and determine the Hamiltonian matrix. Using the generalized Kalman-Yakubovich-Popov (KYP) lemma, the bounded real lemmas for the ${H}_{ - }$ norm and ${H}_{\infty }$ norm of FOS are derived via a series of LMIs in [19]. Based on these analysis results, numerous studies have focused on designing ${H}_{\infty }$ controllers and observers. [21] studied the finite time ${H}_{\infty }$ control problem of fractional order neural networks using finite time stability theory and Lyapunov sample function method. The ${H}_{\infty }$ control problem for singular FOS with order ranging from 0 to 1 is explored in [20].
+
+---
+
+This study was funded by National Natural Science Foundation of China (grant number 62003223).
+
+---
+
+In this work, the problem of designing an ${H}_{\infty }$ state feedback controller based on a dynamic observer for singular FOS is studied. The main contributions can be summarized as follows:
+
+- A dynamic observer is proposed. Compared with [26], the observer in this paper has a non-singular structure, making it easier to implement.
+
+- Novel necessary and sufficient conditions for the bounded real lemma corresponding to the ${H}_{\infty }$ norm of singular FOS with order $0 < \alpha < 1$ are proposed. Unlike previous approaches, such as [24] and [25], the matrix variable is real, which is easier to solve.
+
+- Based on the bounded real lemma, the conditions for designing the dynamic observer are given via a set of LMIs.
+
+Notations: In the subsequent sections of the paper, $A$ is a Hermitian matrix if and only if ${A}^{ * } = A$ , and $A > 0$ means that $A$ is positive definite. $\operatorname{Re}\left( Q\right)$ and $\operatorname{Im}\left( Q\right)$ represent the real and imaginary parts of the complex matrix $Q$ , respectively, and $\operatorname{Sym}\left( A\right) = A + {A}^{T}$ .
+
+Proposition 1. A complex Hermitian matrix $Q$ satisfies $Q < 0$ if and only if
+
+$$
+\left\lbrack \begin{matrix} \operatorname{Re}\left( Q\right) & \operatorname{Im}\left( Q\right) \\ - \operatorname{Im}\left( Q\right) & \operatorname{Re}\left( Q\right) \end{matrix}\right\rbrack < 0.
+$$
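Proposition 1 can be checked numerically: the sketch below (illustrative, not from the paper) builds a random Hermitian negative definite $Q$ and confirms that its real $2n \times 2n$ embedding is negative definite as well.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A random Hermitian negative definite Q: -(A A^*) - I is Hermitian and < 0.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q = -(A @ A.conj().T) - np.eye(n)

# Real embedding from Proposition 1: [[Re(Q), Im(Q)], [-Im(Q), Re(Q)]].
R = np.block([[Q.real, Q.imag],
              [-Q.imag, Q.real]])

# Q < 0 (all eigenvalues negative) holds iff the embedding is negative definite.
assert np.all(np.linalg.eigvalsh(Q) < 0)
assert np.all(np.linalg.eigvalsh(R) < 0)
```

This embedding is what allows complex LMI conditions to be handed to real-arithmetic solvers.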
+
+## II. Problem Statement and Preliminaries
+
+Consider the following singular FOS:
+
+$$
+\begin{cases} E{D}^{\alpha }x & = {Ax}\left( t\right) + {Bu}\left( t\right) + {B}_{w}w\left( t\right) , \\ z\left( t\right) & = {C}_{z}x\left( t\right) + {D}_{z}u\left( t\right) , \\ y\left( t\right) & = {Cx}\left( t\right) , \end{cases} \tag{1}
+$$
+
+in which $\alpha$ is the fractional order with $0 < \alpha < 1$ , $x \in {R}^{n}$ is the pseudo-state vector, $y \in {R}^{q}$ is the output vector, $z \in {R}^{r}$ is the control output, $u \in {R}^{m}$ is the control input, and $w \in {R}^{p}$ is the disturbance input. $A, B,{B}_{w}, C,{C}_{z}$ are constant matrices of appropriate dimensions, and $E \in {R}^{n \times n}$ is a singular matrix, i.e., $\operatorname{rank}\left( E\right) < n$ . ${D}^{\alpha }$ denotes the Caputo fractional derivative
+
+$$
+{D}^{\alpha }f\left( t\right) = \frac{1}{\Gamma \left( {m - \alpha }\right) }{\int }_{{t}_{0}}^{t}\frac{{f}^{\left( m\right) }\left( \tau \right) }{{\left( t - \tau \right) }^{\alpha + 1 - m}}{d\tau }. \tag{2}
+$$
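As a concrete instance of (2) with $0 < \alpha < 1$ (so $m = 1$) and $f\left( t\right) = t$ , the Caputo derivative evaluates in closed form to ${D}^{\alpha }t = {t}^{1 - \alpha }/\Gamma \left( {2 - \alpha }\right)$ , which a direct midpoint quadrature of the integral reproduces. A short Python sketch (the function name `caputo_t` is ours, purely illustrative):

```python
import math

def caputo_t(alpha, t, steps=200_000):
    """Midpoint quadrature of the Caputo integral (2) for f(tau) = tau,
    i.e. m = 1 and f'(tau) = 1; midpoints avoid the tau = t singularity."""
    h = t / steps
    total = sum((t - (k + 0.5) * h) ** (-alpha) * h for k in range(steps))
    return total / math.gamma(1.0 - alpha)

alpha, t = 0.5, 1.0
exact = t ** (1.0 - alpha) / math.gamma(2.0 - alpha)
assert abs(caputo_t(alpha, t) - exact) / exact < 1e-2
```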
+
+In this paper, it is assumed that $E$ and $C$ are such that
+
+$$
+\text{rank}\left\lbrack \begin{array}{l} E \\ C \end{array}\right\rbrack = n\text{.} \tag{3}
+$$
+
+Then, consider the following observer-based controller
+
+$$
+\begin{cases} {D}^{\alpha }z\left( t\right) = & {TA}\widehat{x}\left( t\right) + {TBu}\left( t\right) + {C}_{d}{x}_{d}\left( t\right) \\ & + {D}_{d}\left( {y\left( t\right) - \widehat{y}\left( t\right) }\right) , \\ {D}^{\alpha }{x}_{d}\left( t\right) = & {A}_{d}{x}_{d}\left( t\right) + {B}_{d}\left( {y\left( t\right) - \widehat{y}\left( t\right) }\right) , \\ \widehat{x}\left( t\right) = & z\left( t\right) + {Ny}\left( t\right) , \\ u\left( t\right) = & K\widehat{x}\left( t\right) , \end{cases} \tag{4}
+$$
+
+in which $\widehat{x}\left( t\right) \in {R}^{n}$ is the state estimation vector, ${x}_{d}\left( t\right) \in {R}^{n}$ is an auxiliary state, $T, N,{A}_{d},{B}_{d},{C}_{d},{D}_{d}, K$ are constant matrices of appropriate dimensions, and $T, N$ satisfy
+
+$$
+{TE} + {NC} = {I}_{n}. \tag{5}
+$$
+
+where ${I}_{n}$ represents the $n$-dimensional identity matrix.
+
+Define $e\left( t\right) = x\left( t\right) - \widehat{x}\left( t\right)$ and $\bar{x} = {\left\lbrack \begin{array}{lll} {x}^{T} & {e}^{T} & {x}_{d}^{T} \end{array}\right\rbrack }^{T}$ . Combining singular FOS (1) and controller (4), one obtains
+
+$$
+\begin{cases} \bar{E}{D}^{\alpha }\bar{x}\left( t\right) & = \bar{A}\bar{x}\left( t\right) + \bar{B}w\left( t\right) , \\ z\left( t\right) & = \bar{C}\bar{x}\left( t\right) , \end{cases} \tag{6}
+$$
+
+where,
+
+$$
+\bar{E} = \left\lbrack \begin{matrix} E & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{matrix}\right\rbrack ,\bar{A} = \left\lbrack \begin{matrix} A + {BK} & - {BK} & 0 \\ 0 & {TA} - {D}_{d}C & - {C}_{d} \\ 0 & {B}_{d}C & {A}_{d} \end{matrix}\right\rbrack ,
+$$
+
+$$
+\bar{B} = \left\lbrack \begin{matrix} {B}_{w} \\ T{B}_{w} \\ 0 \end{matrix}\right\rbrack ,\bar{C} = \left\lbrack \begin{array}{lll} {C}_{z} + {D}_{z}K & - {D}_{z}K & 0 \end{array}\right\rbrack .
+$$
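The augmented matrices of (6) can be assembled mechanically with `numpy.block`. The sketch below uses random placeholder matrices (for dimension checking only; none of the values come from the paper) to confirm that the closed-loop matrices have sizes ${3n} \times {3n}$ , ${3n} \times p$ and $r \times {3n}$ :

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, q, r, p = 3, 2, 2, 1, 1

# Placeholder system and gain matrices (random, dimensions as in (1) and (4)).
E, A = np.diag([1.0, 1.0, 0.0]), rng.standard_normal((n, n))
B, Bw = rng.standard_normal((n, m)), rng.standard_normal((n, p))
C = rng.standard_normal((q, n))
Cz, Dz = rng.standard_normal((r, n)), rng.standard_normal((r, m))
T, K = rng.standard_normal((n, n)), rng.standard_normal((m, n))
Ad, Bd = rng.standard_normal((n, n)), rng.standard_normal((n, q))
Cd, Dd = rng.standard_normal((n, n)), rng.standard_normal((n, q))
Zn, I = np.zeros((n, n)), np.eye(n)

Ebar = np.block([[E, Zn, Zn], [Zn, I, Zn], [Zn, Zn, I]])
Abar = np.block([[A + B @ K, -B @ K, Zn],
                 [Zn, T @ A - Dd @ C, -Cd],
                 [Zn, Bd @ C, Ad]])
Bbar = np.vstack([Bw, T @ Bw, np.zeros((n, p))])
Cbar = np.hstack([Cz + Dz @ K, -Dz @ K, np.zeros((r, n))])

assert Ebar.shape == Abar.shape == (3 * n, 3 * n)
assert Bbar.shape == (3 * n, p) and Cbar.shape == (r, 3 * n)
```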
+
+The transfer function of system (6) is
+
+$$
+G\left( s\right) = \bar{C}{\left( {s}^{\alpha }\bar{E} - \bar{A}\right) }^{-1}\bar{B}. \tag{7}
+$$
+
+The design problem of the ${H}_{\infty }$ state feedback controller based on a dynamic observer is to design a controller such that the closed-loop system (6) is admissible and its transfer function satisfies $\parallel G\left( s\right) {\parallel }_{\infty } < \gamma$ .
+
+Lemma 1. [27] Let $\gamma$ be a scalar such that $\gamma > 0$ . The singular FOS is admissible and satisfies the condition $\parallel G\left( s\right) {\parallel }_{\infty } < \gamma$ if there exists $\bar{E}P = {P}^{ * }{\bar{E}}^{T} \in {\mathbf{C}}^{n \times n} > 0$ such that
+
+$$
+\left\lbrack \begin{matrix} \operatorname{Sym}\left( {\bar{A}\left( {{rP} + \bar{r}\bar{P}}\right) }\right) & * & * \\ \bar{C}\left( {{rP} + \bar{r}\bar{P}}\right) & - I & * \\ {\bar{B}}^{T} & 0 & - {\gamma }^{2}I \end{matrix}\right\rbrack < 0, \tag{8}
+$$
+
+where $r = {e}^{j\theta },\theta = \frac{\pi }{2}\left( {1 - \alpha }\right)$ .
+
+Lemma 2. [28] Let $\gamma$ be a scalar such that $\gamma > 0$ . The following two statements are equivalent:
+
+(i) there exists $\bar{E}P = {P}^{ * }{\bar{E}}^{T} \in {\mathbf{C}}^{n \times n} > 0$ such that
+
+$$
+\left\lbrack \begin{matrix} \operatorname{Sym}\left( {\bar{A}\left( {{rP} + \bar{r}\bar{P}}\right) }\right) & * & * \\ \bar{C}\left( {{rP} + \bar{r}\bar{P}}\right) & - I & * \\ {\bar{B}}^{T} & 0 & - {\gamma }^{2}I \end{matrix}\right\rbrack < 0, \tag{9}
+$$
+
+where $r = {e}^{j\theta },\theta = \frac{\pi }{2}\left( {1 - \alpha }\right)$ .
+
+(ii) there exists matrix $M \in {\mathbf{R}}^{n \times n}$ such that
+
+$$
+\left\lbrack \begin{matrix} \operatorname{Sym}\left( {\bar{A}M}\right) & * & * \\ \bar{C}M & - I & * \\ {\bar{B}}^{T} & 0 & - {\gamma }^{2}I \end{matrix}\right\rbrack < 0, \tag{10}
+$$
+
+$$
+\left\lbrack \begin{matrix} \left( {\bar{E}M + {\left( \bar{E}M\right) }^{T}}\right) /a & * \\ \left( {\bar{E}M - {\left( \bar{E}M\right) }^{T}}\right) /b & \left( {\bar{E}M + {\left( \bar{E}M\right) }^{T}}\right) /a \end{matrix}\right\rbrack > 0, \tag{11}
+$$
+
+where $\theta = \frac{\pi }{2}\left( {1 - \alpha }\right) , a = 4\cos \theta , b = 4\sin \theta$ .
+
+Proof. Define ${Q}_{1} = \bar{E}{P}_{1} = \operatorname{Re}\left( Q\right)$ and ${Q}_{2} = \bar{E}{P}_{2} = \operatorname{Im}\left( Q\right)$ . According to Proposition 1, the condition $Q = \bar{E}{P}_{1} + \bar{E}{P}_{2}i > 0$ is equivalent to
+
+$$
+\left\lbrack \begin{matrix} \bar{E}{P}_{1} & \bar{E}{P}_{2} \\ - \bar{E}{P}_{2} & \bar{E}{P}_{1} \end{matrix}\right\rbrack > 0. \tag{12}
+$$
+
+Since $r = {e}^{j\theta } = \cos \theta + i\sin \theta$ and $\bar{r} = {e}^{-{j\theta }} = \cos \theta - i\sin \theta$ , it yields
+
+$$
+\begin{aligned} {rQ} + \bar{r}\bar{Q} & = \left( {\cos \theta + i\sin \theta }\right) \left( {\bar{E}{P}_{1} + \bar{E}{P}_{2}i}\right) + \left( {\cos \theta - i\sin \theta }\right) \left( {\bar{E}{P}_{1} - \bar{E}{P}_{2}i}\right) \\ & = 2\cos \theta \bar{E}{P}_{1} - 2\sin \theta \bar{E}{P}_{2}. \end{aligned} \tag{13}
+$$
+
+Note that $\bar{E}{P}_{1}$ is a real symmetric matrix, while $\bar{E}{P}_{2}$ is a skew-symmetric matrix. Therefore
+
+$$
+{\left( rQ + \bar{r}\bar{Q}\right) }^{T} = \left( {2\cos \theta \bar{E}{P}_{1} + 2\sin \theta \bar{E}{P}_{2}}\right) . \tag{14}
+$$
+
+Let $\widetilde{Q} = {rQ} + \bar{r}\bar{Q}$ and $M = {rP} + \bar{r}\bar{P}$ ; then $\widetilde{Q} = \bar{E}M$ , and (9) is equivalent to (10). From (13) and (14), we obtain
+
+$$
+\left\{ \begin{array}{l} \bar{E}{P}_{1} = \left( {\bar{E}M + {\left( \bar{E}M\right) }^{T}}\right) /\left( {4\cos \theta }\right) \\ \bar{E}{P}_{2} = \left( {{\left( \bar{E}M\right) }^{T} - \bar{E}M}\right) /\left( {4\sin \theta }\right) . \end{array}\right. \tag{15}
+$$
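Equation (15) is simply the symmetric/skew-symmetric splitting of $\bar{E}M$ : substituting it back into $2\cos \theta \bar{E}{P}_{1} - 2\sin \theta \bar{E}{P}_{2}$ from (13) recovers $\bar{E}M$ exactly. A quick numerical confirmation (illustrative sketch, random data):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
alpha = 1 / 3
theta = np.pi / 2 * (1 - alpha)

EM = rng.standard_normal((n, n))           # stands in for E_bar @ M

# Eq. (15): symmetric / skew parts of EM, scaled by 4cos(theta) and 4sin(theta).
EP1 = (EM + EM.T) / (4 * np.cos(theta))
EP2 = (EM.T - EM) / (4 * np.sin(theta))

# Eq. (13): rQ + conj(r)conj(Q) = 2cos(theta)EP1 - 2sin(theta)EP2 = EM.
recovered = 2 * np.cos(theta) * EP1 - 2 * np.sin(theta) * EP2
assert np.allclose(recovered, EM)
assert np.allclose(EP1, EP1.T) and np.allclose(EP2, -EP2.T)
```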
+
+Combining (15) and (12), it follows that (11) holds, which is equivalent to the condition $Q = \bar{E}{P}_{1} + \bar{E}{P}_{2}i > 0$ . This ends the proof.
+
+## III. Main Results
+
+In this section, the problem of designing observer based controller is transformed into an optimization problem of LMIs. The conditions for designing ${H}_{\infty }$ state feedback controller based on dynamic observer for singular FOS are provided in the form of LMIs.
+
+Theorem 1. Let $\gamma > 0$ . The closed-loop system (6) is admissible and satisfies $\parallel G\left( s\right) {\parallel }_{\infty } < \gamma$ if and only if there exist matrices $X, Y, H$ , $S, G, L$ and $W$ such that
+
+$$
+\left\lbrack \begin{matrix} {\Omega }_{11} & * & * & * & * \\ - {\left( BS\right) }^{T} & {\Omega }_{22} & * & * & * \\ {A}^{T} & {\Omega }_{32} & {\Omega }_{33} & * & * \\ {\Omega }_{41} & - {D}_{z}S & {C}_{z} & - {\gamma I} & * \\ {B}_{w}^{T} & {B}_{w}^{T}{T}^{T} & {B}_{w}^{T}{T}^{T}{Y}^{T} & 0 & - {\gamma I} \end{matrix}\right\rbrack < 0 \tag{16}
+$$
+
+$$
+\left\lbrack \begin{matrix} \left( {\mathbf{Q} + {\mathbf{Q}}^{T}}\right) /a & * \\ \left( {\mathbf{Q} - {\mathbf{Q}}^{T}}\right) /b & \left( {\mathbf{Q} + {\mathbf{Q}}^{T}}\right) /a \end{matrix}\right\rbrack > 0, \tag{17}
+$$
+
+where
+
+$$
+{\Omega }_{11} = \operatorname{Sym}\left( {{AX} + {BS}}\right) ,
+$$
+
+$$
+{\Omega }_{22} = \operatorname{Sym}\left( {{TAX} - H}\right) ,
+$$
+
+$$
+{\Omega }_{32} = {A}^{T}{T}^{T} - {C}^{T}{L}^{T} + W,
+$$
+
+$$
+{\Omega }_{33} = \operatorname{Sym}\left( {{YTA} + {GC}}\right) ,
+$$
+
+$$
+{\Omega }_{41} = {C}_{z}X + {D}_{z}S,
+$$
+
+$$
+\mathbf{Q} = \left\lbrack \begin{matrix} {EX} & 0 & E \\ 0 & X & I \\ 0 & Z & Y \end{matrix}\right\rbrack
+$$
+
+The controller gain $K$ and the observer matrices ${A}_{d},{B}_{d},{C}_{d},{D}_{d}$ are then given by
+
+$$
+K = S{X}^{-1},
+$$
+
+$$
+{D}_{d} = L
+$$
+
+$$
+{C}_{d} = \left( {H - {D}_{d}{CX}}\right) {U}^{-1},
+$$
+
+$$
+{B}_{d} = {V}^{-1}\left( {G + Y{D}_{d}}\right) ,
+$$
+
+$$
+{A}_{d} = {V}^{-1}\left( {W - {YTAX} + Y{D}_{d}{CX} - V{B}_{d}{CX} + Y{C}_{d}U}\right) {U}^{-1}, \tag{18}
+$$
+
+where $V$ and $U$ are invertible matrices satisfying
+
+$$
+Z = {YX} + {VU} \tag{19}
+$$
+
+Proof. According to Lemma 2, the singular FOS is admissible and satisfies $\parallel G\left( s\right) {\parallel }_{\infty } < \gamma$ if there exists a matrix $M \in {\mathbf{R}}^{n \times n}$ such that
+
+$$
+\left\lbrack \begin{matrix} \operatorname{Sym}\left( {\bar{A}M}\right) & * & * \\ \bar{C}M & - I & * \\ {\bar{B}}^{T} & 0 & - {\gamma }^{2}I \end{matrix}\right\rbrack < 0, \tag{20}
+$$
+
+$$
+\left\lbrack \begin{matrix} \left( {\bar{E}M + {\left( \bar{E}M\right) }^{T}}\right) /a & * \\ \left( {\bar{E}M - {\left( \bar{E}M\right) }^{T}}\right) /b & \left( {\bar{E}M + {\left( \bar{E}M\right) }^{T}}\right) /a \end{matrix}\right\rbrack > 0. \tag{21}
+$$
+
+Next, the target is to linearize condition (20) as described in [29]. First, partition $M$ and ${M}^{-1}$ as
+
+$$
+M = \left\lbrack \begin{matrix} X & 0 & {V}^{-T} \\ 0 & X & \left( {I - X{Y}^{T}}\right) {V}^{-T} \\ 0 & U & - U{Y}^{T}{V}^{-T} \end{matrix}\right\rbrack ,
+$$
+
+$$
+{M}^{-1} = \left\lbrack \begin{matrix} {X}^{-1} & - {X}^{-1} & {U}^{-1} \\ 0 & {Y}^{T} & \left( {I - {Y}^{T}X}\right) {U}^{-1} \\ 0 & {V}^{T} & - {V}^{T}X{U}^{-1} \end{matrix}\right\rbrack ,
+$$
+
+where $V$ and $U$ are invertible matrices satisfying $Z = {YX} + {VU}$ , and define
+
+$$
+F = \left\lbrack \begin{matrix} I & 0 & 0 \\ 0 & I & 0 \\ 0 & Y & V \end{matrix}\right\rbrack
+$$
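That the two block matrices above are indeed inverses of each other for any invertible $X, U, V$ and any $Y$ can be confirmed numerically (a sketch with random well-conditioned matrices, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
# Diagonal shifts keep the random matrices comfortably invertible.
X = rng.standard_normal((n, n)) + 3 * np.eye(n)
Y = rng.standard_normal((n, n))
U = rng.standard_normal((n, n)) + 3 * np.eye(n)
V = rng.standard_normal((n, n)) + 3 * np.eye(n)

Xi, Ui, VmT = np.linalg.inv(X), np.linalg.inv(U), np.linalg.inv(V).T
Zn, I = np.zeros((n, n)), np.eye(n)

# The partition of M and M^{-1} used in the proof of Theorem 1.
M = np.block([[X, Zn, VmT],
              [Zn, X, (I - X @ Y.T) @ VmT],
              [Zn, U, -U @ Y.T @ VmT]])
Minv = np.block([[Xi, -Xi, Ui],
                 [Zn, Y.T, (I - Y.T @ X) @ Ui],
                 [Zn, V.T, -V.T @ X @ Ui]])

assert np.allclose(M @ Minv, np.eye(3 * n))
```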
+
+Pre- and post-multiplying (20) by the block-diagonal matrix $\operatorname{diag}\{ F, I, I\}$ and its transpose, we obtain
+
+$$
+\left\lbrack \begin{matrix} {\widetilde{\Omega }}_{11} & * & * & * & * \\ - {\left( BKX\right) }^{T} & {\widetilde{\Omega }}_{22} & * & * & * \\ {A}^{T} & {\widetilde{\Omega }}_{32} & {\widetilde{\Omega }}_{33} & * & * \\ {\widetilde{\Omega }}_{41} & - {D}_{z}{KX} & {C}_{z} & - {\gamma I} & * \\ {B}_{w}^{T} & {B}_{w}^{T}{T}^{T} & {B}_{w}^{T}{T}^{T}{Y}^{T} & 0 & - {\gamma I} \end{matrix}\right\rbrack < 0, \tag{22}
+$$
+
+where
+
+$$
+{\widetilde{\Omega }}_{11} = \operatorname{Sym}\left( {{AX} + {BKX}}\right) ,
+$$
+
+$$
+{\widetilde{\Omega }}_{22} = \operatorname{Sym}\left( {{TAX} - {D}_{d}{CX} - {C}_{d}U}\right) ,
+$$
+
+$$
+{\widetilde{\Omega }}_{32} = {YTAX} - Y{D}_{d}{CX} + V{B}_{d}{CX} - Y{C}_{d}U + V{A}_{d}U + {\left( TA - {D}_{d}C\right) }^{T}, \tag{23}
+$$
+
+$$
+{\widetilde{\Omega }}_{33} = \operatorname{Sym}\left( {{YTA} - Y{D}_{d}C + V{B}_{d}C}\right) ,
+$$
+
+$$
+{\widetilde{\Omega }}_{41} = {C}_{z}X + {D}_{z}{KX}.
+$$
+
+Let
+
+$$
+S = {KX}, \tag{24}
+$$
+
+$$
+H = {D}_{d}{CX} + {C}_{d}U,
+$$
+
+$$
+W = {YTAX} - Y{D}_{d}{CX} + V{B}_{d}{CX} - Y{C}_{d}U + V{A}_{d}U,
+$$
+
+$$
+G = V{B}_{d} - Y{D}_{d}.
+$$
+
+Combining (24) and (22), it follows that (16) holds.
+
+Similarly, multiplying both sides of (21) by $F$ and ${F}^{T}$ and defining $Z = {YX} + {VU}$ yields (17). This concludes the proof.
+
+## IV. NUMERICAL EXAMPLE
+
+Consider the fractional-order electrical circuit depicted in Fig. 1 [30], which can be characterized as
+
+$$
+{e}_{1} = {L}_{1}\frac{{d}^{\alpha }{i}_{1}}{d{t}^{\alpha }} + {L}_{3}\frac{{d}^{\alpha }{i}_{3}}{d{t}^{\alpha }} + {R}_{1}{i}_{1} + {R}_{3}{i}_{3},
+$$
+
+$$
+{e}_{2} = {L}_{2}\frac{{d}^{\alpha }{i}_{2}}{d{t}^{\alpha }} + {L}_{3}\frac{{d}^{\alpha }{i}_{3}}{d{t}^{\alpha }} + {R}_{2}{i}_{2} + {R}_{3}{i}_{3},
+$$
+
+$$
+{i}_{3} = {i}_{1} + {i}_{2}.
+$$
+
+Let $x\left( t\right) = {\left\lbrack \begin{array}{lll} {i}_{1} & {i}_{2} & {i}_{3} \end{array}\right\rbrack }^{T}$ and $z\left( t\right) = {Cx}\left( t\right)$ . Then
+
+$$
+\left\lbrack \begin{matrix} {L}_{1} & 0 & {L}_{3} \\ 0 & {L}_{2} & {L}_{3} \\ 0 & 0 & 0 \end{matrix}\right\rbrack {D}^{\alpha }x\left( t\right) = \left\lbrack \begin{matrix} - {R}_{1} & 0 & - {R}_{3} \\ 0 & - {R}_{2} & - {R}_{3} \\ 1 & 1 & - 1 \end{matrix}\right\rbrack x\left( t\right) + \left\lbrack \begin{array}{ll} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{array}\right\rbrack u\left( t\right) + \left\lbrack \begin{array}{l} 1 \\ 1 \\ 1 \end{array}\right\rbrack w\left( t\right) ,
+$$
+
+$$
+z\left( t\right) = \left\lbrack \begin{array}{lll} 1 & 1 & 1 \end{array}\right\rbrack x\left( t\right) .
+$$
+
+Choose the parameters as follows:
+
+$$
+A = \left\lbrack \begin{matrix} - 4 & 0 & - 5 \\ 0 & - 3 & - 5 \\ 1 & 1 & - 1 \end{matrix}\right\rbrack , B = \left\lbrack \begin{array}{ll} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{array}\right\rbrack ,{B}_{w} = \left\lbrack \begin{array}{l} 1 \\ 1 \\ 1 \end{array}\right\rbrack ,
+$$
+
+$$
+C = \left\lbrack \begin{array}{lll} 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right\rbrack ,{C}_{z} = \left\lbrack \begin{array}{lll} 1 & 1 & 1 \end{array}\right\rbrack ,{D}_{z} = \left\lbrack \begin{array}{ll} 1 & 0 \end{array}\right\rbrack ,
+$$
+
+$$
+E = \left\lbrack \begin{array}{lll} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{array}\right\rbrack ,\alpha = 1/3. \tag{25}
+$$
+
+Then a solution to (5) is
+
+$$
+T = \left\lbrack \begin{matrix} 1 & - 1 & 2 \\ 0 & 1 & 3 \\ 0 & 1 & 3 \end{matrix}\right\rbrack , N = \left\lbrack \begin{matrix} 1 & 0 \\ 0 & 0 \\ - 1 & 1 \end{matrix}\right\rbrack .
+$$
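Both the rank assumption (3) and the constraint (5) can be verified for this example directly, e.g. with NumPy:

```python
import numpy as np

E = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=float)
C = np.array([[0, 1, 0], [0, 0, 1]], dtype=float)
T = np.array([[1, -1, 2], [0, 1, 3], [0, 1, 3]], dtype=float)
N = np.array([[1, 0], [0, 0], [-1, 1]], dtype=float)

# Assumption (3): rank([E; C]) = n = 3.
assert np.linalg.matrix_rank(np.vstack([E, C])) == 3

# Constraint (5): TE + NC = I_3.
assert np.allclose(T @ E + N @ C, np.eye(3))
```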
+
+According to Theorem 1, we obtain that
+
+$$
+{A}_{d} = \left\lbrack \begin{matrix} - {0.0540} & {0.8565} & - {0.0369} \\ - {0.6895} & - {0.8754} & - {0.6877} \\ {3.0035} & {3.2893} & {0.8686} \end{matrix}\right\rbrack ,
+$$
+
+$$
+{B}_{d} = \left\lbrack \begin{matrix} {0.0422} & {0.0644} \\ {0.9692} & - {0.1048} \\ - {1.9431} & - {0.8367} \end{matrix}\right\rbrack
+$$
+
+$$
+{C}_{d} = \left\lbrack \begin{matrix} - {0.1496} & {2.9450} & - {0.0947} \\ {2.7666} & {2.0096} & - {1.9358} \\ - {0.5109} & - {2.9046} & - {2.1334} \end{matrix}\right\rbrack
+$$
+
+$$
+{D}_{d} = \left\lbrack \begin{matrix} - {0.0288} & - {0.0662} \\ {0.3001} & {0.3680} \\ {0.7859} & {1.0324} \end{matrix}\right\rbrack
+$$
+
+$$
+K = \left\lbrack \begin{matrix} {0.3793} & - {0.2046} & {0.1052} \\ - {0.3196} & {0.6161} & {1.4726} \end{matrix}\right\rbrack .
+$$
+
+Fig. 2 shows the state vector and the state estimation vector. The maximum singular values of $G\left( s\right)$ are plotted in Fig. 3, peaking at approximately 0.4031. The state diagram of system (6) is illustrated in Fig. 4. This implies that the system is stable and admissible.
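As a cross-check of Fig. 3, the closed-loop transfer function (7) can be evaluated on a frequency grid with $s = j\omega$ . The sketch below (not from the paper) assembles $\bar{E},\bar{A},\bar{B},\bar{C}$ from the data in (25) and the gains above; since the printed gains are rounded to four decimals, the computed peak only approximates the reported value.

```python
import numpy as np

n, alpha = 3, 1 / 3
E = np.diag([1.0, 1.0, 0.0])
A = np.array([[-4.0, 0.0, -5.0], [0.0, -3.0, -5.0], [1.0, 1.0, -1.0]])
B = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
Bw = np.ones((3, 1))
C = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
Cz, Dz = np.array([[1.0, 1.0, 1.0]]), np.array([[1.0, 0.0]])
T = np.array([[1.0, -1.0, 2.0], [0.0, 1.0, 3.0], [0.0, 1.0, 3.0]])
K = np.array([[0.3793, -0.2046, 0.1052], [-0.3196, 0.6161, 1.4726]])
Ad = np.array([[-0.0540, 0.8565, -0.0369],
               [-0.6895, -0.8754, -0.6877],
               [3.0035, 3.2893, 0.8686]])
Bd = np.array([[0.0422, 0.0644], [0.9692, -0.1048], [-1.9431, -0.8367]])
Cd = np.array([[-0.1496, 2.9450, -0.0947],
               [2.7666, 2.0096, -1.9358],
               [-0.5109, -2.9046, -2.1334]])
Dd = np.array([[-0.0288, -0.0662], [0.3001, 0.3680], [0.7859, 1.0324]])

Zn, I = np.zeros((n, n)), np.eye(n)
Ebar = np.block([[E, Zn, Zn], [Zn, I, Zn], [Zn, Zn, I]])
Abar = np.block([[A + B @ K, -B @ K, Zn],
                 [Zn, T @ A - Dd @ C, -Cd],
                 [Zn, Bd @ C, Ad]])
Bbar = np.vstack([Bw, T @ Bw, np.zeros((n, 1))])
Cbar = np.hstack([Cz + Dz @ K, -Dz @ K, np.zeros((1, n))])

def sigma_max(w):
    """Largest singular value of G(jw) = Cbar ((jw)^alpha Ebar - Abar)^{-1} Bbar."""
    s_alpha = (1j * w) ** alpha
    G = Cbar @ np.linalg.solve(s_alpha * Ebar - Abar, Bbar)
    return np.linalg.svd(G, compute_uv=False)[0]

peak = max(sigma_max(w) for w in np.logspace(-3, 3, 400))
print(round(peak, 4))  # the paper reports a peak of roughly 0.4031
```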
+
+
+
+Fig. 1. The singular fractional-order electrical circuit.
+
+
+
+Fig. 2. State vector and state estimation vector.
+
+## V. CONCLUSION
+
+The problem of designing ${H}_{\infty }$ state feedback controller based on dynamic observer for singular FOS is investigated in this paper. Firstly, a non-singular ${H}_{\infty }$ state feedback controller based on dynamic observer is proposed. Additionally, a new bounded real lemma for the ${H}_{\infty }$ norm of FOS is presented, which forms the foundation for designing the dynamic observer. Then, conditions for designing ${H}_{\infty }$ state feedback controller based on dynamic observer for FOS are derived. Ultimately, the effectiveness of the proposed methodology is confirmed through a simulation example.
+
+
+
+Fig. 3. The maximum singular values of $G\left( s\right)$ .
+
+
+
+Fig. 4. State diagram of system (6).
+
+## REFERENCES
+
+[1] Naifar O, Makhlouf A B. Fractional Order Systems-Control Theory and Applications[M]. Berlin Germany: Springer International Publishing, 2022.
+
+[2] Rivero M, Rogosin S V, Tenreiro Machado J A, et al. Stability of fractional order systems[J]. Mathematical Problems in Engineering, 2013, 2013(1): 356215.
+
+[3] Luo Y, Chen Y Q, Wang C Y, et al. Tuning fractional order proportional integral controllers for fractional order systems[J]. Journal of Process Control, 2010, 20(7): 823-831.
+
+[4] Pan I, Das S. Intelligent fractional order systems and control: an introduction[M]. Springer, 2012.
+
+[5] Aguila-Camacho N, Duarte-Mermoud M A, Gallegos J A. Lyapunov functions for fractional order systems[J]. Communications in Nonlinear Science and Numerical Simulation, 2014, 19(9): 2951-2957.
+
+[6] Jamil A A, Tu W F, Ali S W, et al. Fractional-order PID controllers for temperature control: A review[J]. Energies, 2022, 15(10): 3800.
+
+[7] Wen X J, Wu Z M, Lu J G. Stability analysis of a class of nonlinear fractional-order systems[J]. IEEE Transactions on circuits and systems II: Express Briefs, 2008, 55(11): 1178-1182.
+
+[8] Marir S, Chadli M, Basin M V. Bounded real lemma for singular linear continuous-time fractional-order systems[J]. Automatica, 2022, 135: 109962.
+
+[9] Shafai B, Nazari S, Moradmand A. A direct algebraic approach to design state feedback and observers for singular systems[C]//2019 IEEE Conference on Control Technology and Applications (CCTA). IEEE, 2019: 835-842.
+
+[10] Meng B, Wang X, Zhang Z, et al. Necessary and sufficient conditions for normalization and sliding mode control of singular fractional-order systems with uncertainties[J]. Science China Information Sciences, 2020, 63: 1-10.
+
+[11] Batiha I M, El-Khazali R, AlSaedi A, et al. The general solution of singular fractional-order linear time-invariant continuous systems with regular pencils[J]. Entropy, 2018, 20(6): 400.
+
+[12] Zhan T, Liu X, Ma S. A new singular system approach to output feedback sliding mode control for fractional order nonlinear systems[J]. Journal of the Franklin Institute, 2018, 355(14): 6746-6762.
+
+[13] Gao S, Zhao D, Yan X, et al. Model-free adaptive state feedback control for a class of nonlinear systems[J]. IEEE Transactions on Automation Science and Engineering, 2023, 21(2): 1824-1836.
+
+[14] Frei T, Chang C H, Filo M, et al. A genetic mammalian proportional-integral feedback control circuit for robust and precise gene regulation[J]. Proceedings of the National Academy of Sciences, 2022, 119(00): e2122132119.
+
+[15] Hauswirth A, He Z, Bolognani S, et al. Optimization algorithms as robust feedback controllers[J]. Annual Reviews in Control, 2024, 57: 100941.
+
+[16] Thanh N T, Ngoc Phat V. Switching law design for finite-time stability of singular fractional-order systems with delay[J]. IET Control Theory & Applications, 2019, 13(9): 1367-1373.
+
+[17] Wu B, Chang X H, Zhao X. Fuzzy ${H}_{\infty }$ Output Feedback Control for Nonlinear NCSs With Quantization and Stochastic Communication Protocol[J]. IEEE Transactions on Fuzzy Systems, 2020, 29(9): 2623- 2634.
+
+[18] Fadiga L, Farges C, Sabatier J, et al. On computation of ${H}_{\infty }$ norm for commensurate fractional order systems[C]//2011 50th IEEE Conference on Decision and Control and European Control Conference. IEEE, 2011: 8231-8236.
+
+[19] Liang S, Wei Y H, Pan J W, et al. Bounded real lemmas for fractional order systems[J]. International Journal of Automation and Computing, 2015, 12(2): 192-198.
+
+[20] Zhang Q H, Lu J G. ${H}_{\infty }$ control for singular fractional-order interval systems: The $0 < \alpha < 1$ case[J]. ISA transactions,2021,110: 105-116.
+
+[21] Thuan M V, Sau N H, Huyen N T T. Finite-time ${H}_{\infty }$ control of uncertain fractional-order neural networks[J]. Computational and Applied Mathematics, 2020, 39(2): 59.
+
+[22] Martínez-Guerra R, Pérez-Pinacho C A, Gómez-Cortés G C, et al. An Observer for a Class of Incommensurate Fractional-Order Systems[J]. Synchronization of Integral and Fractional Order Chaotic Systems: A Differential Algebraic and Differential Geometric Approach With Selected Applications in Real-Time, 2015: 219-236.
+
+[23] Boukal Y, Darouach M, Zasadzinski M, et al. Robust ${H}_{\infty }$ observer-based control of fractional-order systems with gain parametrization[J]. IEEE Transactions on Automatic Control, 2017, 62(11): 5710-5723.
+
+[24] Li X J, Yang G H. Adaptive ${H}_{\infty }$ control in finite frequency domain for uncertain linear systems[J]. Information Sciences, 2015, 314: 14-27.
+
+[25] Li Y, Wei Y, Yuquan C, et al. ${H}_{\infty }$ bounded real lemma for singular fractional-order systems[J]. International Journal of Systems Science, 2021, 52(12): 2538-2548.
+
+[26] Guo Y, Lin C, Chen B, et al. Stabilization for singular fractional-order systems via static output feedback[J]. IEEE Access, 2018, 6: 71678- 71684.
+
+[27] Li B, Zhao X. Robust ${H}_{\infty }$ control for fractional order singular systems $0 < \alpha < 1$ with uncertainty[J].Optimal Control Applications and Methods, 2023, 44(1):332-348.
+
+[28] Li H, Yang G H. Dynamic output feedback ${H}_{\infty }$ control for fractional-order linear uncertain systems with actuator faults[J].Journal of the Franklin Institute, 2019, 356(8).
+
+[29] Silva B M C, Ishihara J Y, Tognetti E S. LMI-based consensus of linear multi-agent systems by reduced-order dynamic output feedback[J]. ISA transactions, 2022, 129: 121-129.
+
+[30] Kaczorek T, Rogowski K. Fractional linear systems and electrical circuits[M]. Cham, Switzerland: Springer International Publishing, 2015.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/alucFTO60T/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/alucFTO60T/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..77f6d68d8076e0387d6635697ddc5eea44ce8cc5
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/alucFTO60T/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,439 @@
+§ ${H}_{\infty }$ STATE FEEDBACK CONTROLLER BASED ON DYNAMIC OBSERVER DESIGN FOR SINGULAR FRACTIONAL-ORDER SYSTEMS
+
+1st Minghui Wei
+
+Shenyang Aerospace University
+
+School of Automation
+
+Shenyang, China
+
+2271700918@qq.com
+
+2nd He Li
+
+Shenyang Aerospace University
+
+School of Automation
+
+Shenyang, China
+
+lihe_good@126.com
+
+3rd Shuo Liu
+
+Shenyang Aerospace University
+
+School of Automation
+
+Shenyang, China
+
+2922601793@qq.com
+
+Abstract-This paper focuses on the problem of ${H}_{\infty }$ state feedback controller design based on a dynamic observer for singular fractional-order systems (FOS), where the fractional derivative order $\alpha$ lies between 0 and 1. First, a new form of dynamic observer with a non-singular structure is proposed, which is easier to implement physically. Second, the bounded real lemma corresponding to the ${H}_{\infty }$ norm of FOS is formulated via a set of linear matrix inequalities (LMIs). Compared to existing methods, the lemma employs real matrix variables, which are easier to solve. Building upon the new lemma, the conditions for designing the ${H}_{\infty }$ state feedback controller based on a dynamic observer of FOS are derived. Finally, a numerical example is presented to validate the effectiveness of the proposed method.
+
+Index Terms-singular fractional-order systems (singular FOS), dynamic observer, ${H}_{\infty }$ control, state feedback control
+
+§ I. INTRODUCTION
+
+In the past decade, fractional-order calculus has garnered considerable attention from physicists and engineers [1]. It has been observed that many systems across various interdisciplinary fields can be effectively described using fractional derivatives because these derivatives capture the historical evolution of functions and demonstrate stronger global correlations compared to integer derivatives. Numerous systems exhibit fractional-order dynamics, including viscoelastic systems [2], dielectric polarization [3], electrode-electrolyte polarization [4], electromagnetic waves [5], quantitative finance [6], and the quantum evolution of complex systems [7].
+
+Singular systems, also known as generalized systems, encompass both differential and algebraic equations [8]. This model accounts for physical constraints, static relationships, and broader impulsive behaviors due to improper transfer matrices. In contrast to non-singular systems, singular FOS provide a more precise representation of the physical properties of systems, offering direct and comprehensive descriptions [9]. Since their introduction in many fields of system design and control, singular FOS have received considerable attention. They have diverse applications in electrical systems, large-scale interconnected networks, power grids, constrained mechanical systems, and chemical processes [10]-[12].
+
+In control system design, state feedback controllers are typically designed to meet specific performance criteria. In practice, however, access to all states of the considered system is often unavailable, and the system output measurements may not provide complete information about the internal states, which poses challenges for state feedback design [13]. The theory of observer design has therefore attracted widespread attention. Based on the state estimates obtained from an observer, an observer-based controller generates control laws that stabilize unstable systems or ensure a desired performance [14], [15]. Recently, research on observer-based control for FOS has been developed. In [22], a novel observer-free synchronization method is introduced for a specific category of incommensurate fractional-order systems. The robust ${H}_{\infty }$ observer-based control of linear time-invariant perturbed uncertain FOS is studied in [23]; by analyzing the ${H}_{\infty }$ norm of the FOS and taking the fractional order $\alpha$ into account, a new sufficient condition is proposed to ensure the stability of the estimation error system.
+
+${H}_{\infty }$ control plays a crucial role in control systems: ${H}_{\infty }$ optimization guarantees prescribed levels of disturbance attenuation in the presence of disturbances with bounded energy. However, it has typically been limited to integer-order systems [17]. In recent years, the computation of ${H}_{2}$ and ${H}_{\infty }$ norms has been extended to FOS. The ${H}_{2}$ norm of fractional transfer functions of implicit type is studied in [16]. In [18], two methods, based on a binary algorithm and an LMI condition, are employed to compute the ${H}_{\infty }$ norm of FOS and determine the Hamiltonian matrix. Using the generalized Kalman-Yakubovich-Popov (KYP) lemma, bounded real lemmas for the ${H}_{ - }$ norm and the ${H}_{\infty }$ norm of FOS are derived via a series of LMIs in [19]. Building on these analysis results, numerous studies have focused on designing ${H}_{\infty }$ controllers and observers. The finite-time ${H}_{\infty }$ control problem of fractional-order neural networks is studied in [21] using finite-time stability theory and the Lyapunov function method. The ${H}_{\infty }$ control problem for singular FOS with order between 0 and 1 is explored in [20].
+
+This study was funded by National Natural Science Foundation of China (grant number 62003223).
+
+In this work, the problem of designing an ${H}_{\infty }$ state feedback controller based on a dynamic observer for singular FOS is studied. The main contributions can be summarized as follows:
+
+ * A dynamic observer is proposed. Compared with [26], the observer in this paper has a non-singular structure, making it easier to implement.
+
+ * Novel necessary and sufficient conditions for the bounded real lemma corresponding to the ${H}_{\infty }$ norm of singular FOS with order $0 < \alpha < 1$ are proposed. Unlike previous approaches, such as [24] and [25], the matrix variable is real, which is easier to solve.
+
+ * Based on the bounded real lemma, the conditions for designing the dynamic observer are given via a set of LMIs.
+
+Notations: In the subsequent sections of the paper, $A$ is a Hermitian matrix if and only if ${A}^{ * } = A$ , and $A > 0$ means that $A$ is positive definite. $\operatorname{Re}\left( Q\right)$ and $\operatorname{Im}\left( Q\right)$ represent the real and imaginary parts of the complex matrix $Q$ , respectively, and $\operatorname{Sym}\left( A\right) = A + {A}^{T}$ .
+
+Proposition 1. A complex Hermitian matrix $Q$ satisfies $Q < 0$ if and only if
+
+$$
+\left\lbrack \begin{matrix} \operatorname{Re}\left( Q\right) & \operatorname{Im}\left( Q\right) \\ - \operatorname{Im}\left( Q\right) & \operatorname{Re}\left( Q\right) \end{matrix}\right\rbrack < 0.
+$$
+
+§ II. PROBLEM STATEMENT AND PRELIMINARIES
+
+Consider the following singular FOS:
+
+$$
+\begin{cases} E{D}^{\alpha }x & = {Ax}\left( t\right) + {Bu}\left( t\right) + {B}_{w}w\left( t\right) , \\ z\left( t\right) & = {C}_{z}x\left( t\right) + {D}_{z}u\left( t\right) , \\ y\left( t\right) & = {Cx}\left( t\right) , \end{cases} \tag{1}
+$$
+
+in which $\alpha$ is the fractional order with $0 < \alpha < 1$ , $x \in {R}^{n}$ is the pseudo-state vector, $y \in {R}^{q}$ is the output vector, $z \in {R}^{r}$ is the control output, $u \in {R}^{m}$ is the control input, and $w \in {R}^{p}$ is the disturbance input. $A,B,{B}_{w},C,{C}_{z}$ are constant matrices of appropriate dimensions, and $E \in {R}^{n \times n}$ is a singular matrix, i.e., $\operatorname{rank}\left( E\right) < n$ . ${D}^{\alpha }$ denotes the Caputo fractional derivative
+
+$$
+{D}^{\alpha }f\left( t\right) = \frac{1}{\Gamma \left( {m - \alpha }\right) }{\int }_{{t}_{0}}^{t}\frac{{f}^{\left( m\right) }\left( \tau \right) }{{\left( t - \tau \right) }^{\alpha + 1 - m}}{d\tau }. \tag{2}
+$$
+
+In this paper, it is assumed that $E$ and $C$ are such that
+
+$$
+\text{ rank }\left\lbrack \begin{array}{l} E \\ C \end{array}\right\rbrack = n\text{ . } \tag{3}
+$$
+
+Then, consider the following observer-based controller
+
+$$
+\begin{cases} {D}^{\alpha }z\left( t\right) = & {TA}\widehat{x}\left( t\right) + {TBu}\left( t\right) + {C}_{d}{x}_{d}\left( t\right) \\ & + {D}_{d}\left( {y\left( t\right) - \widehat{y}\left( t\right) }\right) , \\ {D}^{\alpha }{x}_{d}\left( t\right) = & {A}_{d}{x}_{d}\left( t\right) + {B}_{d}\left( {y\left( t\right) - \widehat{y}\left( t\right) }\right) , \\ \widehat{x}\left( t\right) = & z\left( t\right) + {Ny}\left( t\right) , \\ u\left( t\right) = & K\widehat{x}\left( t\right) , \end{cases} \tag{4}
+$$
+
+in which $\widehat{x}\left( t\right) \in {R}^{n}$ is the state estimation vector, ${x}_{d}\left( t\right) \in {R}^{n}$ is an auxiliary state vector, and $T,N,{A}_{d},{B}_{d},{C}_{d},{D}_{d},K$ are constant matrices of appropriate dimensions, with $T,N$ satisfying
+
+$$
+{TE} + {NC} = {I}_{n}. \tag{5}
+$$
+
+in which ${I}_{n}$ represents the $n$ -dimensional identity matrix.
+
+Define $e\left( t\right) = x\left( t\right) - \widehat{x}\left( t\right)$ and $\bar{x} = {\left\lbrack \begin{array}{lll} {x}^{T} & {e}^{T} & {x}_{d}^{T} \end{array}\right\rbrack }^{T}$ . Combining the singular FOS (1) with the controller (4), one obtains
+
+$$
+\begin{cases} \bar{E}{D}^{\alpha }\bar{x}\left( t\right) & = \bar{A}\bar{x}\left( t\right) + \bar{B}w\left( t\right) , \\ z\left( t\right) & = \bar{C}\bar{x}\left( t\right) , \end{cases} \tag{6}
+$$
+
+where,
+
+$$
+\bar{E} = \left\lbrack \begin{matrix} E & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{matrix}\right\rbrack ,\bar{A} = \left\lbrack \begin{matrix} A + {BK} & - {BK} & 0 \\ 0 & {TA} - {D}_{d}C & - {C}_{d} \\ 0 & {B}_{d}C & {A}_{d} \end{matrix}\right\rbrack ,
+$$
+
+$$
+\bar{B} = \left\lbrack \begin{matrix} {B}_{w} \\ T{B}_{w} \\ 0 \end{matrix}\right\rbrack ,\bar{C} = \left\lbrack \begin{array}{lll} {C}_{z} + {D}_{z}K & - {D}_{z}K & 0 \end{array}\right\rbrack .
+$$
+
+The transfer function of system (6) is
+
+$$
+G\left( s\right) = \bar{C}{\left( {s}^{\alpha }\bar{E} - \bar{A}\right) }^{-1}\bar{B}. \tag{7}
+$$
+
+The design problem of the ${H}_{\infty }$ state feedback controller based on a dynamic observer is to design a controller such that the closed-loop system (6) is admissible and its transfer function satisfies $\parallel G\left( s\right) {\parallel }_{\infty } < \gamma$ .
+
+Lemma 1. [27] Let $\gamma$ be a scalar such that $\gamma > 0$ . The singular FOS is admissible and satisfies the condition $\parallel G\left( s\right) {\parallel }_{\infty } < \gamma$ if there exists $\bar{E}P = {P}^{ * }{\bar{E}}^{T} \in {\mathbf{C}}^{n \times n} > 0$ such that
+
+$$
+\left\lbrack \begin{matrix} \operatorname{Sym}\left( {\bar{A}\left( {{rP} + \bar{r}\bar{P}}\right) }\right) & * & * \\ \bar{C}\left( {{rP} + \bar{r}\bar{P}}\right) & - I & * \\ {\bar{B}}^{T} & 0 & - {\gamma }^{2}I \end{matrix}\right\rbrack < 0, \tag{8}
+$$
+
+where $r = {e}^{j\theta },\theta = \frac{\pi }{2}\left( {1 - \alpha }\right)$ .
+
+Lemma 2. [28] Let $\gamma$ be a scalar such that $\gamma > 0$ . Then the following statements are equivalent:
+
+(i) there exists $\bar{E}P = {P}^{ * }{\bar{E}}^{T} \in {\mathbf{C}}^{n \times n} > 0$ such that
+
+$$
+\left\lbrack \begin{matrix} \operatorname{Sym}\left( {\bar{A}\left( {{rP} + \bar{r}\bar{P}}\right) }\right) & * & * \\ \bar{C}\left( {{rP} + \bar{r}\bar{P}}\right) & - I & * \\ {\bar{B}}^{T} & 0 & - {\gamma }^{2}I \end{matrix}\right\rbrack < 0, \tag{9}
+$$
+
+where $r = {e}^{j\theta },\theta = \frac{\pi }{2}\left( {1 - \alpha }\right)$ .
+
+(ii) there exists matrix $M \in {\mathbf{R}}^{n \times n}$ such that
+
+$$
+\left\lbrack \begin{matrix} \operatorname{Sym}\left( {\bar{A}M}\right) & * & * \\ \bar{C}M & - I & * \\ {\bar{B}}^{T} & 0 & - {\gamma }^{2}I \end{matrix}\right\rbrack < 0, \tag{10}
+$$
+
+$$
+\left\lbrack \begin{matrix} \left( {\bar{E}M + {\left( \bar{E}M\right) }^{T}}\right) /a & * \\ \left( {\bar{E}M - {\left( \bar{E}M\right) }^{T}}\right) /b & \left( {\bar{E}M + {\left( \bar{E}M\right) }^{T}}\right) /a \end{matrix}\right\rbrack > 0, \tag{11}
+$$
+
+where $\theta = \frac{\pi }{2}\left( {1 - \alpha }\right) ,a = 4\cos \theta ,b = 4\sin \theta$ .
+
+Proof. Define ${Q}_{1} = \bar{E}{P}_{1} = \operatorname{Re}\left( Q\right)$ and ${Q}_{2} = \bar{E}{P}_{2} = \operatorname{Im}\left( Q\right)$ . According to Proposition 1, the condition $Q = \bar{E}{P}_{1} + \bar{E}{P}_{2}i > 0$ is equivalent to
+
+$$
+\left\lbrack \begin{matrix} \bar{E}{P}_{1} & \bar{E}{P}_{2} \\ - \bar{E}{P}_{2} & \bar{E}{P}_{1} \end{matrix}\right\rbrack > 0. \tag{12}
+$$
+
+Since $r = {e}^{j\theta } = \cos \theta + i\sin \theta$ and $\bar{r} = {e}^{-{j\theta }} = \cos \theta - i\sin \theta$ , it follows that
+
+$$
+{rQ} + \bar{r}\bar{Q} = \left( {\cos \theta + i\sin \theta }\right) \left( {\bar{E}{P}_{1} + \bar{E}{P}_{2}i}\right) + \left( {\cos \theta - i\sin \theta }\right) \left( {\bar{E}{P}_{1} - \bar{E}{P}_{2}i}\right) = 2\cos \theta \bar{E}{P}_{1} - 2\sin \theta \bar{E}{P}_{2}. \tag{13}
+$$
+
+Note that $\bar{E}{P}_{1}$ is a real symmetric matrix, while $\bar{E}{P}_{2}$ is a skew-symmetric matrix. Therefore
+
+$$
+{\left( rQ + \bar{r}\bar{Q}\right) }^{T} = \left( {2\cos \theta \bar{E}{P}_{1} + 2\sin \theta \bar{E}{P}_{2}}\right) . \tag{14}
+$$
+
+Let $\widetilde{Q} = {rQ} + \bar{r}\bar{Q}$ and $M = {rP} + \bar{r}\bar{P}$ ; then $\widetilde{Q} = \bar{E}M$ , so (9) is equivalent to (10). From (13) and (14), we obtain
+
+$$
+\left\{ \begin{array}{l} \bar{E}{P}_{1} = \left( {\bar{E}M + {\left( \bar{E}M\right) }^{T}}\right) /\left( {4\cos \theta }\right) \\ \bar{E}{P}_{2} = \left( {{\left( \bar{E}M\right) }^{T} - \bar{E}M}\right) /\left( {4\sin \theta }\right) . \end{array}\right. \tag{15}
+$$
+
+Combining (15) and (12), it follows that (11) holds, which is equivalent to condition $Q = \bar{E}{P}_{1} + \bar{E}{P}_{2}i > 0$ . This ends the proof.
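+The decomposition (15) can be verified numerically: pick $\theta$ , a symmetric matrix playing the role of $\bar{E}{P}_{1}$ and a skew-symmetric one for $\bar{E}{P}_{2}$ , form $\bar{E}M = {rQ} + \bar{r}\bar{Q}$ as in (13), and recover both parts. A small Python sketch with illustrative $2 \times 2$ matrices of our own choosing:
+
+```python
+import math, cmath
+
+theta = math.pi / 2 * (1 - 1 / 3)      # theta for alpha = 1/3
+r = cmath.exp(1j * theta)
+
+EP1 = [[2.0, 1.0], [1.0, 3.0]]         # symmetric part (role of E*P1, illustrative)
+EP2 = [[0.0, 0.5], [-0.5, 0.0]]        # skew-symmetric part (role of E*P2)
+
+# Q = E*P1 + i E*P2 ;  E*M = r Q + conj(r) conj(Q) = 2cos(theta)E*P1 - 2sin(theta)E*P2
+EM = [[(r * (EP1[i][j] + 1j * EP2[i][j])
+        + r.conjugate() * (EP1[i][j] - 1j * EP2[i][j])).real
+       for j in range(2)] for i in range(2)]
+
+EMT = [[EM[j][i] for j in range(2)] for i in range(2)]  # transpose of E*M
+
+# recovery formulas (15)
+rec1 = [[(EM[i][j] + EMT[i][j]) / (4 * math.cos(theta)) for j in range(2)] for i in range(2)]
+rec2 = [[(EMT[i][j] - EM[i][j]) / (4 * math.sin(theta)) for j in range(2)] for i in range(2)]
+print(rec1, rec2)  # reproduce EP1 and EP2 respectively
+```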
+
+§ III. MAIN RESULTS
+
+In this section, the problem of designing the observer-based controller is transformed into an optimization problem over LMIs. The conditions for designing the ${H}_{\infty }$ state feedback controller based on a dynamic observer for singular FOS are provided in the form of LMIs.
+
+Theorem 1. Let the system (6) be admissible and $\gamma > 0$ . Then $\parallel G\left( s\right) {\parallel }_{\infty } < \gamma$ if and only if there exist matrices $X,Y,H$ , $S,G,L$ and $W$ such that
+
+$$
+\left\lbrack \begin{matrix} {\Omega }_{11} & * & * & * & * \\ - {\left( BS\right) }^{T} & {\Omega }_{22} & * & * & * \\ {A}^{T} & {\Omega }_{32} & {\Omega }_{33} & * & * \\ {\Omega }_{41} & - {D}_{z}S & {C}_{z} & - {\gamma I} & * \\ {B}_{w}^{T} & {B}_{w}^{T}{T}^{T} & {B}_{w}^{T}{T}^{T}{Y}^{T} & 0 & - {\gamma I} \end{matrix}\right\rbrack < 0 \tag{16}
+$$
+
+$$
+\left\lbrack \begin{matrix} \left( {\mathbf{Q} + {\mathbf{Q}}^{T}}\right) /a & * \\ \left( {\mathbf{Q} - {\mathbf{Q}}^{T}}\right) /b & \left( {\mathbf{Q} + {\mathbf{Q}}^{T}}\right) /a \end{matrix}\right\rbrack > 0, \tag{17}
+$$
+
+where
+
+$$
+{\Omega }_{11} = \operatorname{Sym}\left( {{AX} + {BS}}\right) ,
+$$
+
+$$
+{\Omega }_{22} = \operatorname{Sym}\left( {{TAX} - H}\right) ,
+$$
+
+$$
+{\Omega }_{32} = {A}^{T}{T}^{T} - {C}^{T}{L}^{T} + W,
+$$
+
+$$
+{\Omega }_{33} = \operatorname{Sym}\left( {{YTA} + {GC}}\right) ,
+$$
+
+$$
+{\Omega }_{41} = {C}_{z}X + {D}_{z}S,
+$$
+
+$$
+\mathbf{Q} = \left\lbrack \begin{matrix} {EX} & 0 & E \\ 0 & X & I \\ 0 & Z & Y \end{matrix}\right\rbrack
+$$
+
+The controller gain $K$ and the matrices ${A}_{d},{B}_{d},{C}_{d},{D}_{d}$ are then given by
+
+$$
+K = S{X}^{-1},
+$$
+
+$$
+{D}_{d} = L
+$$
+
+$$
+{C}_{d} = \left( {H - {D}_{d}{CX}}\right) {U}^{-1},
+$$
+
+$$
+{B}_{d} = {V}^{-1}\left( {G + Y{D}_{d}}\right) ,
+$$
+
+$$
+{A}_{d} = {V}^{-1}\left( {W - {YTAX} + Y{D}_{d}{CX} - V{B}_{d}{CX} + Y{C}_{d}U}\right) {U}^{-1}, \tag{18}
+$$
+
+where $V$ and $U$ are invertible matrices satisfying
+
+$$
+Z = {YX} + {VU}. \tag{19}
+$$
+
+Proof. According to Lemma 2, the singular FOS is admissible and satisfies $\parallel G\left( s\right) {\parallel }_{\infty } < \gamma$ if there exists a matrix $M \in {\mathbf{R}}^{n \times n}$ such that
+
+$$
+\left\lbrack \begin{matrix} \operatorname{Sym}\left( {\bar{A}M}\right) & * & * \\ \bar{C}M & - I & * \\ {\bar{B}}^{T} & 0 & - {\gamma }^{2}I \end{matrix}\right\rbrack < 0, \tag{20}
+$$
+
+$$
+\left\lbrack \begin{matrix} \left( {\bar{E}M + {\left( \bar{E}M\right) }^{T}}\right) /a & * \\ \left( {\bar{E}M - {\left( \bar{E}M\right) }^{T}}\right) /b & \left( {\bar{E}M + {\left( \bar{E}M\right) }^{T}}\right) /a \end{matrix}\right\rbrack > 0. \tag{21}
+$$
+
+Next, the target is to linearize the condition (20) as described in [29]. First, choose $M$ and ${M}^{-1}$ as
+
+$$
+M = \left\lbrack \begin{matrix} X & 0 & {V}^{-T} \\ 0 & X & \left( {I - X{Y}^{T}}\right) {V}^{-T} \\ 0 & U & - U{Y}^{T}{V}^{-T} \end{matrix}\right\rbrack ,
+$$
+
+$$
+{M}^{-1} = \left\lbrack \begin{matrix} {X}^{-1} & - {X}^{-1} & {U}^{-1} \\ 0 & {Y}^{T} & \left( {I - {Y}^{T}X}\right) {U}^{-1} \\ 0 & {V}^{T} & - {V}^{T}X{U}^{-1} \end{matrix}\right\rbrack ,
+$$
+
+where $V$ and $U$ are invertible matrices satisfying $Z = {YX} + {VU}$ , and define
+
+$$
+F = \left\lbrack \begin{matrix} I & 0 & 0 \\ 0 & I & 0 \\ 0 & Y & V \end{matrix}\right\rbrack
+$$
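+This parameterization of $M$ and ${M}^{-1}$ can be sanity-checked in the scalar case $n = 1$ , where transposes drop out: for any invertible scalars $X, Y, U, V$ the product $M{M}^{-1}$ is the identity. A short Python sketch with arbitrary illustrative values (our own assumption, not the paper's example data):
+
+```python
+X, Y, U, V = 2.0, 0.5, 3.0, 4.0   # arbitrary invertible scalars (illustrative)
+
+# scalar instances of the block matrices M and M^{-1} given above
+M = [[X, 0.0, 1 / V],
+     [0.0, X, (1 - X * Y) / V],
+     [0.0, U, -U * Y / V]]
+
+Minv = [[1 / X, -1 / X, 1 / U],
+        [0.0, Y, (1 - Y * X) / U],
+        [0.0, V, -V * X / U]]
+
+prod = [[sum(M[i][k] * Minv[k][j] for k in range(3)) for j in range(3)]
+        for i in range(3)]
+print(prod)  # numerically the 3x3 identity
+```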
+
+Multiplying both sides of (20) by the block-diagonal matrix $\operatorname{diag}\{ F,I,I\}$ and its transpose, we obtain
+
+$$
+\left\lbrack \begin{matrix} {\widetilde{\Omega }}_{11} & * & * & * & * \\ - {\left( BKX\right) }^{T} & {\widetilde{\Omega }}_{22} & * & * & * \\ {A}^{T} & {\widetilde{\Omega }}_{32} & {\widetilde{\Omega }}_{33} & * & * \\ {\widetilde{\Omega }}_{41} & - {D}_{z}{KX} & {C}_{z} & - {\gamma I} & * \\ {B}_{w}^{T} & {B}_{w}^{T}{T}^{T} & {B}_{w}^{T}{T}^{T}{Y}^{T} & 0 & - {\gamma I} \end{matrix}\right\rbrack < 0, \tag{22}
+$$
+
+where
+
+$$
+{\widetilde{\Omega }}_{11} = \operatorname{Sym}\left( {{AX} + {BKX}}\right) ,
+$$
+
+$$
+{\widetilde{\Omega }}_{22} = \operatorname{Sym}\left( {{TAX} - {D}_{d}{CX} - {C}_{d}U}\right) ,
+$$
+
+$$
+{\widetilde{\Omega }}_{32} = {YTAX} - Y{D}_{d}{CX} + V{B}_{d}{CX} - Y{C}_{d}U + V{A}_{d}U + {\left( TA - {D}_{d}C\right) }^{T}, \tag{23}
+$$
+
+$$
+{\widetilde{\Omega }}_{33} = \operatorname{Sym}\left( {{YTA} - Y{D}_{d}C + V{B}_{d}C}\right) ,
+$$
+
+$$
+{\widetilde{\Omega }}_{41} = {C}_{z}X + {D}_{z}{KX}.
+$$
+
+Let
+
+$$
+S = {KX}, \tag{24}
+$$
+
+$$
+H = {D}_{d}{CX} + {C}_{d}U,
+$$
+
+$$
+W = {YTAX} - Y{D}_{d}{CX} + V{B}_{d}{CX} - Y{C}_{d}U + V{A}_{d}U,
+$$
+
+$$
+G = V{B}_{d} - Y{D}_{d}.
+$$
+
+Combining (24) and (22), it follows that (16) holds.
+
+Then, multiplying both sides of (21) by $F$ and ${F}^{T}$ and defining $Z = {YX} + {VU}$ yields (17). This concludes the proof.
+
+§ IV. NUMERICAL EXAMPLE
+
+Consider the fractional-order electrical circuit depicted in Fig. 1 [30], which can be characterized as
+
+$$
+{e}_{1} = {L}_{1}\frac{{d}^{\alpha }{i}_{1}}{d{t}^{\alpha }} + {L}_{3}\frac{{d}^{\alpha }{i}_{3}}{d{t}^{\alpha }} + {R}_{1}{i}_{1} + {R}_{3}{i}_{3},
+$$
+
+$$
+{e}_{2} = {L}_{2}\frac{{d}^{\alpha }{i}_{2}}{d{t}^{\alpha }} + {L}_{3}\frac{{d}^{\alpha }{i}_{3}}{d{t}^{\alpha }} + {R}_{2}{i}_{2} + {R}_{3}{i}_{3},
+$$
+
+$$
+{i}_{3} = {i}_{1} + {i}_{2}.
+$$
+
+Let $x\left( t\right) = {\left\lbrack \begin{array}{lll} {i}_{1} & {i}_{2} & {i}_{3} \end{array}\right\rbrack }^{T}$ and $z\left( t\right) = {Cx}\left( t\right)$ . Then
+
+$$
+\left\lbrack \begin{matrix} {L}_{1} & 0 & {L}_{3} \\ 0 & {L}_{2} & {L}_{3} \\ 0 & 0 & 0 \end{matrix}\right\rbrack {D}^{\alpha }x\left( t\right) = \left\lbrack \begin{matrix} - {R}_{1} & 0 & - {R}_{3} \\ 0 & - {R}_{2} & - {R}_{3} \\ 1 & 1 & - 1 \end{matrix}\right\rbrack x\left( t\right) + \left\lbrack \begin{array}{ll} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{array}\right\rbrack u\left( t\right) + \left\lbrack \begin{array}{l} 1 \\ 1 \\ 1 \end{array}\right\rbrack w\left( t\right) ,
+$$
+
+$$
+z\left( t\right) = \left\lbrack \begin{array}{lll} 1 & 1 & 1 \end{array}\right\rbrack x\left( t\right) .
+$$
+
+The parameters are chosen as follows:
+
+$$
+A = \left\lbrack \begin{matrix} - 4 & 0 & - 5 \\ 0 & - 3 & - 5 \\ 1 & 1 & - 1 \end{matrix}\right\rbrack ,B = \left\lbrack \begin{array}{ll} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{array}\right\rbrack ,{B}_{w} = \left\lbrack \begin{array}{l} 1 \\ 1 \\ 1 \end{array}\right\rbrack ,
+$$
+
+$$
+C = \left\lbrack \begin{array}{lll} 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right\rbrack ,{C}_{z} = \left\lbrack \begin{array}{lll} 1 & 1 & 1 \end{array}\right\rbrack ,{D}_{z} = \left\lbrack \begin{array}{ll} 1 & 0 \end{array}\right\rbrack ,
+$$
+
+$$
+E = \left\lbrack \begin{array}{lll} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{array}\right\rbrack ,\alpha = 1/3. \tag{25}
+$$
+
+Then the solution to (5) is
+
+$$
+T = \left\lbrack \begin{matrix} 1 & - 1 & 2 \\ 0 & 1 & 3 \\ 0 & 1 & 3 \end{matrix}\right\rbrack ,N = \left\lbrack \begin{matrix} 1 & 0 \\ 0 & 0 \\ - 1 & 1 \end{matrix}\right\rbrack .
+$$
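+These matrices can be checked directly against conditions (3) and (5). A quick Python verification using the example data, with matrix products and ranks computed from scratch:
+
+```python
+E = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]
+C = [[0, 1, 0], [0, 0, 1]]
+T = [[1, -1, 2], [0, 1, 3], [0, 1, 3]]
+N = [[1, 0], [0, 0], [-1, 1]]
+
+def matmul(A, B):
+    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
+            for i in range(len(A))]
+
+def rank(A):
+    """Rank via Gaussian elimination with partial pivoting."""
+    A = [[float(x) for x in row] for row in A]
+    rows, cols, r = len(A), len(A[0]), 0
+    for c in range(cols):
+        piv = max(range(r, rows), key=lambda i: abs(A[i][c]), default=None)
+        if piv is None or abs(A[piv][c]) < 1e-10:
+            continue
+        A[r], A[piv] = A[piv], A[r]
+        for i in range(r + 1, rows):
+            f = A[i][c] / A[r][c]
+            A[i] = [A[i][j] - f * A[r][j] for j in range(cols)]
+        r += 1
+    return r
+
+# condition (5): TE + NC = I_3
+TE_NC = [[matmul(T, E)[i][j] + matmul(N, C)[i][j] for j in range(3)] for i in range(3)]
+print(TE_NC)
+# condition (3): rank(E) < n while the stacked matrix [E; C] has full column rank
+print(rank(E), rank(E + C))
+```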
+
+According to Theorem 1, we obtain that
+
+$$
+{A}_{d} = \left\lbrack \begin{matrix} - {0.0540} & {0.8565} & - {0.0369} \\ - {0.6895} & - {0.8754} & - {0.6877} \\ {3.0035} & {3.2893} & {0.8686} \end{matrix}\right\rbrack ,
+$$
+
+$$
+{B}_{d} = \left\lbrack \begin{matrix} {0.0422} & {0.0644} \\ {0.9692} & - {0.1048} \\ - {1.9431} & - {0.8367} \end{matrix}\right\rbrack
+$$
+
+$$
+{C}_{d} = \left\lbrack \begin{matrix} - {0.1496} & {2.9450} & - {0.0947} \\ {2.7666} & {2.0096} & - {1.9358} \\ - {0.5109} & - {2.9046} & - {2.1334} \end{matrix}\right\rbrack
+$$
+
+$$
+{D}_{d} = \left\lbrack \begin{matrix} - {0.0288} & - {0.0662} \\ {0.3001} & {0.3680} \\ {0.7859} & {1.0324} \end{matrix}\right\rbrack
+$$
+
+$$
+K = \left\lbrack \begin{matrix} {0.3793} & - {0.2046} & {0.1052} \\ - {0.3196} & {0.6161} & {1.4726} \end{matrix}\right\rbrack .
+$$
+
+Fig. 2 shows the state vector and the state estimation vector. The maximum singular values of $G\left( s\right)$ are plotted in Fig. 3, peaking at approximately 0.4031. The state diagram of system (6) is illustrated in Fig. 4, which implies that the system is stable and admissible.
+
+
+Fig. 1. The singular fractional-order electrical circuit.
+
+
+Fig. 2. State vector and state estimation vector.
+
+§ V. CONCLUSION
+
+The problem of designing an ${H}_{\infty }$ state feedback controller based on a dynamic observer for singular FOS is investigated in this paper. Firstly, a novel ${H}_{\infty }$ state feedback controller based on a dynamic observer is proposed. Additionally, a new bounded real lemma for the ${H}_{\infty }$ norm of FOS is presented, which forms the foundation for designing the dynamic observer. Then, conditions for designing the ${H}_{\infty }$ state feedback controller based on a dynamic observer for FOS are derived. Ultimately, the effectiveness of the proposed methodology is confirmed through a simulation example.
+
+
+Fig. 3. The maximum singular values of $G\left( s\right)$ .
+
+
+Fig. 4. State diagram of system (6).
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/au4HFflf6W/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/au4HFflf6W/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..747658b459f10a7b2357c53968345b352e896790
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/au4HFflf6W/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,335 @@
+# Regional Multi-ship Collision Risk Analysis Based on Velocity Obstacle Method: a Case Study on the Pearl River Estuary
+
+Qi Liu: School of Navigation, Wuhan University of Technology, Wuhan, China; State Key Laboratory of Maritime Technology and Safety, Wuhan, China; lq754001x@whut.edu.cn
+
+Pengfei Chen: School of Navigation, Wuhan University of Technology, Wuhan, China; State Key Laboratory of Maritime Technology and Safety, Wuhan, China; Chenpf@whut.edu.cn
+
+Junmin Mou: School of Navigation, Wuhan University of Technology, Wuhan, China; State Key Laboratory of Maritime Technology and Safety, Wuhan, China; Moujm@whut.edu.cn
+
+Linying Chen: School of Navigation, Wuhan University of Technology, Wuhan, China; State Key Laboratory of Maritime Technology and Safety, Wuhan, China; LinyingChen@whut.edu.cn
+
+Abstract-Analysis of regional multi-ship collision risk is essential for enhancing the efficiency of traffic management in maritime transportation. However, traditional collision risk analysis methods only assess the risk of collision from the viewpoint of ship-pair encounters. In this research, a novel framework for analyzing regional multi-ship collision risk based on the Velocity Obstacle (VO) method is proposed using AIS (Automatic Identification System) data. Firstly, the ships in a specific sea area are clustered with Density-Based Spatial Clustering of Applications with Noise (DBSCAN) to identify multi-ship encounter situations. Afterward, a new collision risk indicator utilizing the VO-based time-varying collision risk measurement method is proposed to calculate the collision risk of a single ship. Secondly, the macro-regional collision risk is quantified by calculating the contribution of each ship and each cluster with the Shapley value in cooperative games. Finally, to verify the effectiveness of the proposed framework, we carried out a case study of the Pearl River Estuary in China using historical AIS data. The results show that the proposed framework for regional multi-ship collision risk analysis can help maritime surveillance operators identify ships with high risk and gain a better understanding of regional collision risk from both microscopic and macroscopic perspectives.
+
+Keywords-multi-ship encounter situation, velocity obstacle, time-varying risk, Shapley value, maritime traffic safety
+
+## I. INTRODUCTION
+
+Maritime transportation is one of the most important modes of transportation for international trade today. With the trend towards economic globalization, maritime transportation has continued to grow over the past decades. However, the increase in maritime transportation volume has led to higher maritime traffic density and complexity, thereby increasing the occurrence rate of maritime accidents, in particular ship collisions [1]. In the face of relatively high maritime traffic volumes or complexity at sea, maritime surveillance operators tend to be subjective and arbitrary in the conduct of monitoring, lacking an overall perception of regional collision risk, which puts large pressure on maritime surveillance. To analyse the risk of ship collision from multiple perspectives and enhance the regulatory efficiency of Vessel Traffic Service Operators (VTSO), it is imperative to put forward a novel framework for analysing regional multi-ship collision risk.
+
+The analysis of ship collision risk is a research hotspot in the maritime field and plays an important role in reducing the number of collision accidents and enhancing the efficiency and level of maritime traffic monitoring. To assess the risk of ship collision quantitatively from multiple perspectives, many scholars have conducted extensive research and proposed a variety of methods. These methods can be broadly categorized into three general groups: (1) synthetic indicator-based approaches; (2) safety domain-based approaches; and (3) velocity obstacle-based approaches.
+
+Synthetic indicator-based approaches integrate factors that indicate the spatial and temporal motion characteristics of encountering ships to measure the Collision Risk Index (CRI) using mathematical functions. The two most famous factors are Distance to Closest Point of Approach (DCPA) and Time to Closest Point of Approach (TCPA), which have been applied in [2-6]. In addition, Zhang et al. [7], considering several risk-influencing elements, introduced a new risk indicator named the Vessel Collision Risk Operator (VCRO) to measure the level of conflict risk between ships; relevant work can further be found in [8]. Building on the research of Zhang et al., [9] improved the relative distance in the original VCRO and proposed an enhanced vessel conflict ranking operator model, which further improved the accuracy of conflict risk measurement.
+
+Safety domain-based approaches usually construct the own ship's (OS) safety domain in space, take ships intruding into the safety domain of the OS as posing a collision risk, detect potential collision conflicts, and assess the risk of ship collision in terms of invasions of or overlaps between the safety domains of encountering ships, such as the ship domain [10] and the collision diameter [11]. The ship domain has received a great deal of attention in recent years, and massive AIS (Automatic Identification System) data and advances in intelligent technologies have facilitated the development of various ship domain models with different shapes, including circular, elliptical [12], and polygonal [13]. These ship domains have been applied in collision risk analysis. For instance, Wang et al. [14], based on the elliptical ship domain, developed the Quaternion Ship Domain (QSD) by incorporating the impact of the COLREGS on actual ship encounter situations and used it for the assessment of ship collision risk. Szlapczynski et al. [15] developed domain intrusion time/degree indicators to evaluate the collision risk during ship navigation. Liu et al. [16] proposed a collision probability model by introducing the maximum interval and the violation degree of two ship domains to measure the collision risk. Li et al. [17] proposed a novel collision risk assessment model based on the integration of elliptic and quadratic ship domains, offering a new way for collision risk measurement.
+
+---
+
+The work presented in this study is financially supported by the National Natural Science Foundation of China (Grant Number: 52101402, 52271367)
+
+---
+
+Velocity obstacle-based approaches transform the spatial-temporal correlations between ships into the velocity domain and judge whether the OS's velocity sets fall into the dangerous velocity space to determine whether a collision risk exists. Recently, this line of work has progressively combined the ship domain with the VO, proposing the non-linear VO [18] and generalized VO algorithms [19], and VO algorithms have been widely applied in ship collision risk analysis. For instance, Huang et al. [20] first developed the VO-based Time-varying Collision Risk (TCR) measurement method to estimate the collision risk of a single ship in multi-ship encounters. Chen et al. [21], based on the TCR measurement, introduced a real-time regional ship collision risk analysis method for different encounter situations. Li et al. [22] proposed a rule-aware TCR model for real-time collision risk analysis, which integrates the impact of various factors in actual situations.
+
+The above approaches provide a solid foundation for the development of collision risk analysis methods. However, they mainly assess the collision risk from the viewpoint of ship-ship encounters and analyse ship collision risk only from a microscopic perspective. With the gradual increase in the number of ships, multi-ship encounters are common at sea. Therefore, it is necessary to propose a novel framework to analyse ship collision risk in regional multi-ship encounters from multiple perspectives. Relevant work has been done. Zhang et al. [23] combined the density complexity and the multi-vessel collision risk operator to analyse regional vessel collision risk. Zhen et al. [24], considering the impact factors of DCPA, TCPA, ship crossing angle, and navigational environment, proposed a fuzzy logic-based collision risk model for regional multi-ship collision risk assessment. Besides, Liu et al. [25] developed a framework for regional collision risk identification with a spatial clustering method. The contribution of this study is to introduce a novel regional collision risk analysis framework that combines the TCR-based collision risk measurement and the Shapley value method. This framework can accurately identify high-risk ships and quantify the regional collision risk from both micro and macro perspectives, which will help the VTSO to accurately grasp the trend of the regional collision risk and strengthen their capacity and efficiency of maritime safety surveillance.
+
+The remainder of this paper is organized as follows. The methodology of the research is introduced in Section II. Section III describes the construction of the framework. Section IV conducts a case study with the proposed framework for regional collision risk analysis. Discussion of the results and comparisons are presented to validate the effectiveness and feasibility of the proposed framework in Section V. Finally, Section VI concludes the research.
+
+## II. METHODOLOGY
+
+## A. Overview of the Study
+
+In this study, the collision risk is defined as the percentage of velocities that might potentially result in a collision accident within the entire velocity sets of the OS. This definition comprehensively considers the motion state that the ship needs to maintain for effective collision avoidance from the free space's viewpoint and provides a quantitative measurement of the collision risk faced by the OS, which could significantly assist the VTSO in assessing and mitigating potential collision scenarios. Building upon this definition, we proposed a novel framework based on VO method to analyse regional multi-ship collision risk from both microscopic and macroscopic perspectives, which is beneficial to have an overall understanding of regional multi-ship collision risk and improve the efficiency of safety management for the VTSO in jurisdictional waters.
+
+Firstly, the AIS data in the designated region are collected and preprocessed over a specified time interval. Subsequently, the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) method is employed to classify the ships into different clusters. This density-based clustering technique takes into account the spatial distances between ships to identify regional multi-ship encounter situations, which are critical for effective analysis. Secondly, we utilize the TCR-based collision risk measurement to accurately quantify the collision risk of individual ships. Besides, by combining the Shapley value method, the collision risk of each cluster is measured by calculating the contributions of the ships in the cluster. In this way, the macro-regional collision risk can be derived using the collision risk and the contribution of each cluster. Finally, to validate the effectiveness and feasibility of the proposed framework, we conduct two comparative experiments with existing collision risk approaches. These experiments are designed to rigorously verify the performance of our framework against traditional approaches, allowing us to demonstrate its advantages in terms of accuracy and applicability in real sailing scenarios. The proposed research framework is shown in Figure 1.
+
+
+
+Fig. 1. The proposed research framework
+
+## B. Regional Multi-ship Encounter Situation Recognition Using Density-based Clustering
+
+Density-based spatial clustering approaches are a fundamental category of unsupervised learning algorithms that have achieved widespread use in various applications due to their intuitiveness and speed, mainly including DBSCAN, hierarchical DBSCAN, and Ordering Points To Identify the Clustering Structure (OPTICS). These methods are based on the principle that the spatial density distribution of the data is processed with a predetermined threshold to divide the data into different groups [21]. In this research, we specifically utilise the DBSCAN method for the recognition of regional multi-ship encounter situations. This algorithm divides similar data into the same cluster according to certain principles and finds the noise data that do not belong to any cluster. The implementation of the DBSCAN algorithm requires the setting of two primary parameters: Eps and MinPts. The pseudocode for the DBSCAN algorithm is described in Figure 2. By employing the DBSCAN method, the ships in a selected region can be classified into multiple clusters, which reduces the burden of collision risk calculation and improves the efficiency of recognizing multi-ship encounter situations. During the clustering process, the ships that are not included in any cluster are considered noise points. These noise points are spatially distant from other vessels and are considered to have no collision risk with others. Therefore, we disregard these ships in this research, which helps simplify the calculation of collision risk.
+
+---
+
+Algorithm 1: The implementation process of the DBSCAN algorithm
+
+Input:
+
+ $D$ : a dataset containing $n$ objects
+
+ Eps: neighborhood radius parameter
+
+ MinPts: neighborhood density parameter
+
+Output: a set of clusters
+
+ 1. Mark all objects as unvisited;
+
+ 2. Randomly select an unvisited object $p$ and mark it as visited;
+
+ 3. If $p$ has at least MinPts objects in its Eps-neighborhood:
+
+ Create a new cluster $C$ and add $p$ to $C$ ;
+
+ Let $N$ be the set of objects in the Eps-neighborhood of $p$ ;
+
+ For each point ${p}^{ * }$ in $N$ :
+
+ Mark ${p}^{ * }$ as visited;
+
+ If ${p}^{ * }$ has at least MinPts objects in its Eps-neighborhood, add them to $N$ ;
+
+ If ${p}^{ * }$ is not a member of any cluster, add ${p}^{ * }$ to $C$ ;
+
+ End for;
+
+ 4. Else mark $p$ as noise;
+
+ 5. Repeat from step 2 until there is no object marked unvisited.
+
+---
+
+Fig. 2. The pseudocode for the DBSCAN algorithm
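+The clustering step in Algorithm 1 can be sketched in a few lines of plain Python; here ships are 2-D points and the Euclidean distance stands in for the spatial distance between vessels (the positions and parameter values below are illustrative only):
+
+```python
+import math
+
+def dbscan(points, eps, min_pts):
+    """Minimal DBSCAN: returns one cluster label per point (-1 = noise)."""
+    def neighbors(i):
+        return [j for j in range(len(points))
+                if math.dist(points[i], points[j]) <= eps]
+
+    labels = [None] * len(points)
+    cluster = -1
+    for i in range(len(points)):
+        if labels[i] is not None:
+            continue
+        seeds = neighbors(i)
+        if len(seeds) < min_pts:
+            labels[i] = -1              # provisionally noise
+            continue
+        cluster += 1                    # i is a core point: start a new cluster
+        labels[i] = cluster
+        queue = [j for j in seeds if j != i]
+        while queue:
+            j = queue.pop()
+            if labels[j] == -1:
+                labels[j] = cluster     # border point reached from a core point
+            if labels[j] is not None:
+                continue
+            labels[j] = cluster
+            nj = neighbors(j)
+            if len(nj) >= min_pts:      # j is itself a core point: expand
+                queue.extend(nj)
+    return labels
+
+# two well-separated ship groups plus one distant outlier (illustrative positions)
+ships = [(0, 0), (0.5, 0), (0, 0.5), (10, 10), (10.4, 10), (10, 10.4), (50, 50)]
+print(dbscan(ships, eps=1.0, min_pts=3))  # [0, 0, 0, 1, 1, 1, -1]
+```
+
+Ships labelled -1 correspond to the noise points that this study treats as risk-free and discards.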
+
+## C. TCR-based Multi-ship Collision Risk Measurement Model
+
+Traditional collision risk analysis methods consider the spatiotemporal relationships of encountering ships separately, which can produce contradictory results. To overcome this shortcoming, the TCR collision risk modeling method is employed in this research to analyse and quantify the risk of ship collision. The concept of TCR, first proposed by [20], is described as the likelihood of the event that the OS will not be able to avoid a collision with other ships. The TCR measurement projects the spatiotemporal relationships between ships into the OS's velocity space and assesses the difficulty of avoiding collision accidents. The description of TCR is given in (1) and Figure 3.
+
+$$
+{TCR}\left( t\right) = \frac{{\operatorname{sets}}_{\text{collision }}\left( t\right) }{{\operatorname{sets}}_{\text{reachable }}\left( t\right) } \tag{1}
+$$
+
+where ${\operatorname{sets}}_{\text{collision }}\left( t\right)$ are the sets of velocities that lead to collisions at time $t$ , and ${\operatorname{sets}}_{\text{reachable }}\left( t\right)$ are the OS's reachable velocity sets before collision at time $t$ .
+
+
+
+Fig. 3. The description of TCR
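+Equation (1) can be approximated by sampling the OS's reachable velocity set and counting the samples that would bring the OS within a safety radius of a constant-velocity target ship. The geometry, safety radius, and velocity grid below are illustrative assumptions rather than the paper's full NLVO construction:
+
+```python
+import math
+
+def min_distance(p_os, v_os, p_ts, v_ts, horizon=600.0):
+    """Minimum OS-TS distance over [0, horizon] for constant velocities."""
+    rx, ry = p_ts[0] - p_os[0], p_ts[1] - p_os[1]
+    vx, vy = v_ts[0] - v_os[0], v_ts[1] - v_os[1]
+    v2 = vx * vx + vy * vy
+    t = 0.0 if v2 == 0 else max(0.0, min(horizon, -(rx * vx + ry * vy) / v2))
+    return math.hypot(rx + vx * t, ry + vy * t)
+
+def tcr(p_os, p_ts, v_ts, v_max=10.0, safe_dist=100.0, n=60):
+    """Fraction of the OS's reachable velocity disk that leads to a near-collision."""
+    reachable = collide = 0
+    for i in range(-n, n + 1):
+        for j in range(-n, n + 1):
+            v = (v_max * i / n, v_max * j / n)
+            if math.hypot(v[0], v[1]) > v_max:
+                continue                  # outside the reachable velocity disk
+            reachable += 1
+            if min_distance(p_os, v, p_ts, v_ts) < safe_dist:
+                collide += 1
+    return collide / reachable
+
+# target ship 1 km east of the OS (positions in metres, speeds in m/s)
+risk_head_on = tcr((0.0, 0.0), (1000.0, 0.0), (-5.0, 0.0))  # TS closing head-on
+risk_passing = tcr((0.0, 0.0), (1000.0, 0.0), (0.0, 5.0))   # TS crossing away
+print(risk_head_on, risk_passing)
+```
+
+The head-on target leaves a larger share of dangerous velocities than the crossing one, matching the intuition that TCR grows as avoidance becomes harder.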
+
+## D. Shapley Value Method in Cooperative Games
+
+Cooperative games involve competition between different groups that require both coalition and cooperation. Cooperative game theory is used to ascertain how to distribute the amounts produced by cooperation, which can be used to measure the contribution of an individual to the group [26]. The Shapley value method, introduced by Shapley and Shubik in 1953 [27], plays a dominant role in cooperative game theory. It allocates cooperative amounts by estimating the contribution of each player. The formula of the Shapley value method is shown in (2):
+
+$$
+S{V}_{i}\left\lbrack A\right\rbrack = \mathop{\sum }\limits_{\substack{{C \subseteq N} \\ {i \in C} }}\frac{\left( {c - 1}\right) !\left( {n - c}\right) !}{n!}\left\lbrack {A\left( C\right) - A\left( {C-\{ i\} }\right) }\right\rbrack \tag{2}
+$$
+
+where $i$ is a player in the game, $C$ signifies a coalition containing player $i$ , $c$ represents the total number of players in the coalition $C$ , $N$ denotes the group formed by all vessels, $n$ denotes the number of players in group $N$ , $A\left( C\right)$ refers to the amount generated by the coalition $C$ , and $A\left( {C-\{ i\} }\right)$ refers to the amount generated by the coalition before player $i$ joins. $S{V}_{i}\left\lbrack A\right\rbrack$ represents the Shapley value of player $i$ .
+
+The Shapley value method was first applied in the maritime field to assess the contribution of ships to the global collision risk [26]. In this study, the Shapley value method is also employed to identify the contribution of each ship and cluster to the regional collision risk. With this indicator, the measurement of regional collision risk from a macroscopic viewpoint can be obtained.
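+Equation (2) can be implemented directly as an exhaustive sum over coalitions. In the sketch below the characteristic function A is a toy stand-in for a cluster-level risk measure (an additive game, so each ship's Shapley value recovers its own contribution); the ship names and numbers are purely illustrative:
+
+```python
+from itertools import combinations
+from math import factorial
+
+def shapley(players, A):
+    """Shapley value of each player for characteristic function A, as in eq. (2)."""
+    n = len(players)
+    values = {}
+    for i in players:
+        sv = 0.0
+        others = [p for p in players if p != i]
+        for c in range(n):                      # coalitions S not containing i
+            for S in combinations(others, c):
+                # with C = S u {i}, the weight is (|C|-1)!(n-|C|)!/n!
+                weight = factorial(c) * factorial(n - c - 1) / factorial(n)
+                sv += weight * (A(set(S) | {i}) - A(set(S)))
+        values[i] = sv
+    return values
+
+# toy risk measure: each ship contributes a fixed amount (additive game)
+risk = {"ship1": 0.4, "ship2": 0.1, "ship3": 0.5}
+A = lambda coalition: sum(risk[s] for s in coalition)
+print(shapley(list(risk), A))
+```
+
+By the efficiency property, the Shapley values always sum to the value of the grand coalition, which is what lets the cluster-level contributions be aggregated into a regional risk.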
+
+### III. THE CONSTRUCTION OF THE FRAMEWORK
+
+## A. Analysing the Risk of Ship Collision in Multi-ship Encountering
+
+The TCR method can detect collision candidate ships and provide a measurement of collision risk for a single ship navigating in different sea areas. Given these advantages, we adopt the TCR-based collision risk modelling method to analyse the collision risk of ships in this paper.
+
+The VO method collects the velocity sets that could lead to collisions between the OS and the TSs, which is essential for the TCR. Suppose ships $A$ and $B$ navigate in a waterway. Their motion status can be denoted as $A\{P_A(T), V_A(T), L_A\}$ and $B\{P_B(T), V_B(T), L_B\}$, where $P$ is a ship's position at time $T$, $V$ its velocity, and $L$ its length. Using the VO method, the spatio-temporal correlations between the two ships are transformed into ship $A$'s velocity space. The collision condition is given in (3).
+
+$$
+{P}_{A}\left( {t}_{c}\right) \in {P}_{B}\left( {t}_{c}\right) \oplus \text{Conf}P \tag{3}
+$$
+
+$$
+P = P\left( {t}_{0}\right) + {v}^{ * }\left( {t - {t}_{0}}\right)
+$$
+
+where ${P}_{A}\left( {t}_{c}\right)$ and ${P}_{B}\left( {t}_{c}\right)$ are the positions of ship $A$ and ship $B$ at the collision time ${t}_{c}$, and $P$ is a ship's position at a specified time $t$. $\text{Conf}P$ is the set of all possible positions of ship $A$ around ship $B$ when a collision happens, and $\oplus$ denotes the Minkowski sum.
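Condition (3) can be read numerically by propagating both ships along candidate velocities and testing whether ship $A$'s position enters ship $B$'s conflict region at some sampled time. The sketch below is only illustrative: it assumes, unlike the paper's QSD, a circular $\text{Conf}P$ of radius `conf_radius`, plus a discretized time grid.

```python
import numpy as np

def vo_velocities(pA, pB, vB, conf_radius, t_grid, v_candidates):
    """Collect OS (ship A) candidate velocities satisfying a discretized
    form of condition (3): with P = P(t0) + v*(t - t0) for both ships
    (t0 = 0 here), ship A's position enters ship B's conflict region,
    approximated as a disc of radius conf_radius (an assumption)."""
    pA, pB, vB = (np.asarray(x, dtype=float) for x in (pA, pB, vB))
    vo = []
    for vA in v_candidates:
        vA = np.asarray(vA, dtype=float)
        for t in t_grid:
            # distance between the two propagated positions at time t
            if np.linalg.norm((pA + vA * t) - (pB + vB * t)) <= conf_radius:
                vo.append(tuple(vA))
                break
    return vo
```

For a stationary target ship dead ahead, only the candidate velocity pointing at it is collected, matching the intuition behind the velocity obstacle.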
+
+In this research, we utilize the NLVO method to obtain the VOs in TCR. The NLVO method can be expressed in (4):
+
+$$
+{NLVO}_{A \mid \text{ship}_{j,t_i}} = \bigcup_{t_i = t_0}^{\infty} \left\lbrack \frac{P_{\text{ship}_j}(t_i) - P_A(t_0)}{t_i - t_0} \oplus \frac{\text{Conf}{P}_{\text{ship}_j}}{t_i - t_0} \right\rbrack \tag{4}
+$$
+
+$$
+{NLVO}_{A \mid \text{allships},t_i} = \bigcup_{j=1}^{n} {NLVO}_{A \mid \text{ship}_{j,t_i}}
+$$
+
+where ${P}_{\text{ship}_j}\left( {t}_{i}\right) - {P}_{A}\left( {t}_{0}\right)$ is the displacement between ship $j$ at time ${t}_{i}$ and the OS at time ${t}_{0}$. ${NLVO}_{A \mid \text{ship}_{j,t_i}}$ denotes the OS's velocity set induced by ship $j$, and ${NLVO}_{A \mid \text{allships},t_i}$ denotes the OS's velocity set induced by all target ships via Boolean union. To take full account of the influences of ship manoeuvrability, velocity, and heading, we employ the QSD as the criterion for $\text{Conf}P$. A detailed description of the QSD can be found in [28, 29].
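A discretized reading of (4): a candidate OS velocity is an obstacle velocity if, for some target ship $j$ and sample time $t_i$, it lies within $\text{Conf}P/(t_i - t_0)$ of $(P_{\text{ship}_j}(t_i) - P_A(t_0))/(t_i - t_0)$. The sketch below again substitutes a disc of radius `conf_radius` for the QSD and takes target positions from sampled trajectories; both are simplifying assumptions for illustration.

```python
import numpy as np

def nlvo_union(pA, t0, target_trajs, conf_radius, v_candidates):
    """Discretized union in Eq. (4) over all target ships.

    target_trajs : one list of (t_i, position) samples per target ship
    Returns the set of candidate OS velocities inside the NLVO.
    """
    pA = np.asarray(pA, dtype=float)
    vo = set()
    for v in v_candidates:
        v = np.asarray(v, dtype=float)
        for traj in target_trajs:
            hit = False
            for t_i, p_j in traj:
                dt = t_i - t0
                if dt <= 0:
                    continue  # only future sample times are relevant
                center = (np.asarray(p_j, dtype=float) - pA) / dt
                if np.linalg.norm(v - center) <= conf_radius / dt:
                    vo.add(tuple(v))
                    hit = True
                    break
            if hit:
                break
    return vo
```

Note how the conflict region shrinks with the lead time $t_i - t_0$: distant-future collisions constrain only a narrow cone of velocities.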
+
+To quantify the collision risk of individual ships, a new collision risk indicator, ${TC}{R}_{QSD}$, defined as the TCR measured with the OS's QSD, is introduced in this study. The indicator is calculated as shown in (5):
+
+$$
+TCR_{QSD} = \frac{VO_{QSD}}{VO_{\text{region}}} \tag{5}
+$$
+
+where $VO_{QSD}$ is the area of the intersection between the VOs induced by the QSDs of the TSs and the velocity region of the OS, and $VO_{\text{region}}$ is the area of the ship's velocity region, representing all possible velocities the ship can achieve. To simplify the calculation, we assume that only course changes and speed reductions are available as collision avoidance operations when constructing the ship's velocity region. With this indicator, the collision risk of a single ship can be quantified.
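On a discretized velocity region, the area ratio in (5) becomes a simple counting fraction. In the sketch below, the predicate `in_velocity_obstacle` stands in for the QSD-induced VO membership test, which is an assumption of this illustration rather than the paper's exact computation.

```python
def tcr_qsd(reachable_velocities, in_velocity_obstacle):
    """Estimate Eq. (5) on a sampled velocity region: the fraction of
    the OS's reachable velocities that fall inside the velocity
    obstacles induced by the target ships.

    reachable_velocities : iterable of velocity samples (e.g. tuples)
    in_velocity_obstacle : predicate v -> bool (VO membership test)
    """
    samples = list(reachable_velocities)
    hits = sum(1 for v in samples if in_velocity_obstacle(v))
    return hits / len(samples)
```

The result lies in $[0, 1]$: 0 when no reachable velocity leads to a collision (as for ship "414XXX660" in the case study) and values near 1 when almost every velocity does.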
+
+## B. Identifying the Contribution of Each Ship to the Regional Collision Risk in Multi-ship Encountering
+
+As described in Section II, the Shapley value method can measure the contribution of each player to the entire group. Building on this idea, the Shapley value method is employed in this paper to estimate the contribution of each ship and each cluster to the regional collision risk.
+
+At sea, a multi-ship encounter in a region can be regarded as a cooperative game: each ship in the encounter situation is a game player, and the numerical collision risk of a ship corresponds to the amount produced by that player. The ships are arranged by permutation and combination to produce the possible groups. The collision risk amount of each ship group, $A(C)$, is obtained first; it is taken as the sum of the collision risks of the ships in the multi-ship encounter group. Likewise, $A(C \setminus \{i\})$ is obtained by calculating the collision risk amount of group $C$ without the participation of ship $i$. Each ship's Shapley value can then be measured with (2). Combining the collision risk values of individual ships, the collision risk of each cluster is obtained with (6), and the regional collision risk from a macroscopic perspective is quantified with (7).
+
+$$
+{CCR}_{j} = \sum_{i=1}^{n} {TCR}_{QSD,i} \cdot {S}_{i} \tag{6}
+$$
+
+$$
+M\text{-}RCR = \sum_{j=1}^{m} {CCR}_{j} \cdot {S}_{j} \tag{7}
+$$
+
+where ${TCR}_{QSD,i}$ is the numerical collision risk of single ship $i$, ${CCR}_{j}$ denotes the collision risk of cluster $j$, and $M\text{-}RCR$ refers to the macro-regional collision risk. ${S}_{i}$ and ${S}_{j}$ denote the Shapley values of ship $i$ and cluster $j$, respectively; $n$ represents the number of ships in a cluster, and $m$ the number of clusters in the research region.
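Since the Shapley allocation is additive, the aggregations in (6) and (7) can be realized as Shapley-weighted sums, which is the interpretive assumption made in this sketch:

```python
def cluster_collision_risk(tcr, shapley):
    """Eq. (6): cluster risk as the Shapley-weighted aggregate of the
    member ships' TCR values. `tcr` and `shapley` are dicts keyed by
    the ship ids of one cluster."""
    return sum(tcr[i] * shapley[i] for i in tcr)

def macro_regional_risk(ccr, cluster_shapley):
    """Eq. (7): M-RCR as the Shapley-weighted aggregate over clusters.
    `ccr` and `cluster_shapley` are dicts keyed by cluster id."""
    return sum(ccr[j] * cluster_shapley[j] for j in ccr)
```

Table III's M-RCR values (0.2739 and 0.6405) are of exactly this form: per-cluster risks weighted by the clusters' Shapley contributions.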
+
+## IV. CASE STUDY
+
+To validate the feasibility of the proposed framework, this section presents a case study of the Pearl River Estuary in China for regional multi-ship collision risk analysis. The research data and detailed experimental results are described in the following subsections.
+
+## A. Description of the AIS Data and Parameter Setting
+
+In this study, we used one day of AIS data for the Pearl River Estuary, provided by the Wuhan University of Technology. AIS records of moored and berthed vessels were removed to avoid the influence of abnormal data on the case study, and ship type was not considered in this research. The MinPts and Eps parameters of the DBSCAN algorithm are set to 2 and 6 nm, respectively. The detailed parameter settings are listed in Table I.
+
+TABLE I. PARAMETER SETTINGS
+
+| Variable | Setting |
+| --- | --- |
+| Time | 08:00 15th - 08:00 16th May 2020 |
+| Data boundary | Lat: 21.7410-22.1289°N; Lon: 113.2370-113.7677°E |
+| Eps | 6 nm |
+| MinPts | 2 |
+| TCR time | 30 min |
+| Ship length (if data not available) | 200 m |
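The clustering step with the Table I parameters (Eps = 6 nm, MinPts = 2) can be sketched with a minimal DBSCAN. The toy positions below are planar coordinates in nautical miles with a Euclidean metric; real AIS data would require a great-circle distance on latitude and longitude.

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN returning one cluster label per point (-1 = noise).
    A point counts itself among its neighbours, as is conventional."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        seeds = list(np.flatnonzero(dist[i] <= eps))
        if len(seeds) < min_pts:
            continue  # provisionally noise; may become a border point later
        labels[i] = cluster
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster
            if not visited[j]:
                visited[j] = True
                nbrs = np.flatnonzero(dist[j] <= eps)
                if len(nbrs) >= min_pts:  # j is a core point: expand from it
                    seeds.extend(nbrs)
        cluster += 1
    return labels

# Table I parameters: Eps = 6 nm, MinPts = 2; illustrative positions in nm.
positions = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0),   # one encounter group
             (20.0, 20.0), (21.0, 20.5),           # a second group
             (50.0, 0.0)]                          # isolated ship -> noise
labels = dbscan(positions, eps=6.0, min_pts=2)
```

With MinPts = 2, any ship with at least one other ship within 6 nm joins a cluster, while isolated ships are labelled as DBSCAN "noise", mirroring the noise points recognized in Figures 4 and 5.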
+
+## B. Results of The Experiments
+
+In this section, we randomly selected two sets of ship AIS data at different moments to validate the effectiveness of the proposed TCR-based multi-ship collision risk analysis framework. The ships in the designated region are first divided into clusters using the DBSCAN algorithm; the QSD-based TCR indicator, which represents the collision risk of single ships, and the M-RCR are then obtained with the proposed framework. Figures 4 and 5 visualize the ship clustering and the TCR of the randomly selected ships at the two moments, and the detailed experimental results for these ships are given in Tables II and III.
+
+
+
+Fig. 4. Visualization of ship clustering and the ships' TCR in different groups at 13:25:00
+
+
+
+Fig. 5. Visualization of ship clustering and the ships' TCR in different groups at 21:30:00
+
+Figures 4 and 5 show that more than 10 ships are navigating in the research region at both time points. At 13:25:00, there were 15 ships in the region, which the DBSCAN method categorized into three groups. To demonstrate the performance of the proposed collision risk analysis framework, the TCR is visualized for three ships (414XXX660, 412XXX530, 413XXX910). The two ships in Group 1 (green), which contains ship "414XXX660", did not form an encounter situation, since their trajectories observed from the AIS data are divergent; there is therefore no collision risk between them, and the QSD-TCR of ship "414XXX660" is 0. Ship "412XXX530" in Group 2 (red) formed an encounter situation with one ship of its group; its QSD-TCR of 0.2625 indicates a low collision risk. In contrast, ship "413XXX910" in Group 3 (purple) formed multiple encounter situations with the remaining ships in its group and thus has a higher collision risk (QSD-TCR: 0.4295) than the other two ships. In addition, "noise" points are successfully recognized by the DBSCAN algorithm.
+
+At 21:30:00, the experimental results are likewise obtained with the proposed framework. The ships in the region are classified into two clusters by the DBSCAN method, each containing five ships. The TCR is visualized for three ships (413XXX050, 413XXX960, 413XXX020). Ships "413XXX050" and "413XXX960" in Group 1 (green) formed encounter situations with several ships of the group, showing a high collision risk; their QSD-TCR values are 0.4788 and 0.7944, respectively. According to the proposed framework, these two ships should take immediate collision avoidance measures to mitigate the risk. Ship "413XXX020" in Group 2 (red) also formed encounter situations with several ships, but its QSD-TCR value is small because the distances to the ships it encounters are relatively large. Nevertheless, a collision could occur if ship "413XXX020" continues in its current state of motion, so it should take collision avoidance action as early as possible. Each ship's Shapley value can also be calculated with the proposed framework; the detailed results of the case study are shown in Table II.
+
+Finally, with the Shapley values indicating the contribution of each ship and cluster, the numerical M-RCR of the selected region at 13:25 and 21:30 can be measured, as shown in Table III. The M-RCR at 21:30 is higher than that at 13:25, so the VTSO should devote more effort to the supervision and management of the region at that moment; this helps them accurately grasp the trend of the collision risk from a macroscopic perspective. In conclusion, the proposed collision risk analysis framework can detect high-risk ships and quantify the temporal and spatial distribution of collision risk in designated regions, allowing the VTSO to strengthen supervision of the maritime traffic situation and ensure the safety of ship navigation.
+
+TABLE II. COLLISION RISK VALUES FOR SINGLE SHIPS UTILIZING THE PROPOSED FRAMEWORK
+
+| Time | MMSI | $TCR_{QSD}$ | Shapley value | Group |
+| --- | --- | --- | --- | --- |
+| 13:25:00 | 413XXX910 | 0.4295 | 0.2803 | 3 |
+| 13:25:00 | 412XXX530 | 0.2625 | 0.1948 | 2 |
+| 13:25:00 | 414XXX660 | 0 | 0 | 1 |
+| 21:30:00 | 413XXX050 | 0.4788 | 0.2634 | 1 |
+| 21:30:00 | 413XXX960 | 0.7944 | 0.3818 | 1 |
+| 21:30:00 | 413XXX020 | 0.1871 | 0.1524 | 2 |
+
+TABLE III. RESULTS OF MACRO-REGIONAL COLLISION RISK UTILIZING THE PROPOSED FRAMEWORK
+
+| Time | M-RCR |
+| --- | --- |
+| 13:25:00 | 0.2739 |
+| 21:30:00 | 0.6405 |
+
+## V. DISCUSSION
+
+In the previous sections, multi-ship encounter situations were identified, and both the collision risk of single ships and the regional collision risk were analysed and quantified. To further validate the effectiveness of the proposed collision risk analysis framework, this section presents two comparative experiments against traditional collision analysis methods [25, 30]: (1) a comparison between the proposed framework and the CPA-based method [25], and (2) a comparison between the proposed framework and the complexity-measurement-based method [30]. Each comparison has two parts: the collision risk and complexity of single ships, and the regional collision risk and overall complexity of the selected region. The results are shown in Tables IV and V.
+
+TABLE IV. RESULTS OF COLLISION RISK ANALYSIS AND COMPLEXITY OF SHIP UTILIZING THE METHODS [25,30]
+
+| Time | MMSI | $TCR_{QSD}$ | CRI | Complexity | Group |
+| --- | --- | --- | --- | --- | --- |
+| 13:25:00 | 413XXX910 | 0.4295 | 0.3699 | 4.3364 | 3 |
+| 13:25:00 | 412XXX530 | 0.2625 | 0.2778 | 0.7242 | 2 |
+| 13:25:00 | 413XXX660 | 0 | 0 | $<0.0001$ | 1 |
+| 21:30:00 | 413XXX050 | 0.4788 | 0.4290 | 8.2786 | 1 |
+| 21:30:00 | 413XXX960 | 0.7944 | 0.5251 | 6.8122 | 1 |
+| 21:30:00 | 413XXX020 | 0.1871 | 0.3668 | 1.8870 | 2 |
+
+TABLE V. RESULTS OF REGIONAL COLLISION RISK (RCR) AND COMPLEXITY IN REGION UTILIZING THE METHODS [25,30]
+
+| Time | M-RCR | RCR | Complexity |
+| --- | --- | --- | --- |
+| 13:25:00 | 0.2739 | 0.3472 | 0.3012 |
+| 21:30:00 | 0.6405 | 0.5654 | 6.2923 |
+
+Tables IV and V show that, although the numerical values of single-ship collision risk (QSD-TCR, CRI) derived from the proposed framework and the CPA-based method differ for the same scenario, the final results indicating the high-risk ships are consistent with each other. Similarly, the regional collision risk values (M-RCR, RCR) from the two methods differ, but the high-risk region identified by the CPA-based method agrees with that of the proposed framework. These observations verify the ability of the proposed framework to identify ships with high collision risk from a microscopic perspective and to obtain an overall collision risk for a region from a macroscopic perspective. In addition, the traffic complexity model, first proposed by [30], is used to assess the complexity of maritime traffic situations. According to [31], traffic complexity is correlated with the risk of ship collision: in general, the higher the traffic complexity, the greater the collision risk, so complexity can reflect the magnitude of the instantaneous collision risk. We therefore introduce this indicator as a further check on the proposed framework. As Tables IV and V show, the complexity-measurement-based method also identifies the high-risk ships and regions, and its results are consistent with the proposed framework. In conclusion, the two comparative experiments further validate the effectiveness and feasibility of the proposed framework for analysing collision risk under multi-ship encounter situations in a region.
+
+## VI. CONCLUSION
+
+In this paper, a novel regional multi-ship collision risk analysis framework based on the VO method is proposed. The collision risk is described as the proportion of the velocity obstacle sets generated by the TSs within the OS's velocity region. A risk indicator, $TCR_{QSD}$, is introduced to quantify the collision risk of single ships, and by combining it with the Shapley value method from cooperative games, the macro-regional collision risk is obtained.
+
+A case study on the Pearl River Estuary was conducted to validate the feasibility of the proposed framework. The results indicate that the framework can accurately identify high-risk ships and regions. Compared with existing collision risk analysis methods, it can analyse the collision risk of multi-ship encounter situations in a region from both micro and macro perspectives. The contribution of this work is to combine spatial clustering techniques with the VO method and apply them to the monitoring and management of collision risk in waters under the jurisdiction of maritime authorities. On this basis, maritime surveillance operators can gain a better understanding of regional collision risk in multi-ship encounters, further enhancing their situational awareness and improving the efficiency of maritime traffic management in the face of relatively high maritime traffic volumes or complexity. However, the proposed framework has some shortcomings. One is that the influence of ship heading angle is not considered in the clustering, which may affect the accuracy of multi-ship encounter recognition. In addition, other risk-influencing factors (e.g., weather conditions and ship type) are not integrated into the collision risk model. Future work could address these limitations and apply the proposed framework to predict regional collision risk.
+
+## ACKNOWLEDGMENT
+
+This work is financially supported by the National Natural Science Foundation of China under grants 52101402 and 52271367.
+
+## REFERENCES
+
+[1] P. F. Chen, Y. M. Huang, J. M. Mou, and P. van Gelder, "Ship collision candidate detection method: A velocity obstacle approach," Ocean Eng, vol. 170, pp. 186-198, DEC 15. 2018.
+
+[2] J. M. Mou, C. van der Tak, and H. Ligteringen, "Study on collision avoidance in busy waterways by using AIS data," Ocean Eng, vol. 37, no. 5-6, pp. 483-490, APR. 2010.
+
+[3] M. Y. Cai, J. F. Zhang, D. Zhang, X. L. Yuan, and C. G. Soares, "Collision risk analysis on ferry ships in Jiangsu Section of the Yangtze River based on AIS data," Reliab Eng Syst Safe, vol. 215, NOV. 2021.
+
+[4] A. K. Debnath, H. C. Chin, and M. M. Haque, "Modelling Port Water Collision Risk Using Traffic Conflicts," J Navigation, vol. 64, no. 4, pp. 645-655, OCT. 2011.
+
+[5] R. Zhen, M. Riveiro, and Y. X. Jin, "A novel analytic framework of real-time multi-vessel collision risk assessment for maritime traffic surveillance," Ocean Eng, vol. 145, pp. 492-501, NOV 15. 2017.
+
+[6] Q. Yu, A. P. Teixeira, K. Liu, and C. Guedes Soares, "Framework and application of multi-criteria ship collision risk assessment," Ocean Eng, vol. 250, pp. 111006, 2022/04/15/. 2022.
+
+[7] W. B. Zhang, F. Goerlandt, J. Montewka, and P. Kujala, "A method for detecting possible near miss ship collisions from AIS data," Ocean Eng, vol. 107, pp. 60-69, OCT 1. 2015.
+
+[8] W. B. Zhang, C. Kopca, J. J. Tang, D. F. Ma, and Y. H. Wang, "A Systematic Approach for Collision Risk Analysis based on AIS Data," J Navigation, vol. 70, no. 5, pp. 1117-1132, SEP. 2017.
+
+[9] R. W. Liu, X. J. Huo, M. H. Liang, and K. Wang, "Ship collision risk analysis: Modeling, visualization and prediction," Ocean Eng, vol. 266, DEC 15.2022.
+
+[10] Y. Fujii, and K. Tanaka, "Traffic Capacity," J Navigation, vol. 24, no. 4, pp. 543-552. 1971.
+
+[11] P. T. Pedersen, "Review and application of ship collision and grounding analysis procedures," Mar. Struct, vol. 23, no. 3, pp. 241- 262, JUL. 2010.
+
+[12] R. Szlapczynski, and J. Szlapczynska, "An analysis of domain-based ship collision risk parameters," Ocean Eng, vol. 126, pp. 47-56, NOV 1. 2016.
+
+[13] Y. Y. Wang, and H. C. Chin, "An Empirically-Calibrated Ship Domain as a Safety Criterion for Navigation in Confined Waters," J Navigation, vol. 69, no. 2, pp. 257-276, MAR. 2016.
+
+[14] N. Wang, "An Intelligent Spatial Collision Risk Based on the Quaternion Ship Domain," J Navigation, vol. 63, no. 4, pp. 733-749, OCT. 2010.
+
+[15] R. Szlapczynski, and J. Szlapczynska, "Review of ship safety domains: Models and applications," Ocean Eng, vol. 145, pp. 277-289, NOV 15. 2017.
+
+[16] J. Liu, G. Y. Shi, and K. G. Zhu, "A novel ship collision risk evaluation algorithm based on the maximum interval of two ship domains and the violation degree of two ship domains," Ocean Eng, vol. 255, JUL 1. 2022.
+
+[17] W. Li, L. Zhong, Y. Liu, and G. Shi, "Ship Intrusion Collision Risk Model Based on a Dynamic Elliptical Domain," J. Mar. Sci. Eng, vol. 11, 2023.
+
+[18] P. F. Chen, Y. M. Huang, E. Papadimitriou, J. M. Mou, and P. van Gelder, "An improved time discretized non-linear velocity obstacle method for multi-ship encounter detection," Ocean Eng, vol. 196, JAN 15. 2020.
+
+[19] Y. M. Huang, L. Y. Chen, and P. van Gelder, "Generalized velocity obstacle algorithm for preventing ship collisions at sea," Ocean Eng, vol. 173, pp. 142-156, FEB 1. 2019.
+
+[20] Y. M. Huang, and P. van Gelder, "Time-Varying Risk Measurement for Ship Collision Prevention," Risk Anal, vol. 40, no. 1, pp. 24-42, JAN. 2020.
+
+[21] P. F. Chen, M. X. Li, and J. M. Mou, "A Velocity Obstacle-Based Real-Time Regional Ship Collision Risk Analysis Method," J. Mar. Sci. Eng, vol. 9, no. 4, APR. 2021.
+
+[22] M. X. Li, J. M. Mou, L. Y. Chen, Y. X. He, and Y. M. Huang, "A rule-aware time-varying conflict risk measure for MASS considering maritime practice," Reliab Eng Syst Safe, vol. 215, NOV. 2021.
+
+[23] W. B. Zhang, X. Y. Feng, Y. Qi, F. Shu, Y. J. Zhang, and Y. H. Wang, "Towards a Model of Regional Vessel Near-miss Collision Risk Assessment for Open Waters based on AIS Data," J Navigation, vol. 72, no. 6, pp. 1449-1468, NOV. 2019.
+
+[24] R. Zhen, Z. Q. Shi, Z. P. Shao, and J. L. Liu, "A novel regional collision risk assessment method considering aggregation density under multi-ship encounter situations," J Navigation, vol. 75, no. 1, pp. 76-94, JAN. 2022.
+
+[25] Z. H. Liu, Z. L. Wu, and Z. Y. Zheng, "A novel framework for regional collision risk identification based on AIS data," Appl. Ocean Res, vol. 89, pp. 261-272, AUG. 2019.
+
+[26] Z. H. Liu, Z. L. Wu, and Z. Y. Zheng, "A cooperative game approach for assessing the collision risk in multi-vessel encountering," Ocean Eng, vol. 187, SEP 1. 2019.
+
+[27] L. S. Shapley, and M. Shubik, A method for evaluating the distribution of power in a committee system: The Shapley Value, 1988.
+
+[28] N. Wang, "A Generalized Ellipsoidal Basis Function Based Online Self-constructing Fuzzy Neural Network," NPL, vol. 34, no. 1, pp. 13- 37, 2011 AUG. 2011.
+
+[29] N. Wang, X. Meng, Q. Xu, and Z. Wang, "A Unified Analytical Framework for Ship Domains," J Navigation, vol. 62, no. 4, pp. 643- 655, 2009 OCT. 2009.
+
+[30] Y. Q. Wen, Y. M. Huang, C. H. Zhou, J. L. Yang, C. S. Xiao, and X. C. Wu, "Modelling of marine traffic flow complexity," Ocean Eng, vol. 104, pp. 500-510, AUG 1. 2015.
+
+[31] Z. H. Liu, Z. L. Wu, Z. Y. Zheng, and X. D. Yu, "A Molecular Dynamics Approach to Identify the Marine Traffic Complexity in a Waterway," J. Mar. Sci. Eng, vol. 10, no. 11, NOV. 2022.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/au4HFflf6W/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/au4HFflf6W/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..23378059d36d130d93a06b18d00644a27c97516f
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/au4HFflf6W/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,349 @@
+§ REGIONAL MULTI-SHIP COLLISION RISK ANALYSIS BASED ON VELOCITY OBSTACLE METHOD: A CASE STUDY ON THE PEARL RIVER ESTUARY
+
+Qi Liu
+
+School of Navigation, Wuhan University of Technology, Wuhan, China
+
+State Key Laboratory of Maritime Technology and Safety, Wuhan, China
+
+lq754001x@whut.edu.cn
+
+Pengfei Chen
+
+School of Navigation, Wuhan University of Technology, Wuhan, China
+
+State Key Laboratory of Maritime Technology and Safety, Wuhan, China
+
+Chenpf@whut.edu.cn
+
+Junmin Mou
+
+School of Navigation, Wuhan University of Technology, Wuhan, China
+
+State Key Laboratory of Maritime Technology and Safety, Wuhan, China
+
+Moujm@whut.edu.cn
+
+Linying Chen
+
+School of Navigation, Wuhan University of Technology, Wuhan, China
+
+State Key Laboratory of Maritime Technology and Safety, Wuhan, China
+
+LinyingChen@whut.edu.cn
+
+Abstract-Analysis of regional multi-ship collision risk is essential for enhancing the efficiency of traffic management in maritime transportation. However, traditional collision risk analysis methods only assess the risk of collision from the viewpoint of ship-pair encounters. In this research, a novel framework for analysing regional multi-ship collision risk based on the Velocity Obstacle (VO) method is proposed using AIS (Automatic Identification System) data. First, the ships in a specific sea area are clustered with Density-Based Spatial Clustering of Applications with Noise to identify multi-ship encounter situations, and a new collision risk indicator based on the VO-based time-varying collision risk measurement method is proposed to calculate the collision risk of single ships. Second, the macro-regional collision risk is quantified by calculating the contribution of each ship and each cluster with the Shapley value from cooperative games. Finally, to verify the effectiveness of the proposed framework, we carried out a case study of the Pearl River Estuary in China using historical AIS data. The results show that the proposed framework for regional multi-ship collision risk analysis can help maritime surveillance operators identify high-risk ships and gain a better understanding of regional collision risk from both microscopic and macroscopic perspectives.
+
+Keywords-multi-ship encounter situation, velocity obstacle, time-varying risk, Shapley value, maritime traffic safety
+
+§ I. INTRODUCTION
+
+Maritime transportation is one of the most important modes of transportation for international trade today. With the trend towards economic globalization, maritime transportation has continued to grow over the past decades. However, the increase in maritime transportation volume has raised maritime traffic density and complexity, thereby increasing the occurrence rate of maritime accidents, in particular ship collisions [1]. Faced with relatively high maritime traffic volumes or complexity at sea, maritime surveillance operators often monitor in a subjective and ad hoc manner and lack an overall perception of regional collision risk, which places great pressure on maritime surveillance. To analyse the risk of ship collision from multiple perspectives and enhance the regulatory efficiency of Vessel Traffic Service Operators (VTSO), it is imperative to put forward a novel framework for analysing regional multi-ship collision risk.
+
+The analysis of ship collision risk is a research hotspot in the maritime field and plays an important role in reducing the number of collision accidents and enhancing the efficiency and level of maritime traffic monitoring. To assess the risk of ship collision quantitatively from multiple perspectives, scholars have conducted extensive research and proposed a variety of methods, which can be broadly categorized into three groups: (1) synthetic indicator-based approaches; (2) safety domain-based approaches; and (3) velocity obstacle-based approaches.
+
+Synthetic indicator-based approaches integrate factors that describe the spatial and temporal motion characteristics of encountering ships to compute a Collision Risk Index (CRI) using mathematical functions. The two most prominent factors are Distance to Closest Point of Approach (DCPA) and Time to Closest Point of Approach (TCPA), which have been applied in [2-6]. In addition, Zhang et al. [7], considering several risk-influencing elements, introduced a new risk indicator named the Vessel Collision Risk Operator (VCRO) to measure the level of ship conflict risk; further relevant work can be found in [8]. Building on this research, [9] improved the relative-distance term in the original VCRO and proposed an enhanced vessel conflict ranking operator model, further improving the accuracy of conflict risk measurement.
+
+Safety domain-based approaches usually construct a safety domain around the own ship (OS) in space, treat ships intruding into this domain as posing a collision risk, detect potential collision conflicts, and assess the collision risk in terms of invasions of or overlaps between the safety domains of encountering ships; examples include the ship domain [10] and the collision diameter [11]. The ship domain has received a great deal of attention in recent years, and massive AIS (Automatic Identification System) data together with advances in intelligent technologies have facilitated the development of ship domain models with various shapes, including circular, elliptical [12], and polygonal [13], which have been applied in collision risk analysis. For instance, Wang [14] extended the elliptical ship domain to the Quaternion Ship Domain (QSD) by incorporating the impact of the COLREGS on actual ship encounter situations and used it for the assessment of ship collision risk. Szlapczynski and Szlapczynska [15] developed domain intrusion time and degree indicators to evaluate the collision risk during ship navigation. Liu et al. [16] proposed a collision probability model that introduces the maximum interval and the violation degree of two ship domains to measure the collision risk. Li et al. [17] proposed a collision risk assessment model based on the integration of elliptical and quadratic ship domains, offering a new way to measure collision risk.
+
+The work presented in this study is financially supported by the National Natural Science Foundation of China (Grant Nos. 52101402 and 52271367).
+
+Velocity obstacle-based approaches transform the spatio-temporal correlations between ships into the velocity domain and determine whether a collision risk exists by checking whether the OS's velocity falls into the dangerous velocity space. Recently, the VO has been progressively combined with the ship domain, yielding the non-linear VO [18] and generalized VO [19] algorithms, and VO algorithms have been widely applied in ship collision risk analysis. For instance, Huang and van Gelder [20] first developed the VO-based Time-varying Collision Risk (TCR) measurement method to estimate the collision risk of a single ship in multi-ship encounters. Chen et al. [21], based on the TCR measurement, introduced a real-time regional ship collision risk analysis method for different encounter situations. Li et al. [22] proposed a rule-aware TCR model for real-time collision risk analysis, which integrates the impact of various factors in actual situations.
+
+The above approaches provide a solid foundation for the development of collision risk analysis methods. However, they mainly assess the collision risk from the viewpoint of ship-ship encounters and thus analyse ship collision risk only from a microscopic perspective. With the gradual increase in the number of ships, multi-ship encounters are common at sea, so a novel framework is needed to analyse ship collision risk in regional multi-ship encounters from multiple perspectives. Relevant work exists: Zhang et al. [23] combined density complexity with a multi-vessel collision risk operator to analyse regional vessel collision risk; Zhen et al. [24], considering DCPA, TCPA, ship crossing angle, and the navigational environment, proposed a fuzzy-logic-based collision risk model for regional multi-ship collision risk assessment; and Liu et al. [25] developed a framework for regional collision risk identification using a spatial clustering method. The contribution of this study is a novel regional collision risk analysis framework that combines the TCR-based collision risk measurement with the Shapley value method. The framework can accurately identify high-risk ships and quantify the regional collision risk from both micro and macro perspectives, helping the VTSO accurately grasp the trend of regional collision risk and strengthen the capacity and efficiency of maritime safety surveillance.
+
+The remainder of this paper is organized as follows. Section II introduces the methodology. Section III describes the construction of the framework. Section IV conducts a case study with the proposed framework for regional collision risk analysis. Section V discusses the results and presents comparisons that validate the effectiveness and feasibility of the proposed framework. Finally, Section VI concludes the research.
+
+§ II. METHODOLOGY
+
+§ A. OVERVIEW OF THE STUDY
+
+In this study, the collision risk is defined as the percentage of velocities that might potentially result in a collision accident within the entire velocity set of the OS. This definition considers, from the free-space viewpoint, the motion state that the ship needs to maintain for effective collision avoidance, and it provides a quantitative measurement of the collision risk faced by the OS, which can significantly assist the VTSO in assessing and mitigating potential collision scenarios. Building upon this definition, we propose a novel VO-based framework to analyse regional multi-ship collision risk from both microscopic and macroscopic perspectives, giving the VTSO an overall understanding of regional multi-ship collision risk and improving the efficiency of safety management in jurisdictional waters.
+
+Firstly, the AIS data in the designated region are collected and preprocessed over a specified time interval. Subsequently, the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) method is employed to classify the ships into different clusters. This density-based clustering technique uses the spatial distances between ships to identify regional multi-ship encounter situations, which are critical for effective analysis. Secondly, we use the TCR-based collision risk measurement to quantify the collision risk of individual ships. By combining it with the Shapley value method, the collision risk of each cluster is measured from the contributions of the ships in that cluster, and the macro-regional collision risk is in turn derived from the collision risk and contribution of each cluster. Finally, to validate the effectiveness and feasibility of the proposed framework, we conduct two comparative experiments against existing collision risk approaches. These experiments rigorously compare our framework with traditional approaches and demonstrate its advantages in accuracy and applicability to real sailing scenarios. The proposed research framework is shown in Figure 1.
+
+
+Fig. 1. The proposed research framework
+
+§ B. REGIONAL MULTI-SHIP ENCOUNTER SITUATION RECOGNITION USING DENSITY-BASED CLUSTERING
+
+Density-based spatial clustering approaches are a fundamental category of unsupervised learning algorithms that have recently found widespread use owing to their intuitiveness and speed; representatives include DBSCAN, hierarchical DBSCAN, and Ordering Points To Identify the Clustering Structure (OPTICS). These methods partition data into groups by processing the spatial density distribution against a predetermined threshold [21]. In this research, we use the DBSCAN method to recognize regional multi-ship encounter situations. The algorithm groups similar data into the same cluster according to certain principles and identifies noise data that belong to no cluster. Its implementation requires setting two primary parameters: Eps and MinPts. The pseudocode for the DBSCAN algorithm is given in Figure 2. By employing DBSCAN, the ships in a selected region are classified into multiple clusters, which reduces the burden of collision risk calculation and improves the efficiency of recognizing multi-ship encounter situations. Ships not included in any cluster are treated as noise points; they are spatially distant from other vessels and are considered to pose no collision risk to them. We therefore disregard these ships, which simplifies the calculation of collision risk.
+
+Algorithm 1: The implementation process of the DBSCAN algorithm
+
+Input:
+
+ $D$ : a dataset containing $n$ objects
+
+ Eps: neighborhood radius parameter
+
+ MinPts: neighborhood density parameter
+
+Output: a set of clusters
+
+ 1. Mark all objects as unvisited;
+
+ 2. Randomly select an unvisited object $p$ and mark it as visited;
+
+ 3. If $p$ has at least MinPts objects in its Eps-neighborhood:
+
+ Create a new cluster $C$ and add $p$ to $C$ ;
+
+ Let $N$ be the set of objects in the Eps-neighborhood of $p$ ;
+
+ For each point ${p}^{ * }$ in $N$ :
+
+ Mark ${p}^{ * }$ as visited;
+
+ If ${p}^{ * }$ has at least MinPts objects in its Eps-neighborhood, add them to $N$ ;
+
+ If ${p}^{ * }$ is not a member of any cluster, add ${p}^{ * }$ to $C$ ;
+
+ End for;
+
+ 4. Else mark $p$ as noise;
+
+ 5. Repeat from step 2 until no object is marked unvisited;
+
+Fig. 2. The pseudocode for the DBSCAN algorithm
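
To make Algorithm 1 concrete, here is a minimal pure-Python sketch of DBSCAN over ship positions. The function name `dbscan`, the planar Euclidean distance, and the toy coordinates are our own illustration, not the paper's implementation (which clusters AIS positions with Eps = 6 nm and MinPts = 2):

```python
from math import hypot

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id (-1 = noise), following Algorithm 1.

    `points` is a list of (x, y) positions on a local planar projection;
    `eps` is the neighborhood radius and `min_pts` the density threshold.
    """
    n = len(points)
    labels = [None] * n  # None = unvisited

    def neighbors(i):
        # Eps-neighborhood of point i (includes i itself)
        return [j for j in range(n)
                if hypot(points[i][0] - points[j][0],
                         points[i][1] - points[j][1]) <= eps]

    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1  # noise (may later become a border point)
            continue
        labels[i] = cluster  # i is a core point: start a new cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_seeds = neighbors(j)
            if len(j_seeds) >= min_pts:  # j is core: keep expanding
                queue.extend(j_seeds)
        cluster += 1
    return labels
```

Ships labeled -1 are the spatially isolated noise points that the framework excludes from the subsequent risk calculation.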
+
+§ C. TCR-BASED MULTI-SHIP COLLISION RISK MEASUREMENT MODEL
+
+Traditional collision risk analysis approaches consider the spatial and temporal relationships of encountering ships separately, which can yield contradictory results. To overcome this shortcoming, the TCR collision risk modeling method is employed in this research to analyse and quantify the risk of ship collision. The concept of TCR, first proposed in [20], is the likelihood of the event that the OS will not be able to avoid a collision with other ships. For collision risk measurement, the TCR projects the spatio-temporal relationships between ships into the OS's velocity space and assesses the difficulty of avoiding collision accidents. The TCR is defined in (1) and illustrated in Figure 3.
+
+$$
+\mathrm{TCR}\left( t\right) = \frac{\mathrm{sets}_{\mathrm{collision}}\left( t\right) }{\mathrm{sets}_{\mathrm{reachable}}\left( t\right) } \tag{1}
+$$
+
+where $\mathrm{sets}_{\mathrm{collision}}\left( t\right)$ is the set of velocities that lead to collisions at time $t$ , and $\mathrm{sets}_{\mathrm{reachable}}\left( t\right)$ is the set of the OS's reachable velocities before collision at time $t$ .
+
+
+Fig. 3. The description of TCR
+
+§ D. SHAPLEY VALUE METHOD IN COOPERATIVE GAMES
+
+Cooperative games involve competition between groups whose members must form coalitions and cooperate. Cooperative game theory studies how to distribute the amounts produced by cooperation, and can thus be used to measure the contribution of an individual to the group [26]. The Shapley value method, introduced by Shapley and Shubik in 1953 [27], plays a dominant role in cooperative game theory. It allocates the cooperative amounts by estimating the contribution of each player. The Shapley value is given by (2):
+
+$$
+S{V}_{i}\left\lbrack A\right\rbrack = \mathop{\sum }\limits_{\substack{{C \subseteq N} \\ {i \in C} }}\frac{\left( {c - 1}\right) !\left( {n - c}\right) !}{n!}\left\lbrack {A\left( C\right) - A\left( {C-\{ i\} }\right) }\right\rbrack \tag{2}
+$$
+
+where $i$ is a player in the game, $C$ is a coalition containing player $i$ , and $c$ is the number of players in $C$ . $N$ denotes the group formed by all vessels, and $n$ is the number of players in $N$ . $A\left( C\right)$ is the amount generated by coalition $C$ , and $A\left( {C-\{ i\} }\right)$ is the amount generated by $C$ before player $i$ joins. $S{V}_{i}\left\lbrack A\right\rbrack$ is the Shapley value of player $i$ .
+
+The Shapley value method was first applied in the maritime field to assess the contribution of ships to the global collision risk [26]. In this study, it is likewise employed to identify the contribution of each ship and each cluster to the regional collision risk. With this indicator, the regional collision risk can be measured from a macroscopic viewpoint.
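
A direct implementation of Eq. (2) is short. In the sketch below the characteristic function `value` is an assumption made for illustration: it sums hypothetical pairwise collision-risk amounts inside a coalition, which is not the paper's exact amount definition:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Shapley value SV_i[A] of each player, per Eq. (2).

    `value(C)` returns the amount A(C) of a coalition C (a frozenset);
    here the amount stands for the collision risk generated by the ships in C.
    """
    n = len(players)
    sv = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):  # r = |C| - 1, the coalition size without i
            for sub in combinations(others, r):
                c = r + 1  # |C| with player i included
                weight = factorial(c - 1) * factorial(n - c) / factorial(n)
                marginal = value(frozenset(sub) | {i}) - value(frozenset(sub))
                total += weight * marginal
        sv[i] = total
    return sv

# Hypothetical pairwise risk amounts between three ships (illustrative only)
pair_risk = {frozenset({"A", "B"}): 0.4, frozenset({"B", "C"}): 0.2}

def value(coalition):
    return sum(v for pair, v in pair_risk.items() if pair <= coalition)

sv = shapley_values(["A", "B", "C"], value)
```

For this additive pairwise game each pair's risk is split evenly between its two members, so the Shapley values sum to the grand-coalition amount, mirroring how the framework attributes regional risk to individual ships.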
+
+§ III. THE CONSTRUCTION OF THE FRAMEWORK
+
+§ A. ANALYSING THE RISK OF SHIP COLLISION IN MULTI-SHIP ENCOUNTERING
+
+The TCR method can detect collision candidate ships and measure the collision risk of a single ship navigating in different sea areas. Considering these advantages, we use the TCR-based collision risk modeling method to analyse the collision risk of ships in this paper.
+
+The VO method collects the velocity sets that could lead to collisions between the OS and the TSs, which is essential for the TCR. Suppose that ship $A$ and ship $B$ navigate in the waterways. The motion states of the two ships are denoted $A\left\{ {{P}_{A}\left( T\right) ,{V}_{A}\left( T\right) ,{L}_{A}}\right\}$ and $B\left\{ {{P}_{B}\left( T\right) ,{V}_{B}\left( T\right) ,{L}_{B}}\right\}$ , where $P$ is the position of a ship at time $T$ , $V$ is its velocity at time $T$ , and $L$ is its length. Using the VO method, the spatio-temporal correlations between the two ships are transformed into ship $A$ 's velocity space. The collision condition is given in (3).
+
+$$
+{P}_{A}\left( {t}_{c}\right) \in {P}_{B}\left( {t}_{c}\right) \oplus \operatorname{Conf}P \tag{3}
+$$
+
+$$
+P = P\left( {t}_{0}\right) + v\left( {t - {t}_{0}}\right)
+$$
+
+where ${P}_{A}\left( {t}_{c}\right)$ and ${P}_{B}\left( {t}_{c}\right)$ are the positions of ship $A$ and ship $B$ at the collision time ${t}_{c}$ , and $P$ is the position of a ship at a specified time $t$ . $\operatorname{Conf}P$ is the set of all possible positions of ship $A$ around ship $B$ when a collision happens, and $\oplus$ denotes Minkowski addition.
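
Condition (3) can be checked directly for straight-line motion. The sketch below uses a circular conflict zone of fixed `radius` around ship B as a stand-in for $\operatorname{Conf}P$ (the paper uses the QSD instead), together with a finite look-ahead `horizon`; the function name and parameters are our own:

```python
def leads_to_collision(p_a, v_a, p_b, v_b, radius, horizon):
    """Return True if, under constant velocities, ship A enters the
    conflict zone (a circle of `radius` around ship B) within `horizon`,
    i.e. condition (3) holds for some collision time in [0, horizon]."""
    rx, ry = p_b[0] - p_a[0], p_b[1] - p_a[1]  # relative position B - A
    vx, vy = v_a[0] - v_b[0], v_a[1] - v_b[1]  # relative velocity A - B
    vv = vx * vx + vy * vy
    if vv == 0.0:  # no relative motion: collision iff already inside zone
        return rx * rx + ry * ry <= radius * radius
    # time of closest approach, clamped to the look-ahead window
    t = min(max((rx * vx + ry * vy) / vv, 0.0), horizon)
    dx, dy = rx - vx * t, ry - vy * t
    return dx * dx + dy * dy <= radius * radius
```

Sweeping `v_a` over the OS's reachable velocities and collecting the values for which this returns True yields exactly the velocity-obstacle set that the TCR measures.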
+
+In this research, we utilize the NLVO method to obtain the VOs in TCR. The NLVO method can be expressed in (4):
+
+$$
+{NLVO}_{A \mid \mathrm{ship}_{j}} = \mathop{\bigcup }\limits_{{{t}_{i} = {t}_{0}}}^{\infty }\left( \frac{{P}_{\mathrm{ship}_{j}}\left( {t}_{i}\right) - {P}_{A}\left( {t}_{0}\right) }{{t}_{i} - {t}_{0}} \oplus \frac{\operatorname{Conf}{P}_{\mathrm{ship}_{j}}}{{t}_{i} - {t}_{0}}\right) \tag{4}
+$$
+
+$$
+{NLVO}_{A \mid \mathrm{allships}} = \mathop{\bigcup }\limits_{{j = 1}}^{n}{NLVO}_{A \mid \mathrm{ship}_{j}}
+$$
+
+where ${P}_{\mathrm{ship}_{j}}\left( {t}_{i}\right) - {P}_{A}\left( {t}_{0}\right)$ is the displacement between ship $j$ at time ${t}_{i}$ and the OS at time ${t}_{0}$ . ${NLVO}_{A \mid \mathrm{ship}_{j}}$ denotes the OS's velocity set induced by ship $j$ , and ${NLVO}_{A \mid \mathrm{allships}}$ denotes the OS's velocity set induced by all target ships, obtained by the union operation. To take full account of the influence of the ship's maneuverability, velocity, and heading, we employ the QSD as the criterion for $\operatorname{Conf}P$ . A detailed description of the QSD can be found in [28, 29].
+
+To quantify the collision risk of an individual ship, a new collision risk indicator, ${TC}{R}_{QSD}$ , defined as the TCR measured with the OS's QSD, is introduced in this study. It is calculated as shown in (5):
+
+$$
+{TC}{R}_{QSD} = \frac{V{O}_{QSD}}{V{O}_{\text{ region }}} \tag{5}
+$$
+
+where $V{O}_{QSD}$ is the area of the intersection between the VOs induced by the QSDs of the TSs and the velocity region of the OS, and $V{O}_{\text{region}}$ is the area of the ship's velocity region, representing all the velocities the ship can achieve. To simplify the calculation, only course changes and speed reductions are considered as collision avoidance operations when constructing the ship's velocity region. Using this indicator, the collision risk of single ships can be measured.
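
Eq. (5) is a ratio of areas in velocity space, which is convenient to estimate by Monte-Carlo sampling. The sketch below assumes the reachable velocity region is a disc of radius `v_max` and the conflict zone around each TS is a circle of `radius`; both are simplifications standing in for the true velocity region and the QSD used in the paper:

```python
import math
import random

def estimate_tcr(p_os, targets, v_max, radius, horizon, samples=2000, seed=1):
    """Monte-Carlo estimate of TCR_QSD in Eq. (5): the fraction of the
    OS's sampled velocities that fall inside the VO of some target ship.

    `targets` is a list of (position, velocity) pairs for the TSs.
    """
    rng = random.Random(seed)

    def collides(v_os, p_t, v_t):
        # closest-point-of-approach test, as in collision condition (3)
        rx, ry = p_t[0] - p_os[0], p_t[1] - p_os[1]
        vx, vy = v_os[0] - v_t[0], v_os[1] - v_t[1]
        vv = vx * vx + vy * vy
        if vv == 0.0:
            return rx * rx + ry * ry <= radius * radius
        t = min(max((rx * vx + ry * vy) / vv, 0.0), horizon)
        dx, dy = rx - vx * t, ry - vy * t
        return dx * dx + dy * dy <= radius * radius

    hits = 0
    for _ in range(samples):
        # uniform sample in the disc of reachable velocities
        a = rng.uniform(0.0, 2.0 * math.pi)
        r = v_max * math.sqrt(rng.random())
        v = (r * math.cos(a), r * math.sin(a))
        if any(collides(v, p_t, v_t) for p_t, v_t in targets):
            hits += 1
    return hits / samples
```

A distant, unreachable target yields an estimate of 0, while a target whose conflict zone already contains the OS yields 1; intermediate geometries give the graded risk values the framework reports.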
+
+§ B. IDENTIFYING THE CONTRIBUTION OF EACH SHIP TO THE REGIONAL COLLISION RISK IN MULTI-SHIP ENCOUNTERING
+
+As described in Section II, the Shapley value method can measure the contribution of players to the entire group. Inspired by this, the Shapley value method is employed in this paper to estimate the contribution of each ship and each cluster to the regional collision risk.
+
+At sea, multi-ship encounters in a region can be regarded as cooperative games: each ship in a multi-ship encounter situation is a game player, and the numerical collision risk of the ship is the amount produced by that player. The ships are arranged by permutation and combination to produce the various coalitions. The collision risk amount $A\left( C\right)$ of each ship group is obtained first, as the sum of the collision risks of the ships in that multi-ship encounter group. Then $A\left( {C-\{ i\} }\right)$ is obtained by calculating the collision risk amount of group $C$ without the participation of ship $i$ . Finally, each ship's Shapley value is computed from (2). Combining the collision risk values of individual ships, the collision risk of each cluster is obtained from (6), and the regional collision risk is then quantified from a macroscopic perspective by (7).
+
+$$
+{CCR}_{j} = \mathop{\sum }\limits_{{i = 1}}^{n}{TCR}_{{QSD}, i} \cdot {S}_{i} \tag{6}
+$$
+
+$$
+{M\text{-}RCR} = \mathop{\sum }\limits_{{j = 1}}^{m}{CCR}_{j} \cdot {S}_{j} \tag{7}
+$$
+
+where ${TCR}_{{QSD}, i}$ is the numerical collision risk of ship $i$ , ${CCR}_{j}$ denotes the collision risk of cluster $j$ , and ${M\text{-}RCR}$ refers to the macro-regional collision risk. ${S}_{i}$ and ${S}_{j}$ denote the Shapley values of each ship and each cluster, respectively. $n$ is the number of ships in a cluster and $m$ is the number of clusters in the research region.
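
Reading Eqs. (6) and (7) as Shapley-weighted sums of scalar risks (our interpretation of the set-union notation), the two aggregation steps reduce to one-liners; the function names are ours and the weights used in the example are illustrative rather than taken from the case study:

```python
def cluster_risk(ship_tcrs, ship_shapley):
    """CCR of one cluster, Eq. (6): the Shapley-weighted sum of the
    member ships' TCR_QSD values."""
    return sum(t * s for t, s in zip(ship_tcrs, ship_shapley))

def macro_regional_risk(cluster_risks, cluster_shapley):
    """M-RCR, Eq. (7): the Shapley-weighted sum of the cluster risks."""
    return sum(c * s for c, s in zip(cluster_risks, cluster_shapley))
```

For example, a cluster with ship risks 0.4 and 0.2 weighted 0.6 and 0.4 has CCR 0.32; combining the cluster risks the same way yields the single macro-regional value the VTSO monitors.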
+
+§ IV. CASE STUDY
+
+To validate the feasibility of the proposed framework, this section presents a case study on the Pearl River Estuary in China for regional multi-ship collision risk analysis. The research data and the detailed experimental results are described in the following subsections.
+
+§ A. DESCRIPTION OF THE AIS DATA AND PARAMETER SETTING
+
+In this study, we used one day of AIS data for the Pearl River Estuary, provided by the Wuhan University of Technology. AIS records of vessels in mooring or berthing status were removed to avoid the influence of abnormal data on the case study, and ship type was not considered. MinPts and Eps in the DBSCAN algorithm are set to 2 and 6 nm, respectively. The detailed parameter settings are displayed in Table I.
+
+TABLE I. PARAMETER SETTINGS
+
+| Variable | Setting |
+| --- | --- |
+| Time | 08:00 15th - 08:00 16th May 2020 |
+| Data boundary | Lat: 21.7410-22.1289°N; Lon: 113.2370-113.7677°E |
+| Eps | 6 nm |
+| MinPts | 2 |
+| TCR time | 30 min |
+| Ship length (if data not available) | 200 m |
+
+§ B. RESULTS OF THE EXPERIMENTS
+
+In this section, we randomly selected two sets of ship AIS data at different moments to validate the effectiveness of the proposed TCR-based multi-ship collision risk analysis framework. The ships in the designated region are divided into clusters using the DBSCAN algorithm; then the QSD-based TCR indicator, which represents the collision risk of single ships, and the M-RCR are obtained with the proposed framework. Figures 4 and 5 visualize the ship clustering and the TCR of randomly selected ships at the two moments. The detailed experimental results for these ships are given in Tables II and III.
+
+
+Fig. 4. Visualization of ship clustering and the ships' TCR in different groups at 13:25:00
+
+
+Fig. 5. Visualization of ship clustering and the ships' TCR in different groups at 21:30:00
+
+From Figures 4 and 5, more than 10 ships are navigating in the research region at both timespots. At timespot 13:25:00, there were 15 ships in the region, which the DBSCAN method categorized into three ship groups. To demonstrate the performance of the proposed collision risk analysis framework, the TCR has been visualized for three ships (414XXX660, 412XXX530, 413XXX910). For ship "414XXX660" in Group 1 (green), the two ships in Group 1 did not form an encounter situation since their trajectories observed from the AIS data are divergent; there is thus no collision risk between them, and the QSD-TCR of ship "414XXX660" is 0. Meanwhile, ship "412XXX530" in Group 2 (red) had formed an encounter situation with one ship of its group; its QSD-TCR is 0.2625, indicating a lower collision risk. Unlike these two ships, ship "413XXX910" in Group 3 (purple) had formed multiple encounter situations with the rest of the ships in its group and thus has a higher collision risk (QSD-TCR: 0.4295) than the other two ships. In addition, noise points are successfully recognized by the DBSCAN algorithm.
+
+At timespot 21:30:00, the experimental results are likewise obtained with the proposed framework. The ships in the region are classified into two clusters by the DBSCAN method, each containing five ships. The TCR has been visualized for three ships (413XXX050, 413XXX960, 413XXX020). Ships "413XXX050" and "413XXX960" in Group 1 (green) had formed encounter situations with several ships of Group 1, showing a high collision risk; their QSD-TCR values are 0.4788 and 0.7944, respectively. According to the proposed collision risk analysis framework, these two ships should take immediate collision avoidance measures to mitigate the collision risk. Ship "413XXX020" in Group 2 (red) also formed encounter situations with several ships, yet its QSD-TCR value is small because the distances to the ships forming the encounter situations are relatively large. However, a collision accident could still occur if ship "413XXX020" continues in its current state of motion, so it should take collision avoidance operations as early as possible. Each ship's Shapley value can also be calculated with the proposed framework. The detailed results of the case study are shown in Table II.
+
+Finally, with the Shapley values indicating the contribution of each ship and cluster, the numerical M-RCR values at timespots 13:25 and 21:30 are measured for the selected region, as shown in Table III. Compared with the M-RCR at timespot 13:25, the M-RCR at timespot 21:30 is higher. The VTSO should therefore devote more effort to strengthening the supervision and management of the region at that moment, which helps them accurately grasp the trend of the collision risk from a macroscopic perspective. In conclusion, the proposed collision risk analysis framework can detect high-risk ships and quantify the temporal and spatial distribution of collision risk in designated regions, so the VTSO can take action to enhance the supervision of the maritime traffic situation and ensure the safety of ship navigation.
+
+TABLE II. THE COLLISION RISK VALUE FOR SINGLE SHIP UTILIZING THE PROPOSED FRAMEWORK
+
+| Time | MMSI | ${TCR}_{QSD}$ | Shapley value | Group |
+| --- | --- | --- | --- | --- |
+| 13:25:00 | 413XXX910 | 0.4295 | 0.2803 | 3 |
+| 13:25:00 | 412XXX530 | 0.2625 | 0.1948 | 2 |
+| 13:25:00 | 414XXX660 | 0 | 0 | 1 |
+| 21:30:00 | 413XXX050 | 0.4788 | 0.2634 | 1 |
+| 21:30:00 | 413XXX960 | 0.7944 | 0.3818 | 1 |
+| 21:30:00 | 413XXX020 | 0.1871 | 0.1524 | 2 |
+
+TABLE III. THE RESULTS OF MACRO-REGIONAL COLLISION RISK UTILIZING THE PROPOSED FRAMEWORK
+
+| Time | M-RCR |
+| --- | --- |
+| 13:25:00 | 0.2739 |
+| 21:30:00 | 0.6405 |
+
+§ V. DISCUSSION
+
+In the previous sections, multi-ship encounter situations were identified, and the collision risk of single ships and the regional collision risk were analysed and quantified. To further validate the effectiveness of the proposed collision risk analysis framework, this section presents two comparative experiments against the traditional collision analysis methods proposed in [25, 30]: (1) a comparison between the proposed framework and the CPA-based method [25]; (2) a comparison between the proposed framework and the complexity measurement-based method [30]. Each comparison has two parts: the collision risk and complexity of single ships, and the regional collision risk and overall complexity of the selected region. The results are shown in Tables IV and V.
+
+TABLE IV. RESULTS OF COLLISION RISK ANALYSIS AND COMPLEXITY OF SHIP UTILIZING THE METHODS [25,30]
+
+| Time | MMSI | ${TCR}_{QSD}$ | CRI | Complexity | Group |
+| --- | --- | --- | --- | --- | --- |
+| 13:25:00 | 413XXX910 | 0.4295 | 0.3699 | 4.3364 | 3 |
+| 13:25:00 | 412XXX530 | 0.2625 | 0.2778 | 0.7242 | 2 |
+| 13:25:00 | 413XXX660 | 0 | 0 | $< {0.0001}$ | 1 |
+| 21:30:00 | 413XXX050 | 0.4788 | 0.4290 | 8.2786 | 1 |
+| 21:30:00 | 413XXX960 | 0.7944 | 0.5251 | 6.8122 | 1 |
+| 21:30:00 | 413XXX020 | 0.1871 | 0.3668 | 1.8870 | 2 |
+
+TABLE V. RESULTS OF REGIONAL COLLISION RISK (RCR) AND COMPLEXITY IN REGION UTILIZING THE METHODS [25,30]
+
+| Time | M-RCR | RCR | Complexity |
+| --- | --- | --- | --- |
+| 13:25:00 | 0.2739 | 0.3472 | 0.3012 |
+| 21:30:00 | 0.6405 | 0.5654 | 6.2923 |
+
+Tables IV and V show that although the numerical single-ship collision risk values (QSD-TCR, CRI) derived from the proposed algorithm and the CPA-based method differ for the same scenario, the final results, which indicate the high-risk ships, are consistent with each other. Likewise, the regional collision risk values (M-RCR, RCR) from the two methods differ, but the high-risk region identified by the CPA-based method agrees with that of the proposed algorithm. This verifies the ability of the proposed framework to identify ships with high collision risk from a microscopic perspective and to obtain an overall collision risk for a region from a macroscopic perspective. In addition, the traffic complexity model, first proposed in [30], is used to assess the complexity of maritime traffic situations. According to [31], traffic complexity and ship collision risk are correlated: in general, the higher the traffic complexity, the greater the collision risk, so complexity reflects the magnitude of the instantaneous collision risk. We therefore introduce this indicator as a further comparison to verify the effectiveness of the proposed framework. As shown in Tables IV and V, the traffic complexity obtained from the complexity measurement-based method also identifies the high-risk ships and regions, and the results are consistent with the proposed framework. In conclusion, the two comparative experiments further validate the effectiveness and feasibility of the proposed framework for analysing collision risk in regional multi-ship encounter situations.
+
+§ VI. CONCLUSION
+
+In this paper, a novel regional multi-ship collision risk analysis framework based on the VO method is proposed. The collision risk is described as the proportion of the OS's velocity set that falls within the velocity obstacles generated by the TSs. A risk indicator, ${TC}{R}_{QSD}$ , is introduced to quantify the collision risk of single ships, and by combining it with the Shapley value method from cooperative games, the macro-regional collision risk is obtained.
+
+A case study on the Pearl River Estuary is conducted to validate the feasibility of the proposed framework. The results indicate that the proposed framework can accurately identify high-risk ships and regions. Compared with existing collision risk analysis methods, the proposed framework analyses the collision risk of multi-ship encounter situations in a region from both micro and macro perspectives. Its contribution is to combine spatial clustering techniques with the VO method and apply them to the monitoring and management of collision risk in waters under the jurisdiction of maritime authorities. On this basis, maritime surveillance operators can better understand regional collision risk in multi-ship encounters, further enhancing their situational awareness and improving the efficiency of maritime traffic management under relatively high traffic volumes or complexity. However, the current framework has some shortcomings. First, the influence of ship heading angle is not considered in the clustering, which may affect the accuracy of multi-ship encounter recognition. In addition, other risk-influencing factors (e.g., weather conditions and ship type) are not integrated into the collision risk model. Future work could address these limitations and apply the proposed framework to predict regional collision risk.
+
+§ ACKNOWLEDGMENT
+
+This work is financially supported by the National Natural Science Foundation of China under grants 52101402 and 52271367.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/bKg0I5ZIXm/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/bKg0I5ZIXm/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..7866ec1ba24bdfe5f4514c4754664013019b8d98
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/bKg0I5ZIXm/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,611 @@
+# Adaptive Prescribed-Time Control of Dynamic Positioning Ships Based on Neural Networks
+
+${1}^{\text{st }}$ Yongsheng Dou
+
+College of Navigation
+
+Dalian Maritime University
+
+Dalian, China
+
+dysheng@dlmu.edu.cn
+
+${2}^{\text{nd }}$ Chenfeng Huang
+
+College of Navigation Dalian Maritime University Dalian, China
+
+chenfengh@dlmu.edu.cn
+
+${3}^{\text{rd }}$ Yi Zhao
+
+College of Navigation
+
+Dalian Maritime University
+
+Dalian, China
+
+yi_zhao@dlmu.edu.cn
+
+Abstract- In this paper, a novel controller with prescribed-time performance is designed for the dynamic positioning (DP) system of ships with model uncertainty and unknown time-varying disturbances. Initially, an error transformation function with zero initial value is introduced by constructing fixed-time funnel boundaries (FTFBs) and a fixed-time tracking performance function (FTTPF). The proposed controller ensures stable convergence of the new error, maintaining it within fixed upper and lower boundaries; when the prescribed time is reached, the system state achieves prescribed-time (PT) stability. Secondly, by deploying radial basis function neural networks (RBF-NNs) and dynamic surface control (DSC), an adaptive controller of simple form is incorporated into the backstepping design, the uncertain terms of the system are approximated online, and the singularity and complexity explosion problems of the ship control system are addressed. In addition, the stability analysis proves that all errors of the closed-loop system are semi-globally uniformly ultimately bounded (SGUUB). Finally, simulation results on a DP ship confirm the superiority of the proposed scheme.
+
+Index Terms-Dynamically positioned ships, prescribed-time control, fixed-time funnel boundaries, Backstepping
+
+## I. INTRODUCTION
+
+In marine engineering, dynamic positioning (DP) systems are critical for maintaining precise ship positions and orientations in the marine environment [1]. These systems enable ships to hold exact positions or follow predetermined paths without anchoring by utilizing thrusters and power systems. This capability is crucial for marine engineering operations, including oil and gas drilling, underwater pipeline installation, and cable laying. As marine operations grow more complex, traditional DP methods encounter significant challenges, such as environmental disturbances, system parameter uncertainties, and operational efficiency concerns [2].
+
+Consequently, researchers are increasingly adopting advanced control strategies to improve the performance and adaptability of DP systems. However, the highly nonlinear terms of ship dynamics and the continuously changing marine environment often cause traditional control methods to struggle under extreme conditions. Furthermore, most existing DP control strategies depend on extended control processes to achieve stability [3], which may not always be the optimal solution. Therefore, developing a control strategy that can respond quickly and complete tasks within a prescribed time is particularly crucial.
+
+With the rapid advancement of control technologies and methods in recent years, DP systems have found ever broader application in maritime operations and offshore exploration for ships and drilling platforms. For instance, in the presence of unknown ship parameters, [4] developed a robust adaptive observer for DP systems, capable of estimating ship velocities and unknown parameters under external disturbances. An adaptive observer based on neural networks (NNs) was developed in [5] to estimate the velocity of an unmanned surface vessel (USV), even though both the system parameters and the nonlinearities of the USV were presumed uncertain. NN approximation techniques are used to compensate for uncertainty and unknown external disturbances, removing the need for a priori knowledge of ship parameters and external disturbances, while MLP technology is employed to address the computational explosion problem [6] [8]. In [7], by contrast, static NNs are used for control force and moment allocation of an over-actuated ship, with thruster forces and commands measured to gather data for training the NNs.
+
+Since time-varying boundary functions can enforce prescribed performance of a dynamic system in both the transient and steady-state phases, [10] proposed a novel boundary function control approach and introduced an error transformation function, showing stability of the closed-loop systems under prescribed transient and steady-state functions. In the field of marine engineering operations, [11] proposed a robust adaptive prescribed performance control (RAPPC) law by constructing a concise error mapping function and achieved prescribed performance control of DP. To address positioning error constraints, input saturation, and unknown external disturbances, [12] proposed a variable-gain prescribed performance control law and constructed error mapping functions to integrate the prescribed performance boundary into the controller design. Soon after, a robust fault-tolerant control allocation scheme was developed in [13] to redistribute the forces among faulty actuators; its performance function is combined with an auxiliary intermediate control technique to create a high-level controller.
+
+Inspired by the above research work, the contributions of this paper are as follows:
+
+1) Building upon the research foundation of reference [11], this article proposes an adaptive prescribed-time control scheme for the DP system of a ship with model uncertainty and unknown environmental disturbances. Unlike the initial-condition-dependent schemes discussed in reference [10], the construction of the fixed-time tracking performance function (FTTPF) ensures that the controller's prescribed performance no longer depends on initial conditions. Furthermore, the new dynamic errors start from an initial value of 0 and remain consistently confined within the constructed fixed-time funnel boundaries (FTFBs).
+
2) Based on NNs, the unknown functions in the derivative of the new dynamic error and the unknown model parameters of the ship are approximated online. In addition, the weight-allocation-based adaptive parameters are reduced to two to compensate for the unknown gain function. The dynamic surface control (DSC) filtering technique is introduced to address the complexity explosion caused by differentiating the virtual controller, thereby reducing the computational burden. Finally, two comparative simulations of a DP ship are executed to demonstrate the effectiveness of the proposed algorithm.
+
## II. MATHEMATICAL MODEL OF DYNAMICALLY POSITIONED SHIPS AND PROBLEM FORMULATION
+
In the design of DP systems, a ship is considered a multi-input multi-output (MIMO) control system whose dynamics are influenced by mass, damping, stiffness, and external disturbances. On the basis of seakeeping and maneuvering theory, the following three-DOF nonlinear mathematical model is used to describe the dynamic behavior of the ship in the presence of disturbances [14]:
+
+$$
+\dot{\eta } = J\left( \psi \right) v \tag{1}
+$$
+
+$$
+M\dot{v} + D\left( v\right) v = \tau + {\tau }_{d} \tag{2}
+$$
+
where $\eta = {\left\lbrack x, y,\psi \right\rbrack }^{\top } \in {\mathcal{R}}^{3}$ represents the attitude vector, including the surge position $x$ , the sway position $y$ and the heading $\psi \in \left\lbrack {0,{2\pi }}\right\rbrack$ in the earth-fixed coordinate system. $v = {\left\lbrack u, v, r\right\rbrack }^{\top } \in {\mathcal{R}}^{3}$ denotes the velocity vector of the ship in the body-fixed coordinate system, which is composed of the surge velocity $u$ , the sway velocity $v$ and the yaw velocity $r$ , respectively. $J\left( \psi \right)$ is the velocity transformation matrix as follows:
+
+$$
+J\left( \psi \right) = \left\lbrack \begin{matrix} \cos \left( \psi \right) & - \sin \left( \psi \right) & 0 \\ \sin \left( \psi \right) & \cos \left( \psi \right) & 0 \\ 0 & 0 & 1 \end{matrix}\right\rbrack \tag{3}
+$$
+
with ${J}^{-1}\left( \psi \right) = {J}^{\top }\left( \psi \right)$ and $\parallel J\left( \psi \right) \parallel = 1$ . Equation (4) gives the specific expression of the positive definite symmetric inertia matrix $M \in {\mathcal{R}}^{3 \times 3}$ , which includes the added mass. Equation (5) gives the specific expression of the nonlinear hydrodynamic function $D\left( v\right) v$ .
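The stated properties of $J\left( \psi \right)$ can be checked numerically. The following sketch (an illustration, not part of the original paper) builds the matrix of (3) and verifies ${J}^{-1}\left( \psi \right) = {J}^{\top }\left( \psi \right)$ for an arbitrary heading:

```python
import math

def J(psi):
    """Velocity transformation matrix of Eq. (3)."""
    c, s = math.cos(psi), math.sin(psi)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [list(row) for row in zip(*A)]

psi = 0.7  # arbitrary heading in radians
JT_J = matmul(transpose(J(psi)), J(psi))
# J^T J should equal the identity, confirming J^{-1}(psi) = J^T(psi)
identity_ok = all(abs(JT_J[i][j] - (1.0 if i == j else 0.0)) < 1e-12
                  for i in range(3) for j in range(3))
```

Because $J\left( \psi \right)$ is a planar rotation about the yaw axis, orthogonality holds for every heading, which is what the check above exercises.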
+
+$$
+M = \left\lbrack \begin{matrix} m - {X}_{\dot{u}} & 0 & 0 \\ 0 & m - {Y}_{\dot{v}} & m{x}_{G} - {X}_{\dot{r}} \\ 0 & m{x}_{G} - {X}_{\dot{r}} & {I}_{z} - {N}_{\dot{r}} \end{matrix}\right\rbrack \tag{4}
+$$
+
$$
D\left( v\right) v = \left\lbrack \begin{array}{l} {D}_{1} \\ {D}_{2} \\ {D}_{3} \end{array}\right\rbrack \tag{5}
$$

$$
{D}_{1} = - {X}_{u}u - {X}_{\left| u\right| u}\left| u\right| u + {Y}_{\dot{v}}v\left| r\right| + {Y}_{\dot{r}}{rr}
$$

$$
{D}_{2} = - {X}_{\dot{u}}{ur} - {Y}_{v}v - {Y}_{r}r - {X}_{\left| v\right| v}\left| v\right| v - {X}_{\left| v\right| r}\left| v\right| r
$$

$$
{D}_{3} = \left( {{X}_{\dot{u}} - {Y}_{\dot{v}}}\right) {uv} - {Y}_{\dot{r}}{ur} - {N}_{v}v - {N}_{r}r - {N}_{\left| v\right| v}\left| v\right| v - {N}_{\left| v\right| r}\left| v\right| r
$$
+
where $m$ is the ship’s mass, ${I}_{z}$ is the moment of inertia, and ${X}_{u}$ , ${X}_{\left| u\right| u},{Y}_{\dot{v}}$ , etc., are hydrodynamic force derivatives. It is evident from (5) that the nonlinear damping force is composed of linear and quadratic terms. In the controller design of this paper, $D\left( v\right) v$ is an uncertain term whose structure and parameters are unknown; it is approximated online using NNs in a later section.
+
$\tau = {\left\lbrack {\tau }_{u},{\tau }_{v},{\tau }_{r}\right\rbrack }^{\top } \in {\mathcal{R}}^{3}$ denotes the control inputs, which are the forces and moments generated by the actuators equipped on the ship, consisting of the shaft thruster, the tunnel thruster, and the azimuth thruster. To simplify the control inputs, all actuator inputs are fused into three degrees of freedom: ${\tau }_{u}$ in surge, ${\tau }_{v}$ in sway and ${\tau }_{r}$ in yaw. ${\tau }_{d} = {\left\lbrack {\tau }_{du},{\tau }_{dv},{\tau }_{dr}\right\rbrack }^{\top }$ indicates the unknown time-varying environmental disturbance induced by wind and waves.
+
Assumption 1. The environmental disturbance ${\tau }_{d\upsilon }$ is bounded in the marine environment, i.e., there exists a bound ${\bar{\tau }}_{d\upsilon } > 0$ such that $\left| {\tau }_{d\upsilon }\right| < {\bar{\tau }}_{d\upsilon }$ .
+
Remark 1. When modeling ship DP systems, it is often necessary to accurately characterize and predict the effects of environmental disturbances on the ship. To simplify the model and to facilitate the design and testing of control algorithms, these environmental disturbances can be approximated and modeled using sine-cosine functions. The frequency, amplitude and phase of the disturbance can be easily adjusted in a sine-cosine function to simulate different intensities and types of environmental conditions.
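A disturbance of the kind described in Remark 1 can be generated as below; the amplitudes and frequencies here are placeholder values for illustration (the paper's concrete disturbance model appears later in (44)):

```python
import math

def disturbance(t, d0=2.0, A=35.0, w1=0.2, B=15.0, w2=0.5):
    """Sine-cosine environmental disturbance of the form d0*(1 + A*sin(w1*t + B*cos(w2*t))).
    All parameter values are illustrative placeholders, not the paper's."""
    return d0 * (1.0 + A * math.sin(w1 * t + B * math.cos(w2 * t)))

# The signal is bounded by d0*(1 + A), consistent with Assumption 1.
peak = max(abs(disturbance(0.1 * i)) for i in range(2000))
```

Adjusting `d0`, `A`, `w1`, `B`, `w2` changes the intensity and spectral content of the simulated sea state while keeping the signal bounded.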
+
In the setting of unknown time-varying disturbances and model uncertainty, the control objective is to find a control law $\tau$ that drives the ship’s position $\left( {x, y}\right)$ and heading $\psi$ to the desired position ${\eta }_{d}$ within the prescribed time. At the same time, the constructed zero-initial-value error function converges within the set boundaries, within the settling time, to arbitrarily small errors, and all errors remain bounded for all time.
+
+## III. FUNNEL CONTROL AND FUNNEL VARIABLE
+
+In the context of advanced control strategies for DP systems, particularly those addressing strict timing requirements, the concepts of FTFBs and FTTPF are integral. These are designed to ensure that the control system adheres to performance metrics strictly within a settling interval, regardless of initial conditions. In this section, the definitions of FTFBs and FTTPF are introduced for the purpose of imposing error bounds on them and constructing new error functions.
+
## A. The Design of the Prescribed-Time Funnel Boundary
+
Definition 1. [15] FTFBs define the permissible bounds within which the system's states must remain over time. These boundaries contract over a fixed time period, ensuring that the system's behavior converges to the desired state within the settling duration. They are particularly useful in scenarios where rapid and reliable stabilization is crucial.
+
Equation (6) is selected as an FTFB with the following traits: (1) $\Gamma \left( t\right) > 0$ and $\dot{\Gamma }\left( t\right) \leq 0$ ; (2) $\mathop{\lim }\limits_{{t \rightarrow {T}_{j}}}\Gamma \left( t\right) = {\Gamma }_{jT}$ ; (3) $\Gamma \left( t\right) = {\Gamma }_{jT}$ for $\forall t \geq {T}_{j}$ , with ${T}_{j}$ being the predefined fixed time after which the boundary ceases to contract.
+
+$$
+{\Gamma }_{jv} = \left\{ \begin{array}{ll} {\Gamma }_{jv0}\tanh \left( \frac{{\lambda }_{j}t}{t - {T}_{jv}}\right) + {\Gamma }_{jv0} + {\Gamma }_{jvT}, & t \in \left\lbrack {0,{T}_{jv}}\right) \\ {\Gamma }_{jvT}, & t \in \left\lbrack {{T}_{jv},\infty }\right) \end{array}\right. \tag{6}
+$$
+
where ${\Gamma }_{jv0}$ and ${\Gamma }_{jvT}$ are the initial and final boundary values, ${\lambda }_{j}$ is the decay rate, $j = 1,2$ , and ${T}_{jv}$ is the predefined fixed time after which the boundary ceases to contract.
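A direct numerical sketch of the boundary (6), with placeholder parameters, confirms the three traits listed above (positivity, monotone contraction, and settling at ${\Gamma }_{jvT}$ after ${T}_{jv}$ ):

```python
import math

def ftfb(t, gamma0, gammaT, lam, T):
    """Fixed-time funnel boundary of Eq. (6); parameters are illustrative."""
    if t >= T:
        return gammaT
    # tanh argument lam*t/(t - T) is <= 0 on [0, T), driving the boundary down
    return gamma0 * math.tanh(lam * t / (t - T)) + gamma0 + gammaT

g_start = ftfb(0.0, 2.0, 0.1, 1.0, 10.0)   # Gamma(0) = gamma0 + gammaT
g_final = ftfb(10.0, 2.0, 0.1, 1.0, 10.0)  # Gamma(T) = gammaT
samples = [ftfb(t, 2.0, 0.1, 1.0, 10.0) for t in (0.0, 2.0, 5.0, 9.0, 10.0, 12.0)]
shrinking = all(a >= b for a, b in zip(samples, samples[1:]))
```

Note that the boundary reaches its final value at exactly $t = {T}_{jv}$ regardless of how the other parameters are chosen, which is the fixed-time property.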
+
Definition 2. [16] The FTTPF is a function designed to evaluate and ensure the system's tracking performance over a fixed time, dictating how the tracking error should decrease over time to meet specific performance criteria by a predefined deadline.
+
+$$
+{\varphi }_{v}\left( t\right) = \left\{ \begin{array}{ll} {e}^{-\frac{{k}_{v}t}{{T}_{fv} - t}}, & t \in \left\lbrack {0,{T}_{fv}}\right) \\ 0, & t \in \left\lbrack {{T}_{fv},\infty }\right) \end{array}\right. \tag{7}
+$$
+
Equation (7) is concretely constructed as an FTTPF with the following properties: (1) $\varphi \left( 0\right) = 1$ ; (2) $\mathop{\lim }\limits_{{t \rightarrow {T}_{fv}}}\varphi \left( t\right) = 0$ and $\varphi \left( t\right) = 0$ for $\forall t \geq {T}_{fv}$ , with ${T}_{fv}$ being a prescribed settling time. ${\Gamma }_{jv0},{\Gamma }_{jvT},{\lambda }_{j},{T}_{jv},{T}_{fv}$ and ${k}_{v}$ are positive constants.
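The FTTPF (7) can likewise be sketched; the check below confirms $\varphi \left( 0\right) = 1$ and the decay to zero at the prescribed settling time (the parameters standing in for ${k}_{v}$ and ${T}_{fv}$ are placeholders):

```python
import math

def fttpf(t, k, T):
    """Fixed-time tracking performance function of Eq. (7)."""
    if t >= T:
        return 0.0
    return math.exp(-k * t / (T - t))

phi_start = fttpf(0.0, 0.5, 3.0)  # phi(0) = 1 by construction
phi_end = fttpf(3.0, 0.5, 3.0)    # exactly 0 at the settling time T_fv
```

As $t \rightarrow {T}_{fv}$ the exponent diverges to $- \infty$ , so the decay to zero happens within the fixed time rather than only asymptotically.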
+
+## B. Funnel Error Transformation
+
In this paper, by embedding the FTTPF ${\varphi }_{v}\left( t\right)$ , we construct a new error variable $\chi \left( t\right)$ with zero initial value, as in (9).
+
+$$
+{z}_{1} = \eta - {\eta }_{d} \tag{8}
+$$
+
+$$
+\chi \left( t\right) = {z}_{1}\left( t\right) - {z}_{1}\left( 0\right) {\varphi }_{v}\left( t\right) = \eta - {\eta }_{d} - {z}_{1}\left( 0\right) {\varphi }_{v}\left( t\right) \tag{9}
+$$
+
Then, ${\Gamma }_{j\upsilon }, j = 1,2$ , is applied to ensure that the following symmetric performance constraint on $\chi \left( t\right)$ is satisfied.
+
+$$
+- {\Gamma }_{1v} < \chi \left( t\right) < {\Gamma }_{2v} \tag{10}
+$$
+
where ${\eta }_{d} = {\left\lbrack {x}_{d},{y}_{d},{\psi }_{d}\right\rbrack }^{\top }$ represents the desired position of the ship DP system. Besides, to simplify the design of the controller, ${T}_{1v} = {T}_{2v}$ is adopted in this paper. In accordance with the definition of $\chi \left( t\right)$ and the requirement of (9), $\chi \left( 0\right) = {z}_{1}\left( 0\right) - {z}_{1}\left( 0\right) {\varphi }_{v}\left( 0\right) = 0$ guarantees that the initial condition $- {\Gamma }_{1\upsilon }\left( 0\right) < \chi \left( 0\right) < {\Gamma }_{2\upsilon }\left( 0\right) \Leftrightarrow - {\Gamma }_{1\upsilon }\left( 0\right) + {z}_{1}\left( 0\right) < {z}_{1}\left( 0\right) < {\Gamma }_{2v}\left( 0\right) + {z}_{1}\left( 0\right)$ is always satisfied. This implies that ${\Gamma }_{1v}$ and ${\Gamma }_{2v}$ need not be redesigned in order to preserve the zero-initial-value property of the new error.
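The zero-initial-value property argued above can be verified directly: whatever ${z}_{1}\left( 0\right)$ is, $\chi \left( 0\right) = 0$ . A minimal sketch with illustrative FTTPF parameters:

```python
import math

def phi(t, k=1.0, T=5.0):
    """FTTPF of Eq. (7); k and T are assumed illustrative parameters."""
    return 0.0 if t >= T else math.exp(-k * t / (T - t))

def chi(t, z1_t, z1_0):
    """Shifted error of Eq. (9): chi(t) = z1(t) - z1(0)*phi(t)."""
    return z1_t - z1_0 * phi(t)

# Whatever the initial tracking error z1(0) is, chi starts at exactly zero,
# so the funnel bounds need not be re-sized to the initial condition.
z1_0 = 4.2
chi0 = chi(0.0, z1_0, z1_0)
```

After $t \geq {T}_{fv}$ , $\varphi \left( t\right) = 0$ and $\chi \left( t\right)$ coincides with the raw tracking error ${z}_{1}\left( t\right)$ , so the funnel on $\chi$ becomes a funnel on ${z}_{1}$ itself.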
+
By introducing the constructed ${\Gamma }_{1v}$ and ${\Gamma }_{2v}$ , the maximum overshoot, settling time, and steady-state boundaries of $\chi \left( t\right)$ are determined by $\max \left\{ {{\Gamma }_{1\mathrm{v}0} + {\Gamma }_{1\mathrm{{vT}}},{\Gamma }_{2\mathrm{v}0} + {\Gamma }_{2\mathrm{{vT}}}}\right\} ,{T}_{jv}$ and ${\Gamma }_{jvT}$ , respectively. The evolution of ${z}_{1}\left( t\right)$ is thereby preassigned over $\left\lbrack {0,{T}_{fv}}\right)$ , since $- {\Gamma }_{1v}\left( t\right) + {z}_{1}\left( 0\right) {\varphi }_{v}\left( t\right) < {z}_{1}\left( t\right) < {\Gamma }_{2v}\left( t\right) + {z}_{1}\left( 0\right) {\varphi }_{v}\left( t\right)$ for $\forall t \in \left\lbrack {0,{T}_{fv}}\right)$ . From the above analysis, (10) can be reformulated as:
+
+$$
+- {\Gamma }_{1}\left( t\right) < \chi \left( t\right) = {z}_{1}\left( t\right) - {z}_{1}\left( 0\right) {\Phi }_{1} < {\Gamma }_{2}\left( t\right) ,\forall t \geq 0 \tag{11}
+$$
+
+where ${\Gamma }_{1} = {\left\lbrack \begin{array}{lll} {\Gamma }_{1u}, & {\Gamma }_{1v}, & {\Gamma }_{1r} \end{array}\right\rbrack }^{\top },{\Gamma }_{2} = {\left\lbrack \begin{array}{lll} {\Gamma }_{2u}, & {\Gamma }_{2v}, & {\Gamma }_{2r} \end{array}\right\rbrack }^{\top }$ and ${\Phi }_{1} = {\left\lbrack \begin{array}{lll} {\varphi }_{u}, & {\varphi }_{v}, & {\varphi }_{r} \end{array}\right\rbrack }^{\top }$ .
+
Although existing funnel control (FC) results can tune the transient and steady-state responses of ${z}_{1}$ , they rely on specific initial conditions. To solve this problem, inspired by [17], we introduce the following variable transformation:
+
+$$
+\vartheta \left( t\right) = \chi \left( t\right) + \mu \left( t\right) \tag{12}
+$$
+
+with
+
+$$
+\mu \left( t\right) = \left( {{\Gamma }_{1}\left( t\right) - {\Gamma }_{2}\left( t\right) }\right) /2,\omega \left( t\right) = \left( {{\Gamma }_{1}\left( t\right) + {\Gamma }_{2}\left( t\right) }\right) /2 \tag{13}
+$$
+
+From (12) and (13), (11) is equivalent to
+
+$$
+- \omega \left( t\right) < \vartheta \left( t\right) < \omega \left( t\right) \tag{14}
+$$
+
+To improve control performance and achieve control objectives, the funnel error transformation as given by equation (15) is applied.
+
+$$
+{\xi }_{1}\left( t\right) = \frac{\vartheta \left( t\right) }{\sqrt{{\omega }^{2}\left( t\right) - {\vartheta }^{2}\left( t\right) }} \tag{15}
+$$
+
Differentiating (15) yields ${\dot{\xi }}_{1}$ :
+
$$
{\dot{\xi }}_{1}\left( t\right) = {\Phi }_{2}\left( {\dot{\eta } - {\dot{\eta }}_{d} - {z}_{1}\left( 0\right) {\dot{\Phi }}_{1}\left( t\right) + \dot{\mu }\left( t\right) - \vartheta \left( t\right) \dot{\omega }\left( t\right) /\omega \left( t\right) }\right) \tag{16}
$$
+
where ${\Phi }_{2} = {\omega }^{2}\left( t\right) /\sqrt{{\left( {\omega }^{2}\left( t\right) - {\vartheta }^{2}\left( t\right) \right) }^{3}} > 0$ . It should be noted that, because the expression for ${\dot{\xi }}_{1}$ is complex, NNs are employed to approximate its uncertain terms. In subsequent formulations, function arguments are omitted to simplify the presentation and improve readability.
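The barrier character of the transformation (15) is easy to see numerically: ${\xi }_{1}$ vanishes at the funnel center and grows without bound as $\vartheta$ approaches $\pm \omega$ . A minimal sketch:

```python
import math

def xi1(theta, omega):
    """Funnel error transformation of Eq. (15); valid only for |theta| < omega."""
    return theta / math.sqrt(omega ** 2 - theta ** 2)

# xi1 is zero at the center of the funnel and blows up near the boundary,
# which is what lets a bounded xi1 imply satisfaction of (14).
center = xi1(0.0, 1.0)
mid = xi1(0.5, 1.0)
near_edge = xi1(0.9, 1.0)
```

Keeping ${\xi }_{1}$ bounded by feedback therefore automatically keeps $\vartheta \left( t\right)$ strictly inside $\left( { - \omega ,\omega }\right)$ , i.e., enforces (14).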
+
## IV. ADAPTIVE PT FUNNEL CONTROL DESIGN FOR DYNAMICALLY POSITIONED SHIPS
+
In this section, adaptive parameters are introduced and NNs are used for online approximation of the uncertain terms arising during the controller design. The backstepping method is utilized to design the virtual controller ${\alpha }_{v}$ and the control law $\tau$ for the second-order ship motion model (1) and (2). The DSC technique is applied to address the complexity of differentiating ${\alpha }_{v}$ . The controller design procedure consists of two steps, for the attitude and velocity subsystems. The specific details of the design are given in IV-A, and the corresponding stability analysis in IV-B.
+
+## A. Controller Design
+
Step 1: In the ship’s DP system, the reference attitude signal ${\eta }_{d}$ is constant with zero derivative, i.e., ${\dot{\eta }}_{d} = 0$ . Note that in the derivative (16) of the boundary transformation error ${\xi }_{1}$ , the term ${\Phi }_{2}\left( {-{z}_{1}\left( 0\right) {\dot{\Phi }}_{1}\left( t\right) + \dot{\mu }\left( t\right) - \vartheta \left( t\right) \dot{\omega }\left( t\right) /\omega \left( t\right) }\right)$ is an unknown function vector. It can be approximated as in (17) by the RBF-NN ${F}_{1}\left( \eta \right)$ .
+
$$
{F}_{1}\left( \eta \right) = {\Phi }_{2}\left( {-{z}_{1}\left( 0\right) {\dot{\Phi }}_{1}\left( t\right) + \dot{\mu }\left( t\right) - \vartheta \left( t\right) \dot{\omega }\left( t\right) /\omega \left( t\right) }\right) = {S}_{1}\left( \eta \right) {A}_{1}\eta + {\varepsilon }_{\eta }
$$

$$
= \left\lbrack \begin{matrix} {s}_{x}\left( \eta \right) & 0 & 0 \\ 0 & {s}_{y}\left( \eta \right) & 0 \\ 0 & 0 & {s}_{\psi }\left( \eta \right) \end{matrix}\right\rbrack \left\lbrack \begin{array}{l} {A}_{x} \\ {A}_{y} \\ {A}_{\psi } \end{array}\right\rbrack \left\lbrack \begin{array}{l} x \\ y \\ \psi \end{array}\right\rbrack + \left\lbrack \begin{array}{l} {\varepsilon }_{x} \\ {\varepsilon }_{y} \\ {\varepsilon }_{\psi } \end{array}\right\rbrack \tag{17}
$$
+
where ${\varepsilon }_{\eta }$ is the corresponding bounded approximation error vector, and ${s}_{x}\left( \eta \right) = {s}_{y}\left( \eta \right) = {s}_{\psi }\left( \eta \right)$ since these RBF functions share the same input vector $\eta$ . Let ${\theta }_{1} = {\begin{Vmatrix}{A}_{1}\eta \end{Vmatrix}}^{2}$ , where ${\widehat{\theta }}_{1}$ represents the estimated value of ${\theta }_{1}$ . From the above analysis, the intermediate virtual controller ${\alpha }_{v}$ is designed as in (18).
+
+$$
+{\alpha }_{v} = - \frac{1}{{\Phi }_{2}J\left( \psi \right) }\left( {{k}_{1}{\xi }_{1} + \frac{{S}_{1}{}^{T}{S}_{1}{\widehat{\theta }}_{1}}{2{a}_{1}{}^{2}}{\xi }_{1} + \frac{1}{4}{\begin{Vmatrix}{\Phi }_{2}\end{Vmatrix}}^{2}{\xi }_{1}}\right) \tag{18}
+$$
+
where ${k}_{1}$ is a strictly positive diagonal parameter matrix. The DSC technique, i.e., the first-order low-pass filter (19), is applied here because the derivative of ${\alpha }_{v}$ is difficult to obtain and complex in form.
+
+$$
+{t}_{v}{\dot{\beta }}_{v} + {\beta }_{v} = {\alpha }_{v},{\beta }_{v}\left( 0\right) = {\alpha }_{v}\left( 0\right) \tag{19}
+$$
+
where ${t}_{v}$ is a constant filter time-constant matrix, and the input velocity signal ${\alpha }_{v}$ is transformed into the output velocity vector ${\beta }_{v}$ , which serves as the reference velocity signal in the second step. Defining the error vectors ${q}_{v} = {\left\lbrack {q}_{u},{q}_{v},{q}_{r}\right\rbrack }^{\top } = {\alpha }_{v} - {\beta }_{v}$ and ${z}_{2} = {\beta }_{v} - v$ , the derivative of ${q}_{v}$ is obtained from (18) and (19).
+
+$$
+{\dot{q}}_{v} = - {\dot{\beta }}_{v} + {\dot{\alpha }}_{v}
+$$
+
$$
= - {t}_{v}^{-1}{q}_{v} + {B}_{v}\left( {{z}_{1},{\dot{z}}_{1},\psi , r,{\widehat{\theta }}_{1},{\dot{\widehat{\theta }}}_{1}}\right) \tag{21}
$$
+
where ${B}_{v} = {\left\lbrack {B}_{u}\left( \cdot \right) ,{B}_{v}\left( \cdot \right) ,{B}_{r}\left( \cdot \right) \right\rbrack }^{\top }$ is a vector of three bounded continuous functions. Moreover, there exist unknown positive values ${\bar{B}}_{v} = {\left\lbrack {\bar{B}}_{u}\left( \cdot \right) ,{\bar{B}}_{v}\left( \cdot \right) ,{\bar{B}}_{r}\left( \cdot \right) \right\rbrack }^{\top }$ such that $\left| {B}_{v}\right| \leq {\bar{B}}_{v}$ . The filter error dynamics of ${q}_{v}$ are thus given by (21).
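The filter (19) itself is simple to simulate; the sketch below integrates it with forward Euler (step size and time constant are illustrative), showing how ${\beta }_{v}$ smoothly tracks a step change in ${\alpha }_{v}$ without any explicit differentiation:

```python
def dsc_filter(alpha_seq, t_v, dt):
    """First-order low-pass filter of Eq. (19): t_v*beta_dot + beta = alpha,
    integrated with forward Euler. t_v and dt are illustrative scalars."""
    beta = alpha_seq[0]          # initial condition beta_v(0) = alpha_v(0)
    out = [beta]
    for a in alpha_seq[1:]:
        beta += dt * (a - beta) / t_v
        out.append(beta)
    return out

# A unit step in alpha_v: the filter output converges to the new value
# with time constant t_v, supplying beta_v_dot analytically as (alpha-beta)/t_v.
step = [0.0] + [1.0] * 500
out = dsc_filter(step, 0.1, 0.01)
```

This is exactly the computational advantage DSC brings: ${\dot{\beta }}_{v} = \left( {{\alpha }_{v} - {\beta }_{v}}\right) /{t}_{v}$ is available algebraically, so the virtual controller never has to be differentiated symbolically.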
+
Step 2: Taking the time derivative of ${z}_{2}$ together with (19) yields (22).
+
+$$
+{\dot{z}}_{2} = {\dot{\beta }}_{v} - \dot{v} = {M}^{-1}\left( {M{\dot{\beta }}_{v} + D\left( v\right) v - \tau - {\tau }_{d}}\right) \tag{22}
+$$
+
+It is noted that $D\left( v\right) v$ is the uncertain term in the dynamic positioning system. Similar to the treatment of the unknown function vector in the first step, RBF-NNs are used to approximate this uncertainty term as follows:
+
$$
{F}_{2}\left( {v,{A}_{2}}\right) = {S}_{2}\left( v\right) {A}_{2}v + {\varepsilon }_{v} = \left\lbrack \begin{matrix} {s}_{u}\left( v\right) & 0 & 0 \\ 0 & {s}_{v}\left( v\right) & 0 \\ 0 & 0 & {s}_{r}\left( v\right) \end{matrix}\right\rbrack \left\lbrack \begin{array}{l} {A}_{u} \\ {A}_{v} \\ {A}_{r} \end{array}\right\rbrack \left\lbrack \begin{array}{l} u \\ v \\ r \end{array}\right\rbrack + \left\lbrack \begin{array}{l} {\varepsilon }_{u} \\ {\varepsilon }_{v} \\ {\varepsilon }_{r} \end{array}\right\rbrack \tag{23}
$$
+
In (23), the output vector ${F}_{2} = \left\lbrack \begin{array}{lll} {f}_{2}\left( u\right) , & {f}_{2}\left( v\right) , & {f}_{2}\left( r\right) \end{array}\right\rbrack$ contains three components corresponding to the $u, v, r$ component velocities. Let ${\theta }_{2} = {\begin{Vmatrix}{A}_{2}v\end{Vmatrix}}^{2}$ , where ${\widehat{\theta }}_{2}$ represents the estimated value of ${\theta }_{2}$ . The application of RBF-NNs simplifies the design of the subsequent controller and adaptive laws, while reducing the computational complexity of the algorithm and thereby enhancing control performance.
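The paper does not specify the concrete basis functions; a common choice consistent with the notation $S\left( \cdot \right)$ is a row of Gaussian RBFs, sketched below with assumed centers and width (both are placeholders, not values from the paper):

```python
import math

def rbf_row(v, centers, width):
    """Row of Gaussian RBF activations s(v), one per center, as commonly used
    for the S(.) terms in approximations like Eq. (23). Centers/width assumed."""
    return [math.exp(-sum((vi - ci) ** 2 for vi, ci in zip(v, c)) / width ** 2)
            for c in centers]

# Three assumed centers spread over the expected velocity range.
centers = [[-1.0, -1.0, -1.0], [0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]
acts = rbf_row([0.1, 0.0, 0.0], centers, 1.0)
# Activations lie in (0, 1] and peak at the nearest center (index 1 here).
```

Because every activation is bounded, the weight-norm trick ${\theta }_{2} = {\begin{Vmatrix}{A}_{2}v\end{Vmatrix}}^{2}$ lets a single scalar estimate stand in for the whole weight matrix, which is what reduces the adaptive parameters to two.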
+
In the derivation of formulas involving NNs, three key applications of Young's inequality are highlighted below:
+
$$
{\Phi }_{2}{\xi }_{1}{F}_{1} \leq {\xi }_{1}\left( {{S}_{1}{A}_{1}\eta + {\varepsilon }_{\eta }}\right) \leq \frac{{S}_{1}{}^{\top }{S}_{1}{\begin{Vmatrix}{A}_{1}\eta \end{Vmatrix}}^{2}}{2{a}_{1}{}^{2}}{\xi }_{1}{}^{2} + \frac{1}{2}{a}_{1}{}^{2} + {\xi }_{1}{}^{2} + \frac{1}{4}{\varepsilon }_{\eta }{}^{2} \tag{24}
$$
+
$$
{z}_{2}{F}_{2} \leq {z}_{2}\left( {{S}_{2}{A}_{2}v + {\varepsilon }_{v}}\right) \leq \frac{{S}_{2}{}^{\top }{S}_{2}{\begin{Vmatrix}{A}_{2}v\end{Vmatrix}}^{2}}{2{a}_{2}{}^{2}}{z}_{2}{}^{2} + \frac{1}{2}{a}_{2}{}^{2} + {z}_{2}{}^{2} + \frac{1}{2}{\varepsilon }_{v}{}^{2} \tag{25}
$$
+
+$$
+- {\Phi }_{2}J\left( \psi \right) {\xi }_{1}{q}_{v} \leq {\begin{Vmatrix}{q}_{v}\end{Vmatrix}}^{2} + \frac{1}{4}{\begin{Vmatrix}{\Phi }_{2}\end{Vmatrix}}^{2}{\xi }_{1}{}^{2} \tag{26}
+$$
+
Based on the above analysis, (27) is chosen as the control input $\tau$ for the ship dynamic positioning system in this paper. Equations (28) and (29) give the expressions for the adaptive laws of ${\widehat{\theta }}_{1}$ and ${\widehat{\theta }}_{2}$ .
+
+$$
+\tau = {k}_{2}{z}_{2} + {\dot{\beta }}_{v} + \frac{{S}_{2}{}^{T}{S}_{2}{\widehat{\theta }}_{2}}{2{a}_{2}{}^{2}}{z}_{2} + {\Phi }_{2}J\left( \psi \right) {\xi }_{1} \tag{27}
+$$
+
$$
{\dot{\widehat{\theta }}}_{1} = \frac{{\gamma }_{1}{S}_{1}^{\top }{S}_{1}{\xi }_{1}^{\top }{\xi }_{1}}{2{a}_{1}{}^{2}} - {\varsigma }_{1}{\widehat{\theta }}_{1} \tag{28}
$$

$$
{\dot{\widehat{\theta }}}_{2} = \frac{{\gamma }_{2}{S}_{2}{}^{\top }{S}_{2}{z}_{2}^{\top }{z}_{2}}{2{a}_{2}{}^{2}} - {\varsigma }_{2}{\widehat{\theta }}_{2} \tag{29}
$$
+
where ${k}_{2}$ is a strictly positive diagonal parameter matrix, and ${a}_{1}$ , ${a}_{2},{\gamma }_{1},{\gamma }_{2},{\varsigma }_{1}$ and ${\varsigma }_{2}$ are positive design constants. It can be observed that the designed controller has a very simple form, which significantly reduces the computational load and memory usage. Next, the semi-global uniformly ultimately bounded (SGUUB) stability of the DP system under the proposed algorithm is demonstrated through a stability analysis.
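To make the structure of (27)-(29) concrete, the sketch below collapses all vector quantities to scalars (an illustration only, not the paper's implementation); `adapt_theta` is a forward-Euler step of the leakage-type adaptive laws (28)/(29):

```python
def control_tau(z2, beta_v_dot, S2, theta2_hat, xi_1, Phi2J, k2, a2):
    """Scalar sketch of control law (27): feedback on z2, the filter derivative
    feedforward, the NN compensation term, and the backstepping coupling term."""
    return (k2 * z2 + beta_v_dot
            + (S2 * S2 * theta2_hat) / (2.0 * a2 ** 2) * z2
            + Phi2J * xi_1)

def adapt_theta(theta_hat, S, err_sq, gamma, a, sigma, dt):
    """One Euler step of the adaptive laws (28)/(29):
    theta_hat_dot = gamma*S^T S*err^2/(2a^2) - sigma*theta_hat."""
    return theta_hat + dt * (gamma * S * S * err_sq / (2.0 * a ** 2)
                             - sigma * theta_hat)

# With the NN and coupling terms zeroed, (27) reduces to plain feedback on z2.
tau0 = control_tau(0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 2.0, 1.0)
# With zero tracking error, the leakage term -sigma*theta_hat decays the estimate.
theta_next = adapt_theta(1.0, 1.0, 0.0, 1.0, 1.0, 0.5, 0.1)
```

The leakage term $- {\varsigma }_{j}{\widehat{\theta }}_{j}$ is what keeps the two scalar estimates bounded even under persistent disturbances, at the cost of a small steady bias.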
+
+## B. Stability Analysis
+
Select the Lyapunov function as follows:

$$
V = \frac{1}{2}{\xi }_{1}^{\top }{\xi }_{1} + \frac{1}{2}{z}_{2}^{\top }M{z}_{2} + \frac{1}{2}{q}_{v}^{\top }{q}_{v} + \frac{1}{2{\gamma }_{1}}{\widetilde{\theta }}_{1}^{\top }{\widetilde{\theta }}_{1} + \frac{1}{2{\gamma }_{2}}{\widetilde{\theta }}_{2}^{\top }{\widetilde{\theta }}_{2} \tag{30}
$$

where ${\widetilde{\theta }}_{1} = {\widehat{\theta }}_{1} - {\theta }_{1}$ and ${\widetilde{\theta }}_{2} = {\widehat{\theta }}_{2} - {\theta }_{2}$ . By considering ${\xi }_{1} = \vartheta \left( t\right) /\sqrt{{\omega }^{2}\left( t\right) - {\vartheta }^{2}\left( t\right) }$ and ${z}_{2} = {\beta }_{v} - v$ , the time derivative of $V$ is expressed as:
+
$$
\dot{V} = {\xi }_{1}^{\top }{\dot{\xi }}_{1} + {z}_{2}^{\top }M{\dot{z}}_{2} + {q}_{v}^{\top }{\dot{q}}_{v} + \frac{1}{{\gamma }_{1}}{\widetilde{\theta }}_{1}^{\top }{\dot{\widehat{\theta }}}_{1} + \frac{1}{{\gamma }_{2}}{\widetilde{\theta }}_{2}^{\top }{\dot{\widehat{\theta }}}_{2} \tag{31}
$$
+
+According to (24),(26), $\parallel J\left( \psi \right) \parallel = 1$ and Young’s inequality, it is obtained that
+
$$
{\xi }_{1}^{\top }{\dot{\xi }}_{1} = {\xi }_{1}^{\top }\left\lbrack {{\Phi }_{2}J\left( \psi \right) \left( {{\alpha }_{v} - \left( {{z}_{2} + {q}_{v}}\right) }\right) + {S}_{1}{A}_{1}\eta + {\varepsilon }_{\eta }}\right\rbrack
$$
+
+$$
+= {\xi }_{1}^{\top }\left\{ {{\Phi }_{2}J\left( \psi \right) \left( {-\frac{1}{{\Phi }_{2}}J{\left( \psi \right) }^{-1}\left( {{k}_{1}{\xi }_{1} + \frac{{S}_{1}^{\top }{S}_{1}{\widehat{\theta }}_{1}}{2{a}_{1}{}^{2}}{\xi }_{1}}\right. }\right. }\right.
+$$
+
$$
\left. \left. {\left. {+\frac{1}{4}{\begin{Vmatrix}{\Phi }_{2}\end{Vmatrix}}^{2}{\xi }_{1}}\right) - \left( {{z}_{2} + {q}_{v}}\right) }\right) \right\} + {\xi }_{1}^{\top }{S}_{1}{A}_{1}\eta + {\xi }_{1}^{\top }{\varepsilon }_{\eta }
$$
+
+$$
+\leq {\xi }_{1}{}^{T}\left\{ {-{k}_{1}{\xi }_{1} - \frac{{S}_{1}{}^{T}{S}_{1}{\widehat{\theta }}_{1}}{2{a}_{1}{}^{2}}{\xi }_{1} - \frac{1}{4}{\begin{Vmatrix}{\Phi }_{2}\end{Vmatrix}}^{2}{\xi }_{1}}\right\}
+$$
+
+$$
+- {\xi }_{1}^{\top }{\Phi }_{2}J\left( \psi \right) {z}_{2} - {\xi }_{1}^{\top }{\Phi }_{2}J\left( \psi \right) {q}_{v} + \frac{{S}_{1}^{\top }{S}_{1}{\begin{Vmatrix}{A}_{1}\eta \end{Vmatrix}}^{2}}{2{a}_{1}{}^{2}}{\xi }_{1}{}^{\top }{\xi }_{1}
+$$
+
+$$
++ \frac{1}{2}{a}_{1}{}^{2} + {\xi }_{1}{}^{\top }{\xi }_{1} + \frac{1}{4}{\varepsilon }_{\eta }{}^{2}
+$$
+
+$$
+\leq - {k}_{1}{\xi }_{1}^{\top }{\xi }_{1} + \frac{{S}_{1}^{\top }{S}_{1}\left( {{\theta }_{1} - {\widehat{\theta }}_{1}}\right) }{2{a}_{1}{}^{2}}{\xi }_{1}{}^{\top }{\xi }_{1} - \frac{1}{4}{\begin{Vmatrix}{\Phi }_{2}\end{Vmatrix}}^{2}{\xi }_{1}{}^{\top }{\xi }_{1}
+$$
+
+$$
+- {\xi }_{1}^{\top }{\Phi }_{2}J\left( \psi \right) {z}_{2} + {\begin{Vmatrix}{q}_{v}\end{Vmatrix}}^{2} + \frac{1}{4}{\begin{Vmatrix}{\Phi }_{2}\end{Vmatrix}}^{2}{\xi }_{1}^{\top }{\xi }_{1} + {\xi }_{1}^{\top }{\xi }_{1}
+$$
+
+$$
++ \frac{1}{4}{\varepsilon }_{\eta }{}^{2} + \frac{1}{2}{a}_{1}{}^{2}
+$$
+
+$$
+\leq - {k}_{1}{\xi }_{1}^{\top }{\xi }_{1} - \frac{{S}_{1}{}^{T}{S}_{1}{\widetilde{\theta }}_{1}}{2{a}_{1}{}^{2}}{\xi }_{1}{}^{\top }{\xi }_{1} - {\xi }_{1}{}^{\top }{\Phi }_{2}J\left( \psi \right) {z}_{2}
+$$
+
+$$
++ {\begin{Vmatrix}{q}_{v}\end{Vmatrix}}^{2} + {\xi }_{1}^{\top }{\xi }_{1} + \frac{1}{4}{\varepsilon }_{\eta }{}^{2} + \frac{1}{2}{a}_{1}{}^{2} \tag{32}
+$$
+
+In view of (22),(23),(25) and (27), $\parallel J\left( \psi \right) \parallel = 1$ and Young's inequality, it follows that
+
+$$
+{z}_{2}^{\top }M{\dot{z}}_{2} = {z}_{2}^{\top }M\left\lbrack {{M}^{-1}\left( {M{\dot{\beta }}_{v} + {Dv} - \tau - {\tau }_{d}}\right) }\right\rbrack
+$$
+
+$$
+= {z}_{2}^{\top }\left\lbrack {M{\dot{\beta }}_{v} + {F}_{2} - \left( {{k}_{2}{z}_{2} + {\dot{\beta }}_{v} + \frac{{S}_{2}^{T}{S}_{2}{\widehat{\theta }}_{2}}{2{a}_{2}{}^{2}}{z}_{2}}\right) }\right.
+$$
+
$$
\left. {-{\Phi }_{2}J\left( \psi \right) {\xi }_{1} - {\tau }_{d}}\right\rbrack
$$
+
$$
\leq {z}_{2}^{\top }\left( {M - I}\right) {\dot{\beta }}_{v} + \frac{{S}_{2}^{\top }{S}_{2}{\theta }_{2}}{2{a}_{2}{}^{2}}{z}_{2}{}^{\top }{z}_{2} + \frac{1}{2}{a}_{2}{}^{2} + {z}_{2}{}^{\top }{z}_{2}
$$
+
$$
+ \frac{1}{2}{\varepsilon }_{v}{}^{2} - {k}_{2}{z}_{2}{}^{\top }{z}_{2} - \frac{{S}_{2}{}^{\top }{S}_{2}{\widehat{\theta }}_{2}}{2{a}_{2}{}^{2}}{z}_{2}{}^{\top }{z}_{2} + {\Phi }_{2}J\left( \psi \right) {z}_{2}{}^{\top }{\xi }_{1} - {z}_{2}{}^{\top }{\tau }_{d}
$$
+
$$
\leq {z}_{2}^{\top }\left( {M - I}\right) {\dot{\beta }}_{v} - \frac{{S}_{2}^{\top }{S}_{2}{\widetilde{\theta }}_{2}}{2{a}_{2}{}^{2}}{z}_{2}{}^{\top }{z}_{2} + \frac{1}{2}{a}_{2}{}^{2} + {z}_{2}{}^{\top }{z}_{2}
$$
+
$$
+ \frac{1}{2}{\varepsilon }_{v}{}^{2} - {k}_{2}{z}_{2}{}^{\top }{z}_{2} + {\Phi }_{2}J\left( \psi \right) {z}_{2}{}^{\top }{\xi }_{1} - {z}_{2}{}^{\top }{\tau }_{d} \tag{33}
$$
+
+It is worth noticing that
+
+$$
+{z}_{2}\left( {M - I}\right) {\dot{\beta }}_{v} \leq {\begin{Vmatrix}\left( M - I\right) {t}_{v}{}^{-1}\end{Vmatrix}}_{F}^{2}{\begin{Vmatrix}{z}_{2}\end{Vmatrix}}^{2} + \frac{1}{4}{\begin{Vmatrix}{q}_{v}\end{Vmatrix}}^{2} \tag{34}
+$$
+
+$$
+- {z}_{2}{\tau }_{d} \leq {z}_{2}^{\top }{z}_{2} + \frac{{\tau }_{d}^{\top }{\tau }_{d}}{4} \tag{35}
+$$
+
+Note that $I$ is the identity matrix. Then (33) becomes
+
+$$
+{z}_{2}^{\top }M{\dot{z}}_{2} \leq {\begin{Vmatrix}\left( M - I\right) {t}_{v}{}^{-1}\end{Vmatrix}}_{F}^{2}{\begin{Vmatrix}{z}_{2}\end{Vmatrix}}^{2} + 2{z}_{2}{}^{\top }{z}_{2} - {k}_{2}{z}_{2}{}^{\top }{z}_{2}
+$$
+
$$
- \frac{{S}_{2}^{\top }{S}_{2}{\widetilde{\theta }}_{2}}{2{a}_{2}{}^{2}}{z}_{2}{}^{\top }{z}_{2} + \frac{1}{4}{\begin{Vmatrix}{q}_{v}\end{Vmatrix}}^{2} + {\Phi }_{2}J\left( \psi \right) {z}_{2}{}^{\top }{\xi }_{1}
$$
+
+$$
++ \frac{{\tau }_{d}{}^{\top }{\tau }_{d}}{4} + \frac{1}{2}{a}_{2}{}^{2} + \frac{1}{2}{\varepsilon }_{v}{}^{2} \tag{36}
+$$
+
Incorporating the adaptive law (28) and ${\widetilde{\theta }}_{1} = {\widehat{\theta }}_{1} - {\theta }_{1}$ , (37) and (38) are obtained.
+
+$$
+\frac{1}{{\gamma }_{1}}{\widetilde{\theta }}_{1}^{\top }{\dot{\widehat{\theta }}}_{1} \leq \frac{1}{{\gamma }_{1}}{\widetilde{\theta }}_{1}^{\top }\left( {\frac{{\gamma }_{1}{S}_{1}^{\top }{S}_{1}{\xi }_{1}^{\top }{\xi }_{1}}{2{a}_{1}{}^{2}} - {\varsigma }_{1}{\widehat{\theta }}_{1}}\right)
+$$
+
+$$
+\leq \frac{{S}_{1}^{\top }{S}_{1}{\widetilde{\theta }}_{1}^{\top }{\xi }_{1}^{\top }{\xi }_{1}}{2{a}_{1}{}^{2}} - \frac{{\varsigma }_{1}{\widetilde{\theta }}_{1}^{\top }{\widehat{\theta }}_{1}}{{\gamma }_{1}} \tag{37}
+$$
+
+$$
+{\widetilde{\theta }}_{1}^{\top }{\widehat{\theta }}_{1} \leq {\widetilde{\theta }}_{1}^{\top }\left( {{\widetilde{\theta }}_{1} + {\theta }_{1}}\right)
+$$
+
$$
\leq {\widetilde{\theta }}_{1}^{\top }{\widetilde{\theta }}_{1} + {\widetilde{\theta }}_{1}^{\top }{\theta }_{1}
$$
+
+$$
+\leq 2{\widetilde{\theta }}_{1}^{\top }{\widetilde{\theta }}_{1} + \frac{1}{4}{\theta }_{1}^{2} \tag{38}
+$$
+
+Substituting (38) into (37), one gets
+
+$$
+\frac{1}{{\gamma }_{1}}{\widetilde{\theta }}_{1}^{\top }{\dot{\widehat{\theta }}}_{1} \leq \frac{{S}_{1}^{\top }{S}_{1}{\widetilde{\theta }}_{1}^{\top }{\xi }_{1}^{\top }{\xi }_{1}}{2{a}_{1}{}^{2}} - \frac{2{\varsigma }_{1}}{{\gamma }_{1}}{\widetilde{\theta }}_{1}^{\top }{\widetilde{\theta }}_{1} - \frac{{\varsigma }_{1}}{4{\gamma }_{1}}{\theta }_{1}{}^{2} \tag{39}
+$$
+
Following the same steps, one also gets:
+
+$$
+\frac{1}{{\gamma }_{2}}{\widetilde{\theta }}_{2}^{\top }{\dot{\widehat{\theta }}}_{2} \leq \frac{{S}_{2}{}^{\top }{S}_{2}{\widetilde{\theta }}_{2}^{\top }{z}_{2}{}^{\top }{z}_{2}}{2{a}_{2}{}^{2}} - \frac{2{\varsigma }_{2}}{{\gamma }_{2}}{\widetilde{\theta }}_{2}^{\top }{\widetilde{\theta }}_{2} - \frac{{\varsigma }_{2}}{4{\gamma }_{2}}{\theta }_{2}{}^{2} \tag{40}
+$$
+
Using (21) and Young’s inequality, ${q}_{v}^{\top }{\dot{q}}_{v}$ satisfies
+
+$$
+{q}_{v}^{\top }{\dot{q}}_{v} \leq - \mathop{\sum }\limits_{{i = u, v, r}}\left( {\frac{{q}_{i}{}^{2}}{{t}_{i}} - \frac{{B}_{i}^{2}{q}_{i}{\bar{B}}_{i}^{2}}{{2b}{\bar{B}}_{i}} - \frac{b}{2}}\right)
+$$
+
+$$
+\leq - \mathop{\sum }\limits_{{i = u, v, r}}\left\lbrack {\left( {\frac{1}{{t}_{i}} - \frac{{\bar{B}}_{i}^{2}}{2b}}\right) {q}_{i}{}^{2} + \left( {1 - \frac{{B}_{i}^{2}}{{\bar{B}}_{i}^{2}}}\right) \frac{{\bar{B}}_{i}^{2}{q}_{i}{}^{2}}{2b} - \frac{b}{2}}\right\rbrack
+$$
+
+$$
+\leq - \mathop{\sum }\limits_{{i = u, v, r}}\left\lbrack {\left( {\frac{1}{{t}_{i}} - \frac{{\bar{B}}_{i}^{2}}{2b}}\right) {q}_{i}{}^{2}}\right\rbrack + \frac{3b}{2} \tag{41}
+$$
+
Substituting (32), (36), (39), (40) and (41) into (31), the time derivative of $V$ satisfies
+
+$$
+\dot{V} \leq - \left( {{k}_{1} - I}\right) {\xi }_{1}^{\top }{\xi }_{1} - \left( {{k}_{2} - {2I}}\right) {z}_{2}^{\top }{z}_{2} + {\begin{Vmatrix}\left( M - I\right) {t}_{v}{}^{-1}\end{Vmatrix}}_{F}^{2}{\begin{Vmatrix}{z}_{2}\end{Vmatrix}}^{2}
+$$
+
+$$
++ \frac{5}{4}{\begin{Vmatrix}{q}_{v}\end{Vmatrix}}^{2} - \mathop{\sum }\limits_{{i = u, v, r}}\left\lbrack {\left( {\frac{1}{{t}_{i}} - \frac{{\bar{B}}_{i}^{2}}{2b}}\right) {q}_{i}{}^{2}}\right\rbrack - \frac{2{\varsigma }_{1}}{{\gamma }_{1}}{\widetilde{\theta }}_{1}^{\top }{\widetilde{\theta }}_{1}
+$$
+
$$
- \frac{2{\varsigma }_{2}}{{\gamma }_{2}}{\widetilde{\theta }}_{2}^{\top }{\widetilde{\theta }}_{2} - {\xi }_{1}^{\top }{\Phi }_{2}J\left( \psi \right) {z}_{2} + {\Phi }_{2}J\left( \psi \right) {z}_{2}^{\top }{\xi }_{1} - \frac{{S}_{1}^{\top }{S}_{1}{\widetilde{\theta }}_{1}}{2{a}_{1}{}^{2}}{\xi }_{1}{}^{\top }{\xi }_{1}
$$
+
$$
+ \frac{{S}_{1}^{\top }{S}_{1}{\widetilde{\theta }}_{1}^{\top }{\xi }_{1}^{\top }{\xi }_{1}}{2{a}_{1}{}^{2}} - \frac{{S}_{2}{}^{\top }{S}_{2}{\widetilde{\theta }}_{2}}{2{a}_{2}{}^{2}}{z}_{2}{}^{\top }{z}_{2} + \frac{{S}_{2}{}^{\top }{S}_{2}{\widetilde{\theta }}_{2}^{\top }{z}_{2}{}^{\top }{z}_{2}}{2{a}_{2}{}^{2}}
$$
+
+$$
++ \frac{1}{4}{\varepsilon }_{\eta }{}^{2} + \frac{1}{2}{a}_{1}{}^{2} + \frac{{\tau }_{d}{}^{\top }{\tau }_{d}}{4} + \frac{1}{2}{a}_{2}{}^{2} + \frac{1}{2}{\varepsilon }_{\upsilon }{}^{2} - \frac{{\varsigma }_{1}}{4{\gamma }_{1}}{\theta }_{1}{}^{2}
+$$
+
+$$
+- \frac{{\varsigma }_{2}}{4{\gamma }_{2}}{\theta }_{2}{}^{2} + \frac{3b}{2}
+$$
+
+$$
+\leq - {2aV} + \varrho \tag{42}
+$$
+
where

$$
a = \min \left\{ {{\lambda }_{\min }\left( {{k}_{1} - I}\right) ,{\lambda }_{\min }\left( {{k}_{2} - {2I}}\right) - {\begin{Vmatrix}\left( {M - I}\right) {t}_{v}^{-1}\end{Vmatrix}}_{F}^{2},\mathop{\min }\limits_{{i = u, v, r}}\left( {\frac{1}{{t}_{i}} - \frac{{\bar{B}}_{i}^{2}}{2b}}\right) - \frac{5}{4},\frac{2{\varsigma }_{1}}{{\gamma }_{1}},\frac{2{\varsigma }_{2}}{{\gamma }_{2}}}\right\}
$$

and

$$
\varrho = \frac{1}{4}{\varepsilon }_{\eta }^{2} + \frac{1}{2}{a}_{1}^{2} + \frac{{\tau }_{d}^{\top }{\tau }_{d}}{4} + \frac{1}{2}{a}_{2}^{2} + \frac{1}{2}{\varepsilon }_{v}^{2} - \frac{{\varsigma }_{1}{\theta }_{1}^{2}}{4{\gamma }_{1}} - \frac{{\varsigma }_{2}{\theta }_{2}^{2}}{4{\gamma }_{2}} + \frac{3b}{2}
$$
+
+By integrating both sides of equation (42), we obtain:
+
+$$
+V\left( t\right) \leq \left( {V\left( 0\right) - \frac{\varrho }{2a}}\right) {e}^{-{2at}} + \frac{\varrho }{2a} \tag{43}
+$$
+
+According to the closed-loop gain shaping algorithm [18], all error variables of the closed-loop system converge to the compact set $\Omega \mathrel{\text{:=}} \left\{ {\left. \left( {{\xi }_{1},{z}_{2},{q}_{v},{\widetilde{\theta }}_{1},{\widetilde{\theta }}_{2}}\right) \right| \;\parallel {\xi }_{1}\parallel \leq {C}_{0}}\right\}$ as $t \rightarrow \infty$ by choosing appropriate parameters, where ${C}_{0} > \sqrt{\varrho /a}$ is a positive constant. Thus, the closed-loop control system is SGUUB stable under the proposed control scheme, and all signal errors in the closed-loop system can be made arbitrarily small.
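The bound (43) can be sanity-checked numerically: any trajectory of the worst-case dynamics $\dot V = -2aV + \varrho$ must stay below the stated exponential envelope. The sketch below uses illustrative values of $a$, $\varrho$ and $V(0)$, not values from the paper:

```python
import math

def v_bound(t, v0, a, rho):
    """Right-hand side of (43): (V(0) - rho/(2a)) * exp(-2*a*t) + rho/(2a)."""
    return (v0 - rho / (2 * a)) * math.exp(-2 * a * t) + rho / (2 * a)

def simulate_v(v0, a, rho, dt=1e-4, t_end=5.0):
    """Forward-Euler integration of the worst case dV/dt = -2*a*V + rho."""
    v, t, traj = v0, 0.0, []
    while t <= t_end:
        traj.append((t, v))
        v += dt * (-2 * a * v + rho)
        t += dt
    return traj

a, rho, v0 = 0.8, 0.3, 10.0  # illustrative values only
for t, v in simulate_v(v0, a, rho)[::5000]:
    assert v <= v_bound(t, v0, a, rho) + 1e-9  # trajectory stays under the envelope
# V(t) settles near the ultimate bound rho/(2a), matching the compact set Omega
```

The ultimate bound $\varrho/(2a)$ is exactly the quantity that sizes the residual set in the SGUUB argument.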
+
+## V. SIMULATION
+
+In this section, to verify the effectiveness of the proposed prescribed-time algorithm, a simulation example for a supply ship (length: ${76.2}\mathrm{\;m}$ , mass: ${4.591} \times {10}^{6}\mathrm{\;{kg}}$ ) equipped with a DP system is executed and compared with the optimum-seeking guidance (OSG) scheme in [19] and the robust control scheme in [20]. The ship's mathematical model parameters are presented in TABLE I. In modeling ship DP systems, it is essential to precisely characterize and predict the impacts of environmental disturbances, such as wind, waves, and ocean currents, on the ship's performance. To simplify the ship model and facilitate the design and testing of control algorithms, these environmental disturbances are approximated by the sine-cosine functions in (44).
+
+$$
+\left\{ \begin{array}{l} {\tau }_{du} = 2\left( {1 + {35}\sin \left( {{0.2t} + {15}\cos \left( {0.5t}\right) }\right) }\right) \left( \mathrm{N}\right) \\ {\tau }_{dv} = 2\left( {1 + {30}\cos \left( {{0.4t} + {20}\cos \left( {0.1t}\right) }\right) }\right) \left( \mathrm{N}\right) \\ {\tau }_{dr} = 3\left( {1 + {30}\cos \left( {{0.3t} + {10}\sin \left( {0.5t}\right) }\right) }\right) \left( {\mathrm{N} \cdot \mathrm{m}}\right) \end{array}\right. \tag{44}
+$$
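The disturbance model (44) is straightforward to implement; a minimal sketch (units as printed, with the yaw component in N·m):

```python
import math

def tau_d(t):
    """Sine-cosine environmental disturbance of Eq. (44)."""
    tau_du = 2 * (1 + 35 * math.sin(0.2 * t + 15 * math.cos(0.5 * t)))  # N
    tau_dv = 2 * (1 + 30 * math.cos(0.4 * t + 20 * math.cos(0.1 * t)))  # N
    tau_dv_r = 3 * (1 + 30 * math.cos(0.3 * t + 10 * math.sin(0.5 * t)))  # N*m
    return tau_du, tau_dv, tau_dv_r

# Each component is bounded (|tau_du| <= 72, |tau_dv| <= 62, |tau_dr| <= 93),
# consistent with Assumption 1 of the paper.
for k in range(10_000):
    du, dv, dr = tau_d(0.05 * k)
    assert abs(du) <= 72 and abs(dv) <= 62 and abs(dr) <= 93
```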
+
+$$
+{k}_{1} = \operatorname{diag}\left\lbrack {{0.2},{0.38},{0.20}}\right\rbrack ,{k}_{2} = \operatorname{diag}\left\lbrack {{44},{12.8},{78.1}}\right\rbrack ;
+$$
+
+$$
+{t}_{v} = {0.05} \times I;{a}_{1} = {a}_{2} = {80};{\gamma }_{1} = {\gamma }_{2} = {0.5};{\varsigma }_{1} = {\varsigma }_{2} = {0.5}\text{;}
+$$
+
+$$
+{T}_{j\upsilon } = \left\lbrack {{T}_{ju},{T}_{jv},{T}_{jr}}\right\rbrack = \left\lbrack {{80s},{80s},{90s}}\right\rbrack ;
+$$
+
+$$
+{T}_{fv} = \left\lbrack {{T}_{fu},{T}_{fv},{T}_{fr}}\right\rbrack = \left\lbrack {{80s},{80s},{90s}}\right\rbrack ; \tag{45}
+$$
+
+TABLE I
+
+MODEL PARAMETERS
+
+| Indexes | Values | Indexes | Values |
+| --- | --- | --- | --- |
+| ${X}_{\dot{u}}$ | $-{0.72} \times {10}^{6}$ | ${X}_{u}$ | ${5.0242} \times {10}^{4}$ |
+| ${Y}_{\dot{v}}$ | $-{3.6921} \times {10}^{6}$ | ${Y}_{v}$ | ${2.7229} \times {10}^{6}$ |
+| ${Y}_{\dot{r}}$ | $-{1.0234} \times {10}^{6}$ | ${Y}_{r}$ | $-{4.3933} \times {10}^{6}$ |
+| ${I}_{z} - {N}_{\dot{r}}$ | ${3.7454} \times {10}^{9}$ | ${Y}_{\left| v\right| v}$ | ${1.7860} \times {10}^{4}$ |
+| ${X}_{\left| u\right| u}$ | ${1.0179} \times {10}^{3}$ | ${Y}_{\left| v\right| r}$ | $-{3.0068} \times {10}^{5}$ |
+| ${N}_{v}$ | $-{4.3821} \times {10}^{6}$ | ${N}_{r}$ | ${4.1894} \times {10}^{6}$ |
+| ${N}_{\left| v\right| v}$ | $-{2.4684} \times {10}^{5}$ | ${N}_{\left| v\right| r}$ | ${6.5759} \times {10}^{6}$ |
+
+
+Fig. 1. Trajectory of the ship in ${xy}$ -plane.
+
+In this simulation, the desired attitude is set to ${\eta }_{d} = \left\lbrack {0\mathrm{\;m},0\mathrm{\;m},0\deg }\right\rbrack$ . The initial states are set to $\eta \left( 0\right) = \left\lbrack {{12}\mathrm{\;m},{14}\mathrm{\;m},{10}\deg }\right\rbrack$ and $v\left( 0\right) = \left\lbrack {0\mathrm{\;m}/\mathrm{s},{14}\mathrm{\;m}/\mathrm{s},{10}\deg /\mathrm{s}}\right\rbrack$ . The concrete parameter values are set as in (45). Besides, the RBF-NNs for ${F}_{1}$ and ${F}_{2}$ consist of 25 nodes each, with centers evenly spaced in $\left\lbrack {-{2.5}\mathrm{\;m}/\mathrm{s},{2.5}\mathrm{\;m}/\mathrm{s}}\right\rbrack$ for $x, y, u$ and $r$ , and in $\left\lbrack {-{0.16}\mathrm{\;m}/\mathrm{s},{0.16}\mathrm{\;m}/\mathrm{s}}\right\rbrack$ for $\psi$ and $r$ , respectively. The parameters of the comparison algorithms are set as in [19] and [20].
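The RBF-NN approximators used here reduce to a Gaussian basis vector $S(Z)$ weighted by the adaptive parameters. A minimal sketch of such a 25-node basis on a scalar input (the Gaussian width is an assumption; it is not specified above):

```python
import math

def rbf_features(z, centers, width=1.0):
    """Gaussian radial basis vector S(Z) for input z and a list of centers."""
    return [math.exp(-sum((zi - ci) ** 2 for zi, ci in zip(z, c)) / width ** 2)
            for c in centers]

n = 25
centers = [[-2.5 + 5.0 * k / (n - 1)] for k in range(n)]  # evenly spaced on [-2.5, 2.5]
s = rbf_features([0.0], centers)
assert len(s) == n and all(0.0 < si <= 1.0 for si in s)
assert max(s) == s[12]  # the center closest to the input fires strongest
# An uncertainty estimate then takes the weight-allocation form
# F_hat = theta_hat * S(Z), with theta_hat updated by the adaptive law.
```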
+
+Fig. 1 shows the simulation results under the proposed algorithm, OSG, and robust control, each of which keeps the ship at the desired attitude in the ${xy}$ -plane. The proposed algorithm clearly provides better trajectory accuracy than the comparison algorithms. Fig. 2 illustrates that the ship attitude $x, y$ and $\psi$ is stabilized to the desired attitude near the prescribed time ${T}_{j\upsilon }$ , with the proposed scheme achieving faster stabilization than the comparison schemes. The surge, sway and yaw velocities are shown in Fig. 3; here the proposed scheme exhibits improved convergence performance. Fig. 4 illustrates how the three control inputs evolve over time. Before the system stabilizes, the proposed scheme exhibits superior convergence of ${\tau }_{r}$ compared with the other schemes; once the system has stabilized, ${\tau }_{u}$ and ${\tau }_{v}$ converge more rapidly toward zero, further outperforming the other schemes in convergence efficiency. Fig. 5 shows that the constructed new error is successfully confined within the boundaries and converges stably to 0 at the settling times ${T}_{fu},{T}_{fv},{T}_{fr}$ . Additionally, Fig. 6 and Fig. 7 illustrate the fit between the estimated and true values of the adaptive parameters ${\theta }_{1}$ and ${\theta }_{2}$ , respectively, representing the approximation capability of the RBF-NNs for the system uncertainty terms. Within the permissible margin of error, the RBF-NNs successfully approximate the uncertainty terms described by (13) and (17).
+
+
+Fig. 2. Ship’s actual position $\left( {x, y}\right)$ and heading $\psi$ .
+
+
+Fig. 3. Ship’s surge velocity $u$ , sway velocity $v$ and yaw rate $r$ .
+
+
+Fig. 4. Ship’s surge force ${\tau }_{u}$ , sway force ${\tau }_{v}$ and yaw moment ${\tau }_{r}$ .
+
+
+Fig. 5. The new error ${\xi }_{1}$ for the simulation with the proposed scheme.
+
+In summary, the NN-based prescribed-time control scheme proposed in this paper demonstrates superior performance and robustness compared with the baseline schemes. By introducing FTFBs and an FTTPF and constructing new error functions, the control laws are made more concise. Finally, the proposed scheme is validated through simulations, demonstrating its effectiveness on DP ships.
+
+
+Fig. 6. The estimation performance of ${\theta }_{1}$ .
+
+
+Fig. 7. The estimation performance of ${\theta }_{2}$ .
+
+## VI. CONCLUSION
+
+In this paper, a novel NN-based control scheme is proposed for the ship DP system under model uncertainty and unknown environmental disturbances, making the new dynamic errors converge within fixed boundaries. The prescribed-time performance of the algorithm is validated by a simulation example and two comparative simulations with satisfactory results. Consequently, the prescribed-time control algorithm proposed in this paper can be applied to ships performing DP tasks, enabling the ship's dynamic system to achieve more precise time-based prescribed performance.
+
+Given the presence of multiple dynamic actuators in engineering practice related to marine equipment, future research on the proposed algorithm could focus on actuator control allocation. In addition, the integration of event-triggered control, fault-tolerant control, and dead-zone constraints could further develop this control algorithm toward more advanced and precise control techniques.
+
+## REFERENCES
+
+[1] Sørensen A J. Propulsion and motion control of ships and ocean structures[J]. Marine Technology Center, Department of Marine Technology, Lecture notes, 2011.
+
+[2] Du J, Hu X, Krstić M, et al. Dynamic positioning of ships with unknown parameters and disturbances[J]. Control Engineering Practice, 2018, 76: 22-30.
+
+[3] Sørensen A J. A survey of dynamic positioning control systems[J]. Annual Reviews in Control, 2011, 35(1): 123-136.
+
+[4] Do K D. Global robust and adaptive output feedback dynamic positioning of surface ships[J]. Journal of Marine Science and Application, 2011, 10: 325-332.
+
+[5] Park B S, Kwon J W, Kim H. Neural network-based output feedback control for reference tracking of underactuated surface vessels[J]. Automatica, 2017, 77: 353-359.
+
+[6] Yang Y, Guo C, Du J L. Robust adaptive NN-based output feedback control for a dynamic positioning ship using DSC approach[J]. Science China Information Sciences, 2014, 57: 1-13.
+
+[7] Skulstad R, Li G, Zhang H, et al. A neural network approach to control allocation of ships for dynamic positioning[J]. IFAC-PapersOnLine, 2018, 51(29): 128-133.
+
+[8] Liang K, Lin X, Chen Y, et al. Robust adaptive neural networks control for dynamic positioning of ships with unknown saturation and time-delay[J]. Applied Ocean Research, 2021, 110: 102609.
+
+[9] Dai S L, He S, Lin H. Transverse function control with prescribed performance guarantees for underactuated marine surface vehicles[J]. International Journal of Robust and Nonlinear Control, 2019, 29(5): 1577-1596.
+
+[10] Dai S L, He S, Wang M, et al. Adaptive neural control of underactuated surface vessels with prescribed performance guarantees[J]. IEEE Transactions on Neural Networks and Learning Systems, 2018, 30(12): 3686-3698.
+
+[11] Li J, Du J, Hu X. Robust adaptive prescribed performance control for dynamic positioning of ships under unknown disturbances and input constraints[J]. Ocean Engineering, 2020, 206: 107254.
+
+[12] Gong C, Su Y, Zhang D. Variable gain prescribed performance control for dynamic positioning of ships with positioning error constraints[J]. Journal of Marine Science and Engineering, 2022, 10(1): 74.
+
+[13] Li H, Lin X. Robust fault-tolerant control for dynamic positioning of ships with prescribed performance[J]. Ocean Engineering, 2024, 298: 117314.
+
+[14] Fossen T I. Marine Control Systems[M]. Trondheim, Norway: Marine Cybernetics, 2002.
+
+[15] Li Z, Chen X, Ding S, et al. TCP/AWM network congestion algorithm with funnel control and arbitrary setting time[J]. Applied Mathematics and Computation, 2020, 385: 125410.
+
+[16] Wu J, Chen W, Li J. Global finite-time adaptive stabilization for nonlinear systems with multiple unknown control directions[J]. Automatica, 2016, 69: 298-307.
+
+[17] Xie H, Jing Y, Dimirovski G M, et al. Adaptive fuzzy prescribed time tracking control for nonlinear systems with input saturation[J]. ISA Transactions, 2023, 143: 370-384.
+
+[18] X.-K. Zhang. Ship Motion Concise Robust Control. Beijing, China: Sci. Press, 2012.
+
+[19] Zhang G, Cai Y, Zhang W. Robust neural control for dynamic positioning ships with the optimum-seeking guidance[J]. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2016, 47(7): 1500-1509.
+
+[20] Du J, Yang Y, Wang D, et al. A robust adaptive neural networks controller for maritime dynamic positioning system[J]. Neurocomputing, 2013, 110: 128-136.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/bKg0I5ZIXm/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/bKg0I5ZIXm/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..2257aa83dcd56c70deb9d6a4e5161592e2645553
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/bKg0I5ZIXm/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,594 @@
+§ ADAPTIVE PRESCRIBED-TIME CONTROL OF DYNAMIC POSITIONING SHIPS BASED ON NEURAL NETWORKS
+
+${1}^{\text{ st }}$ Yongsheng Dou
+
+College of Navigation
+
+Dalian Maritime University
+
+Dalian, China
+
+dysheng@dlmu.edu.cn
+
+${2}^{\text{ nd }}$ Chenfeng Huang
+
+College of Navigation
+
+Dalian Maritime University
+
+Dalian, China
+
+chenfengh@dlmu.edu.cn
+
+${3}^{\text{ rd }}$ Yi Zhao
+
+College of Navigation
+
+Dalian Maritime University
+
+Dalian, China
+
+yi_zhao@dlmu.edu.cn
+
+*Abstract*-In this paper, a novel controller with prescribed-time performance is designed for the dynamic positioning (DP) system of ships subject to model uncertainty and unknown time-varying disturbances. First, an error transformation function with zero initial value is introduced by constructing fixed-time funnel boundaries (FTFBs) and a fixed-time tracking performance function (FTTPF). The proposed controller ensures stable convergence of the new error, maintaining it within fixed upper and lower boundaries; once the prescribed time is reached, the system state achieves prescribed-time (PT) stability. Second, by deploying radial basis function neural networks (RBF-NNs) and dynamic surface control (DSC), adaptive controllers of simple form are incorporated into the backstepping design, the uncertain terms of the system are approximated online, and the singularity and complexity-explosion problems of the ship control system are addressed. In addition, the stability analysis proves that all errors of the closed-loop system are semi-globally uniformly ultimately bounded (SGUUB). Finally, simulation results on a DP ship confirm the superiority of the proposed scheme.
+
+Index Terms-Dynamically positioned ships, prescribed-time control, fixed-time funnel boundaries, Backstepping
+
+§ I. INTRODUCTION
+
+In marine engineering, dynamic positioning (DP) systems are critical for maintaining precise ship positions and orientations in the marine environment [1]. These systems enable ships to hold exact positions or follow predetermined paths without anchoring by utilizing thrusters and power systems. This capability is crucial for marine engineering operations, including oil and gas drilling, underwater pipeline installation, and cable laying. As marine operations grow more complex, traditional DP methods encounter significant challenges, such as environmental disturbances, system parameter uncertainties, and operational efficiency concerns [2].
+
+Consequently, researchers are increasingly adopting advanced control strategies to improve the performance and adaptability of DP systems. However, the highly nonlinear terms of ship dynamics and the continuously changing marine environment often cause traditional control methods to struggle under extreme conditions. Furthermore, most existing DP control strategies depend on extended control processes to achieve stability [3], which may not always be the optimal solution. Therefore, developing a control strategy that can respond quickly and complete tasks within a prescribed time is particularly crucial.
+
+With the rapid advancement of control technologies and methods in recent years, DP systems have found ever broader application in maritime operations and offshore exploration for ships and drilling platforms. For instance, in the presence of unknown ship parameters, [4] developed a robust adaptive observer for DP systems, capable of estimating ship velocities and unknown parameters under external disturbances. An adaptive observer based on neural networks (NNs) was developed in [5] to estimate the velocity of an unmanned surface vessel (USV), even though both the system parameters and the nonlinearities of the USV were presumed uncertain. NN approximation techniques are used to compensate for uncertainty and unknown external disturbances, removing the prerequisite of a priori knowledge of ship parameters and external disturbances, while MLP technology is employed to address the computational-explosion problem [6], [8]. In [7], static NNs are used for control force and moment allocation of an over-actuated ship by measuring the thruster forces and commands and gathering data for training the NNs.
+
+Since time-varying boundary functions can enforce prescribed performance of a dynamic system in both the transient and steady-state phases, [10] proposed a novel boundary function control approach and introduced an error transformation function, showing stability of the closed-loop systems with prescribed transient and steady-state performance. In the field of marine engineering operations, [11] proposed a robust adaptive prescribed performance control (RAPPC) law by constructing a concise error mapping function and achieved DP prescribed performance control. To address positioning error constraints, input saturation, and unknown external disturbances, [12] proposed a variable-gain prescribed performance control law and constructed error mapping functions to integrate the prescribed performance boundary into the controller design. Soon after, a robust fault-tolerant control allocation scheme was developed in [13] to redistribute the forces among faulty actuators; its performance function is combined with an auxiliary intermediate control technique to create a high-level controller.
+
+Inspired by the above research work, the contributions of this paper are as follows:
+
+1) Building upon the research foundation of reference [11], this article proposes an adaptive prescribed-time control scheme for the DP system of a ship with model uncertainty and unknown environmental disturbances. Unlike the initial-condition-dependent results discussed in reference [10], the construction of the fixed-time tracking performance function (FTTPF) ensures that the controller's prescribed performance no longer depends on initial conditions. Furthermore, the new dynamic errors start from an initial value of 0 and remain consistently confined within the prescribed fixed-time funnel boundaries (FTFBs).
+
+2) Based on NNs, the unknown functions in the derivative terms of the new dynamic errors and the unknown model parameters of the ship are approximated online. In addition, the adaptive parameters based on weight allocation are reduced to two to compensate for the unknown gain function. The dynamic surface control (DSC) filtering technique is introduced to address the complexity-explosion problem caused by differentiation of the virtual controller, thereby reducing the computational burden. Finally, two comparative simulations of a DP ship are executed to demonstrate the effectiveness of the proposed algorithm.
+
+§ II. MATHEMATICAL MODEL OF DYNAMICALLY POSITIONED SHIPS AND PROBLEM FORMULATION
+
+In the design of DP systems, a ship is considered a multi-input multi-output (MIMO) control system whose dynamics are influenced by mass, damping, stiffness, and external disturbances. On the basis of seakeeping and maneuvering theory, the following three-DOF nonlinear mathematical model is used to describe the dynamic behavior of the ship in the presence of disturbances [14]:
+
+$$
+\dot{\eta } = J\left( \psi \right) v \tag{1}
+$$
+
+$$
+M\dot{v} + D\left( v\right) v = \tau + {\tau }_{d} \tag{2}
+$$
+
+where $\eta = {\left\lbrack x,y,\psi \right\rbrack }^{\top } \in {\mathcal{R}}^{3}$ represents the attitude vector, including the surge position $x$ , the sway position $y$ and the heading $\psi \in \left\lbrack {0,{2\pi }}\right\rbrack$ in the earth-fixed coordinate system. $v = {\left\lbrack u,v,r\right\rbrack }^{\top } \in {\mathcal{R}}^{3}$ denotes the velocity vector of the ship in the body-fixed coordinate system, which is composed of the surge velocity $u$ , sway velocity $v$ and yaw velocity $r$ , respectively. $J\left( \psi \right)$ is the velocity transformation matrix as follows:
+
+$$
+J\left( \psi \right) = \left\lbrack \begin{matrix} \cos \left( \psi \right) & - \sin \left( \psi \right) & 0 \\ \sin \left( \psi \right) & \cos \left( \psi \right) & 0 \\ 0 & 0 & 1 \end{matrix}\right\rbrack \tag{3}
+$$
+
+with ${J}^{-1}\left( \psi \right) = {J}^{\top }\left( \psi \right)$ and $\parallel J\left( \psi \right) \parallel = 1$ . Equation (4) gives the specific expression of the positive definite symmetric inertia matrix $M \in {\mathcal{R}}^{3 \times 3}$ , which includes the added mass, and equation (5) gives the specific expression of the nonlinear hydrodynamic function $D\left( v\right) v$ .
+
+$$
+M = \left\lbrack \begin{matrix} m - {X}_{\dot{u}} & 0 & 0 \\ 0 & m - {Y}_{\dot{v}} & m{x}_{G} - {X}_{\dot{r}} \\ 0 & m{x}_{G} - {X}_{\dot{r}} & {I}_{z} - {N}_{\dot{r}} \end{matrix}\right\rbrack \tag{4}
+$$
+
+$$
+D\left( v\right) v = \left\lbrack \begin{array}{l} {D}_{1} \\ {D}_{2} \\ {D}_{3} \end{array}\right\rbrack \tag{5}
+$$
+
+$$
+{D}_{1} = - {X}_{u}u - {X}_{\left| u\right| u}\left| u\right| u + {Y}_{\dot{v}}v\left| r\right| + {Y}_{\dot{r}}{rr}
+$$
+
+$$
+{D}_{2} = - {X}_{\dot{u}}{ur} - {Y}_{v}v - {Y}_{r}r - {Y}_{\left| v\right| v}\left| v\right| v - {Y}_{\left| v\right| r}\left| v\right| r
+$$
+
+$$
+{D}_{3} = \left( {{X}_{\dot{u}} - {Y}_{\dot{v}}}\right) {uv} - {Y}_{\dot{r}}{ur} - {N}_{v}v - {N}_{r}r - {N}_{\left| v\right| v}\left| v\right| v - {N}_{\left| v\right| r}\left| v\right| r
+$$
+
+where $m$ is the ship’s mass, ${I}_{z}$ is the moment of inertia, and ${X}_{u}$ , ${X}_{\left| u\right| u},{Y}_{\dot{v}}$ , etc., are the hydrodynamic force derivatives. It is obvious from the expression in (5) that the nonlinear damping force is composed of linear and quadratic terms. In the controller design of this paper, $D\left( v\right) v$ is treated as an uncertain term whose structure and parameters are unknown; it is approximated online using NNs in a later section.
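As a quick consistency check on the kinematic model, the transformation matrix in (3) is a planar rotation, so the stated property $J^{-1}(\psi) = J^{\top}(\psi)$ can be verified numerically. A sketch, independent of any ship parameters:

```python
import math

def J(psi):
    """Velocity transformation matrix of Eq. (3)."""
    c, s = math.cos(psi), math.sin(psi)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def transpose(A):
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

psi = 0.7  # arbitrary heading in radians
P = matmul(transpose(J(psi)), J(psi))
for i in range(3):
    for j in range(3):
        # J^T J = I, i.e. the transpose is the inverse
        assert abs(P[i][j] - (1.0 if i == j else 0.0)) < 1e-12
```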
+
+$\tau = {\left\lbrack {\tau }_{u},{\tau }_{v},{\tau }_{r}\right\rbrack }^{\top } \in {\mathcal{R}}^{3}$ denotes the control inputs, i.e., the forces and moments generated by the ship’s actuators, which consist of the shaft thrusters, the tunnel thrusters, and the azimuth thrusters. To simplify the control inputs, all actuator inputs are fused into three degrees of freedom: ${\tau }_{u}$ in surge, ${\tau }_{v}$ in sway and ${\tau }_{r}$ in yaw. ${\tau }_{d} = {\left\lbrack {\tau }_{du},{\tau }_{dv},{\tau }_{dr}\right\rbrack }^{\top }$ denotes the unknown time-varying environmental disturbance induced by wind and waves.
+
+Assumption 1. The environmental disturbance ${\tau }_{d\upsilon }$ is bounded in the marine environment; that is, there exists a bound ${\bar{\tau }}_{d\upsilon } > 0$ such that $\left| {\tau }_{d\upsilon }\right| < {\bar{\tau }}_{d\upsilon }$ .
+
+Remark 1. When modeling ship DP systems, it is often necessary to accurately characterize and predict the effects of environmental disturbances on the ship. In order to simplify the model and to facilitate the design and testing of control algorithms, these environmental disturbances can be approximated and modeled using a sine-cosine function. The frequency, amplitude and phase of the interference can be easily adjusted using the sine-cosine function to simulate different intensities and types of environmental conditions.
+
+In the setting of unknown time-varying disturbances and model uncertainty, the control goal is to find a control law $\tau$ that makes the ship’s position $(x, y)$ and heading $\psi$ reach the desired position ${\eta }_{d}$ within the prescribed time. At the same time, the constructed zero-initial-value error function must converge within the set boundaries by the settling time with arbitrarily small errors, and all errors must remain bounded at all times.
+
+§ III. FUNNEL CONTROL AND FUNNEL VARIABLE
+
+In the context of advanced control strategies for DP systems, particularly those addressing strict timing requirements, the concepts of FTFBs and FTTPF are integral. These are designed to ensure that the control system adheres to performance metrics strictly within a settling interval, regardless of initial conditions. In this section, the definitions of FTFBs and FTTPF are introduced for the purpose of imposing error bounds on them and constructing new error functions.
+
+§ A.THE DESIGN OF PRESCRIBED-TIME FUNNEL BOUNDARY
+
+Definition 1. [15] FTFBs define the permissible bounds within which the system's states must remain over time. These boundaries contract over a fixed-time period, ensuring that the system's behavior converges to the desired state within the settling duration. They are particularly useful in scenarios where rapid and reliable system stabilization is crucial.
+
+Equation (6) is selected as an FTFB with the following traits: (1) $\Gamma \left( t\right) > 0$ and $\dot{\Gamma }\left( t\right) \leq 0$ ; (2) $\mathop{\lim }\limits_{{t \rightarrow {T}_{j}}}\Gamma \left( t\right) = {\Gamma }_{jT}$ ; (3) $\Gamma \left( t\right) = {\Gamma }_{jT}$ for $\forall t \geq {T}_{j}$ , with ${T}_{j}$ being the predefined fixed time after which the boundary ceases contracting.
+
+$$
+{\Gamma }_{jv} = \left\{ \begin{array}{ll} {\Gamma }_{jv0}\tanh \left( \frac{{\lambda }_{j}t}{t - {T}_{jv}}\right) + {\Gamma }_{jv0} + {\Gamma }_{jvT}, & t \in \left\lbrack {0,{T}_{jv}}\right) \\ {\Gamma }_{jvT}, & t \in \left\lbrack {{T}_{jv},\infty }\right) \end{array}\right. \tag{6}
+$$
+
+where ${\Gamma }_{jv0}$ and ${\Gamma }_{jvT}$ are the initial and final boundary values, ${\lambda }_{j}$ is the decay rate, $j = 1,2$ , and ${T}_{jv}$ is the predefined fixed time after which the boundary ceases contracting.
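A direct implementation of the boundary (6) makes its contraction behavior easy to inspect; a sketch with illustrative constants:

```python
import math

def ftfb(t, g0, gT, lam, T):
    """Fixed-time funnel boundary of Eq. (6): Gamma_{jv0}=g0, Gamma_{jvT}=gT,
    lambda_j=lam, T_{jv}=T."""
    if t >= T:
        return gT
    return g0 * math.tanh(lam * t / (t - T)) + g0 + gT

g0, gT, lam, T = 1.5, 0.1, 1.0, 8.0  # illustrative values only
vals = [ftfb(0.1 * k, g0, gT, lam, T) for k in range(100)]
assert abs(vals[0] - (g0 + gT)) < 1e-12             # starts at g0 + gT
assert all(a >= b for a, b in zip(vals, vals[1:]))  # monotonically contracting
assert ftfb(T, g0, gT, lam, T) == gT                # reaches Gamma_{jvT} at T_{jv}
assert ftfb(T + 5.0, g0, gT, lam, T) == gT          # and stays there afterwards
```

Note that the argument $\lambda_j t/(t - T_{jv})$ runs from $0$ to $-\infty$ on $[0, T_{jv})$, which is what drives the boundary smoothly down to its terminal value.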
+
+Definition 2. [16] An FTTPF is a function designed to evaluate and ensure the system's tracking performance over a fixed time, dictating how the tracking error should decrease over time to meet specific performance criteria by a predefined deadline.
+
+$$
+{\varphi }_{v}\left( t\right) = \left\{ \begin{array}{ll} {e}^{-\frac{{k}_{v}t}{{T}_{fv} - t}}, & t \in \left\lbrack {0,{T}_{fv}}\right) \\ 0, & t \in \left\lbrack {{T}_{fv},\infty }\right) \end{array}\right. \tag{7}
+$$
+
+Equation (7) is concretely constructed as an FTTPF with the following properties: (1) $\varphi \left( 0\right) = 1$ ; (2) $\mathop{\lim }\limits_{{t \rightarrow {T}_{fv}}}\varphi \left( t\right) = 0$ and $\varphi \left( t\right) = 0$ for $\forall t \geq {T}_{fv}$ , with ${T}_{fv}$ being a prescribed settling time. ${\Gamma }_{jv0},{\Gamma }_{jvT},{\lambda }_{j},{T}_{jv},{T}_{fv}$ and ${k}_{v}$ are positive constants.
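Similarly, (7) can be sketched directly; its two properties ($\varphi(0) = 1$, and $\varphi \equiv 0$ after $T_{fv}$) follow immediately. Constants are illustrative:

```python
import math

def fttpf(t, k, Tf):
    """Fixed-time tracking performance function of Eq. (7): k_v=k, T_{fv}=Tf."""
    if t >= Tf:
        return 0.0
    return math.exp(-k * t / (Tf - t))

k, Tf = 1.0, 5.0  # illustrative values only
assert fttpf(0.0, k, Tf) == 1.0                              # property (1)
assert fttpf(Tf, k, Tf) == 0.0 and fttpf(9.0, k, Tf) == 0.0  # property (2)
vals = [fttpf(0.05 * i, k, Tf) for i in range(120)]
assert all(a >= b for a, b in zip(vals, vals[1:]))           # non-increasing decay
```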
+
+§ B. FUNNEL ERROR TRANSFORMATION
+
+In this paper, by embedding the FTTPF ${\varphi }_{v}\left( t\right)$ , we construct a new error variable $\chi \left( t\right)$ with zero initial value, as in (9).
+
+$$
+{z}_{1} = \eta - {\eta }_{d} \tag{8}
+$$
+
+$$
+\chi \left( t\right) = {z}_{1}\left( t\right) - {z}_{1}\left( 0\right) {\varphi }_{v}\left( t\right) = \eta - {\eta }_{d} - {z}_{1}\left( 0\right) {\varphi }_{v}\left( t\right) \tag{9}
+$$
+
+Then, ${\Gamma }_{j\upsilon }, j = 1,2$ , is applied to ensure that the following symmetric performance constraints on $\chi \left( t\right)$ are satisfied.
+
+$$
+- {\Gamma }_{1v} < \chi \left( t\right) < {\Gamma }_{2v} \tag{10}
+$$
+
+where ${\eta }_{d} = {\left\lbrack {x}_{d},{y}_{d},{\psi }_{d}\right\rbrack }^{\top }$ represents the desired position of the ship DP system. Besides, to simplify the design of the controller, ${T}_{1v} = {T}_{2v}$ is adopted in this paper. In accordance with the definition of $\chi \left( t\right)$ and the requirements of (9), $\chi \left( 0\right) = {z}_{1}\left( 0\right) - {z}_{1}\left( 0\right) {\varphi }_{v}\left( 0\right) = 0$ guarantees that the initial constraint $- {\Gamma }_{1\upsilon }\left( 0\right) < \chi \left( 0\right) < {\Gamma }_{2\upsilon }\left( 0\right) \Leftrightarrow - {\Gamma }_{1\upsilon }\left( 0\right) + {z}_{1}\left( 0\right) < {z}_{1}\left( 0\right) < {\Gamma }_{2v}\left( 0\right) + {z}_{1}\left( 0\right)$ is always satisfied, which implies that ${\Gamma }_{1v}$ and ${\Gamma }_{2v}$ no longer need to be redesigned to preserve the zero-initial-value property of the new error.
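The zero-initial-value property can be confirmed mechanically: because $\varphi_v(0) = 1$, the new error (9) vanishes at $t = 0$ for any initial tracking error. A scalar sketch (the FTTPF constants are illustrative):

```python
import math

def fttpf(t, k=1.0, Tf=5.0):
    """FTTPF of Eq. (7), illustrative constants k_v=1, T_{fv}=5."""
    return math.exp(-k * t / (Tf - t)) if t < Tf else 0.0

def chi(t, z1_t, z1_0):
    """New error variable of Eq. (9): chi(t) = z1(t) - z1(0) * phi_v(t)."""
    return z1_t - z1_0 * fttpf(t)

# chi(0) = 0 regardless of z1(0), so the funnel constraint (10) holds at t = 0
# without any retuning of Gamma_1v or Gamma_2v.
for z1_0 in (-3.0, 0.0, 7.5):
    assert chi(0.0, z1_0, z1_0) == 0.0
```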
+
+By introducing the constructed ${\Gamma }_{1v}$ and ${\Gamma }_{2v}$ , the maximum overshoot, settling time, and steady-state boundaries of $\chi \left( t\right)$ can be set through $\max \left\{ {{\Gamma }_{1\mathrm{v}0} + {\Gamma }_{1\mathrm{{vT}}},{\Gamma }_{2\mathrm{v}0} + {\Gamma }_{2\mathrm{{vT}}}}\right\}$ , ${T}_{jv}$ and ${\Gamma }_{jvT}$ , respectively. The evolution of ${z}_{1}\left( t\right)$ is thereby preassigned over $\left\lbrack {0,{T}_{fv}}\right)$ , since $- {\Gamma }_{1v}\left( t\right) + {z}_{1}\left( 0\right) {\varphi }_{v}\left( t\right) < {z}_{1}\left( t\right) < {\Gamma }_{2v}\left( t\right) + {z}_{1}\left( 0\right) {\varphi }_{v}\left( t\right)$ for $\forall t \in \left\lbrack {0,{T}_{fv}}\right)$ . From the above analysis, (10) can be reformulated as:
+
+$$
+- {\Gamma }_{1}\left( t\right) < \chi \left( t\right) = {z}_{1}\left( t\right) - {z}_{1}\left( 0\right) {\Phi }_{1} < {\Gamma }_{2}\left( t\right) ,\forall t \geq 0 \tag{11}
+$$
+
+where ${\Gamma }_{1} = {\left\lbrack \begin{array}{lll} {\Gamma }_{1u}, & {\Gamma }_{1v}, & {\Gamma }_{1r} \end{array}\right\rbrack }^{\top },{\Gamma }_{2} = {\left\lbrack \begin{array}{lll} {\Gamma }_{2u}, & {\Gamma }_{2v}, & {\Gamma }_{2r} \end{array}\right\rbrack }^{\top }$ and ${\Phi }_{1} = {\left\lbrack \begin{array}{lll} {\varphi }_{u}, & {\varphi }_{v}, & {\varphi }_{r} \end{array}\right\rbrack }^{\top }$ .
+
+Although existing funnel control (FC) results can tune the transient and steady-state responses of ${z}_{1}$ , they rely on specific initial conditions. To solve this problem, inspired by [17], we introduce the following variable transformation:
+
+$$
+\vartheta \left( t\right) = \chi \left( t\right) + \mu \left( t\right) \tag{12}
+$$
+
+with
+
+$$
+\mu \left( t\right) = \left( {{\Gamma }_{1}\left( t\right) - {\Gamma }_{2}\left( t\right) }\right) /2,\omega \left( t\right) = \left( {{\Gamma }_{1}\left( t\right) + {\Gamma }_{2}\left( t\right) }\right) /2 \tag{13}
+$$
+
+From (12) and (13), (11) is equivalent to
+
+$$
+- \omega \left( t\right) < \vartheta \left( t\right) < \omega \left( t\right) \tag{14}
+$$
+
To improve control performance and achieve the control objectives, the funnel error transformation (15) is applied.
+
+$$
+{\xi }_{1}\left( t\right) = \frac{\vartheta \left( t\right) }{\sqrt{{\omega }^{2}\left( t\right) - {\vartheta }^{2}\left( t\right) }} \tag{15}
+$$
+
Differentiating (15) yields ${\dot{\xi }}_{1}$ :
+
+$$
+{\dot{\xi }}_{1}\left( t\right) = {\Phi }_{2}\left( {\dot{\eta } - {\dot{\eta }}_{d} - {z}_{1}\left( 0\right) {\dot{\Phi }}_{1}\left( t\right) + \dot{\mu }\left( t\right) - \vartheta \left( t\right) \dot{\omega }\left( t\right) /\omega \left( t\right) }\right)
+$$
+
+(16)
+
where ${\Phi }_{2} = {\omega }^{2}\left( t\right) /\sqrt{{\left( {\omega }^{2}\left( t\right) - {\vartheta }^{2}\left( t\right) \right) }^{3}} > 0$ . Since the full expression of ${\dot{\xi }}_{1}$ is complex, NNs are employed to approximate its uncertain terms. In the subsequent formulations, function arguments are omitted to simplify the presentation and improve readability.
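To make the mapping (12)-(15) concrete, the following scalar sketch evaluates the funnel error numerically. The boundary functions below are hypothetical exponentially decaying envelopes standing in for the paper's FTFBs ${\Gamma }_{1}$ and ${\Gamma }_{2}$ , which are defined earlier in the manuscript:

```python
import math

# Scalar illustration of the funnel transformation (12)-(15).
# gamma1/gamma2 are hypothetical decaying envelopes, not the paper's FTFBs.
def gamma1(t):
    return 2.0 * math.exp(-0.5 * t) + 0.1   # lower boundary Gamma_1(t)

def gamma2(t):
    return 1.5 * math.exp(-0.5 * t) + 0.1   # upper boundary Gamma_2(t)

def funnel_error(chi, t):
    """Map the constrained error chi(t) to the unconstrained xi_1(t)."""
    mu = (gamma1(t) - gamma2(t)) / 2.0       # shift mu(t), Eq. (13)
    omega = (gamma1(t) + gamma2(t)) / 2.0    # half-width omega(t), Eq. (13)
    vartheta = chi + mu                      # shifted error, Eq. (12)
    assert abs(vartheta) < omega, "error left the funnel"
    return vartheta / math.sqrt(omega**2 - vartheta**2)   # Eq. (15)

# xi_1 blows up as chi approaches either boundary, so keeping xi_1
# bounded enforces -Gamma_1(t) < chi(t) < Gamma_2(t).
for chi in (-1.5, 0.0, 1.0):
    print(chi, funnel_error(chi, t=0.0))
```

The division by $\sqrt{{\omega }^{2} - {\vartheta }^{2}}$ is what turns the hard constraint (14) into the soft requirement that ${\xi }_{1}$ stay bounded.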
+
+§ IV. ADAPTIVE PT FUNNEL CONTROL DESIGN FOR DYNAMIC POSITIONED SHIPS
+
In this section, adaptive parameters are introduced via NNs for online approximation of the uncertainty terms arising during the controller design. The backstepping method is used to design the virtual controller ${\alpha }_{v}$ and the control law $\tau$ for the second-order ship motion model (1) and (2). The DSC technique is applied to avoid the complexity of differentiating ${\alpha }_{v}$ . The design procedure consists of two steps, for the attitude and velocity subsystems. The controller design is detailed in IV-A, and the corresponding stability analysis in IV-B.
+
+§ A. CONTROLLER DESIGN
+
Step 1: In the ship's DP system, the reference attitude signal ${\eta }_{d}$ is constant, so ${\dot{\eta }}_{d} = 0$ . In the derivative (16) of the transformed error ${\xi }_{1}$ , the term ${\Phi }_{2}\left( {-{z}_{1}\left( 0\right) {\dot{\Phi }}_{1}\left( t\right) + \dot{\mu }\left( t\right) - \vartheta \left( t\right) \dot{\omega }\left( t\right) /\omega \left( t\right) }\right)$ is an unknown function vector. It can be approximated as (17) by using the RBF-NNs ${F}_{1}\left( \eta \right)$ .
+
+$$
{F}_{1}\left( \eta \right) = {\Phi }_{2}\left( {-{z}_{1}\left( 0\right) {\dot{\Phi }}_{1}\left( t\right) + \dot{\mu }\left( t\right) - \vartheta \left( t\right) \dot{\omega }\left( t\right) /\omega \left( t\right) }\right)
+$$
+
+$$
+= {S}_{1}\left( \eta \right) {A}_{1}\eta + {\varepsilon }_{\eta }
+$$
+
+$$
= \left\lbrack \begin{matrix} {s}_{x}\left( \eta \right) & 0 & 0 \\ 0 & {s}_{y}\left( \eta \right) & 0 \\ 0 & 0 & {s}_{\psi }\left( \eta \right) \end{matrix}\right\rbrack \left\lbrack \begin{array}{l} {A}_{x} \\ {A}_{y} \\ {A}_{\psi } \end{array}\right\rbrack \left\lbrack \begin{array}{l} x \\ y \\ \psi \end{array}\right\rbrack + \left\lbrack \begin{array}{l} {\varepsilon }_{x} \\ {\varepsilon }_{y} \\ {\varepsilon }_{\psi } \end{array}\right\rbrack \tag{17}
$$
+
where ${\varepsilon }_{\eta }$ is the corresponding bounded approximation error vector, and ${s}_{x}\left( \eta \right) = {s}_{y}\left( \eta \right) = {s}_{\psi }\left( \eta \right)$ since these RBF functions share the same input vector $\eta$ . Let ${\theta }_{1} = {\begin{Vmatrix}{A}_{1}\eta \end{Vmatrix}}^{2}$ , and let ${\widehat{\theta }}_{1}$ denote the estimate of ${\theta }_{1}$ . From the above analysis, the virtual controller ${\alpha }_{v}$ is chosen as (18).
+
+$$
+{\alpha }_{v} = - \frac{1}{{\Phi }_{2}J\left( \psi \right) }\left( {{k}_{1}{\xi }_{1} + \frac{{S}_{1}{}^{T}{S}_{1}{\widehat{\theta }}_{1}}{2{a}_{1}{}^{2}}{\xi }_{1} + \frac{1}{4}{\begin{Vmatrix}{\Phi }_{2}\end{Vmatrix}}^{2}{\xi }_{1}}\right) \tag{18}
+$$
+
where ${k}_{1}$ is a strictly positive diagonal matrix of parameters. The DSC technique, i.e., the first-order low-pass filter (19), is applied here because the derivative of ${\alpha }_{v}$ is difficult to obtain and complex in form.
+
+$$
+{t}_{v}{\dot{\beta }}_{v} + {\beta }_{v} = {\alpha }_{v},{\beta }_{v}\left( 0\right) = {\alpha }_{v}\left( 0\right) \tag{19}
+$$
+
${t}_{v}$ is a constant diagonal matrix of filter time constants; the filter maps the virtual control ${\alpha }_{v}$ to the output ${\beta }_{v}$ , which serves as the velocity reference in the second step. Defining the error vectors ${q}_{v} = {\left\lbrack {q}_{u},{q}_{v},{q}_{r}\right\rbrack }^{\top } = {\alpha }_{v} - {\beta }_{v}$ and ${z}_{2} = {\beta }_{v} - v$ , the derivative of ${q}_{v}$ is obtained from (18) and (19).
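A minimal numerical sketch of the filter (19), assuming a single scalar channel, forward-Euler integration, and a hypothetical smooth command standing in for ${\alpha }_{v}\left( t\right)$ :

```python
import math

# Forward-Euler sketch of the DSC filter (19): t_v * beta_dot + beta = alpha,
# for one scalar channel; alpha(t) is a hypothetical smooth virtual control.
t_v, dt, T = 0.05, 1e-3, 2.0

alpha = lambda t: math.sin(2.0 * t)        # stand-in for alpha_v(t)
beta = alpha(0.0)                          # beta_v(0) = alpha_v(0)
q_max = 0.0
for k in range(int(T / dt)):
    t = k * dt
    q = alpha(t) - beta                    # boundary-layer error q_v
    q_max = max(q_max, abs(q))
    beta += dt * (alpha(t) - beta) / t_v   # Euler step of (19)

# beta_v supplies the Step-2 velocity reference without differentiating
# alpha_v analytically; q_v shrinks as the time constant t_v is reduced.
print("max |q_v| =", q_max)
```

This is precisely the appeal of DSC: the filter output replaces the analytic derivative of the virtual control, at the cost of the small boundary-layer error ${q}_{v}$ handled in the stability analysis.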
+
+$$
+{\dot{q}}_{v} = - {\dot{\beta }}_{v} + {\dot{\alpha }}_{v}
+$$
+
+$$
= - {t}_{v}^{-1}{q}_{v} + {B}_{v}\left( {{z}_{1},{\dot{z}}_{1},\psi ,r,{\widehat{\theta }}_{1},{\dot{\widehat{\theta }}}_{1}}\right) \tag{21}
+$$
+
where ${B}_{v} = {\left\lbrack {B}_{u}\left( \cdot \right) ,{B}_{v}\left( \cdot \right) ,{B}_{r}\left( \cdot \right) \right\rbrack }^{\top }$ is a vector of three bounded continuous functions. Moreover, there exists an unknown positive vector ${\bar{B}}_{v} = {\left\lbrack {\bar{B}}_{u}\left( \cdot \right) ,{\bar{B}}_{v}\left( \cdot \right) ,{\bar{B}}_{r}\left( \cdot \right) \right\rbrack }^{\top }$ such that $\left| {B}_{v}\right| \leq {\bar{B}}_{v}$ .
+
Step 2: Taking the time derivative of ${z}_{2}$ along (2) and (19) yields (22).
+
+$$
+{\dot{z}}_{2} = {\dot{\beta }}_{v} - \dot{v} = {M}^{-1}\left( {M{\dot{\beta }}_{v} + D\left( v\right) v - \tau - {\tau }_{d}}\right) \tag{22}
+$$
+
+It is noted that $D\left( v\right) v$ is the uncertain term in the dynamic positioning system. Similar to the treatment of the unknown function vector in the first step, RBF-NNs are used to approximate this uncertainty term as follows:
+
+$$
+{F}_{2}\left( {v,{A}_{2}}\right) = {S}_{2}\left( v\right) {A}_{2}v + {\varepsilon }_{v}
+$$
+
+$$
= \left\lbrack \begin{matrix} {s}_{u}\left( v\right) & 0 & 0 \\ 0 & {s}_{v}\left( v\right) & 0 \\ 0 & 0 & {s}_{r}\left( v\right) \end{matrix}\right\rbrack \left\lbrack \begin{array}{l} {A}_{u} \\ {A}_{v} \\ {A}_{r} \end{array}\right\rbrack \left\lbrack \begin{array}{l} u \\ v \\ r \end{array}\right\rbrack + \left\lbrack \begin{array}{l} {\varepsilon }_{u} \\ {\varepsilon }_{v} \\ {\varepsilon }_{r} \end{array}\right\rbrack \tag{23}
$$
+
In (23), the output vector ${F}_{2} = \left\lbrack \begin{array}{lll} {f}_{2}\left( u\right) , & {f}_{2}\left( v\right) , & {f}_{2}\left( r\right) \end{array}\right\rbrack$ contains three components corresponding to the $u,v,r$ velocity components. Let ${\theta }_{2} = {\begin{Vmatrix}{A}_{2}v\end{Vmatrix}}^{2}$ , where ${\widehat{\theta }}_{2}$ represents the estimate of ${\theta }_{2}$ . The application of RBF-NNs simplifies the design of the subsequent controller and adaptive laws, while reducing the computational complexity of the algorithms to enhance control performance.
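The structure of the approximators (17) and (23) can be sketched for one scalar channel as follows; the 25 evenly spaced Gaussian centers match the simulation setup later in the paper, while the common width is an illustrative assumption:

```python
import math

# One-channel sketch of the Gaussian RBF basis s(.) used in (17) and (23).
# 25 evenly spaced centers follow the simulation setup; the width is assumed.
centers = [-2.5 + 5.0 * i / 24 for i in range(25)]
width = 0.5

def rbf(v):
    """Row vector s(v) of Gaussian basis functions for a scalar input v."""
    return [math.exp(-((v - c) ** 2) / (2.0 * width**2)) for c in centers]

# Norm-form trick: rather than estimating the full weight vector A, only the
# scalar theta = ||A v||^2 is adapted, so the feedback terms
# S^T S * theta_hat / (2 a^2) in (18) and (27) need only s(v)^T s(v).
s = rbf(0.3)
sTs = sum(x * x for x in s)
print("s(0.3)^T s(0.3) =", sTs)
```

Adapting the single scalar $\theta = {\begin{Vmatrix}Av\end{Vmatrix}}^{2}$ per channel instead of the full weight vector is what keeps the controller and adaptive laws computationally light.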
+
In the derivation of formulas involving NNs, three key applications of Young's inequality are highlighted below:
+
+$$
+{\Phi }_{2}{\xi }_{1}{F}_{1} \leq {\xi }_{1}\left( {{S}_{1}{A}_{1}\eta + {\varepsilon }_{\eta }}\right)
+$$
+
+$$
+\leq \frac{{S}_{1}{}^{T}{S}_{1}{\begin{Vmatrix}{A}_{1}\eta \end{Vmatrix}}^{2}}{2{a}_{1}{}^{2}}{\xi }_{1}{}^{2} + \frac{1}{2}{a}_{1}{}^{2} + {\xi }_{1}{}^{2} + \frac{1}{4}{\varepsilon }_{\eta }{}^{2}
+$$
+
+(24)
+
+$$
+{z}_{2}{F}_{2} \leq {z}_{2}\left( {{S}_{2}{A}_{2}v + {\varepsilon }_{v}}\right)
+$$
+
+$$
+\leq \frac{{S}_{2}{}^{T}{S}_{2}{\begin{Vmatrix}{A}_{2}v\end{Vmatrix}}^{2}}{2{a}_{2}{}^{2}}{z}_{2}{}^{2} + \frac{1}{2}{a}_{2}{}^{2} + {z}_{2}{}^{2} + \frac{1}{2}{\varepsilon }_{v}{}^{2}
+$$
+
+(25)
+
+$$
+- {\Phi }_{2}J\left( \psi \right) {\xi }_{1}{q}_{v} \leq {\begin{Vmatrix}{q}_{v}\end{Vmatrix}}^{2} + \frac{1}{4}{\begin{Vmatrix}{\Phi }_{2}\end{Vmatrix}}^{2}{\xi }_{1}{}^{2} \tag{26}
+$$
+
Based on the above analysis, (27) is chosen as the control input $\tau$ of the ship dynamic positioning system. Equations (28) and (29) give the adaptive laws for ${\widehat{\theta }}_{1}$ and ${\widehat{\theta }}_{2}$ .
+
+$$
+\tau = {k}_{2}{z}_{2} + {\dot{\beta }}_{v} + \frac{{S}_{2}{}^{T}{S}_{2}{\widehat{\theta }}_{2}}{2{a}_{2}{}^{2}}{z}_{2} + {\Phi }_{2}J\left( \psi \right) {\xi }_{1} \tag{27}
+$$
+
+$$
{\dot{\widehat{\theta }}}_{1} = \frac{{\gamma }_{1}{S}_{1}^{\top }{S}_{1}{\xi }_{1}^{\top }{\xi }_{1}}{2{a}_{1}{}^{2}} - {\varsigma }_{1}{\widehat{\theta }}_{1} \tag{28}
+$$
+
+$$
{\dot{\widehat{\theta }}}_{2} = \frac{{\gamma }_{2}{S}_{2}{}^{\top }{S}_{2}{z}_{2}^{\top }{z}_{2}}{2{a}_{2}{}^{2}} - {\varsigma }_{2}{\widehat{\theta }}_{2} \tag{29}
+$$
+
where ${k}_{2}$ is a strictly positive diagonal parameter matrix and ${a}_{1}$ , ${a}_{2},{\gamma }_{1},{\gamma }_{2},{\varsigma }_{1}$ and ${\varsigma }_{2}$ are positive design constants. The designed controller has a very simple form, which significantly reduces the computational load and memory usage. Next, the semi-global uniformly ultimately bounded (SGUUB) stability of the DP system under the proposed algorithm is demonstrated through a stability analysis.
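A scalar Euler sketch of the adaptive law (28) illustrates the role of the leakage (sigma-modification) term; the signals standing in for ${\xi }_{1}$ and ${S}_{1}^{\top }{S}_{1}$ are placeholders, while ${\gamma }_{1}$ , ${a}_{1}$ and ${\varsigma }_{1}$ take the values later listed in (45):

```python
import math

# Scalar Euler sketch of the adaptive law (28); xi1(t) and S^T S are
# placeholder signals, gamma_1, a_1, varsigma_1 follow (45).
gamma1, a1, sigma1 = 0.5, 80.0, 0.5
dt = 1e-3
theta1_hat = 0.0
for k in range(5000):
    t = k * dt
    xi1 = 0.2 * math.exp(-0.5 * t)   # hypothetical decaying funnel error
    StS = 1.0                        # stand-in for S_1^T S_1 at the current state
    theta1_dot = gamma1 * StS * xi1**2 / (2.0 * a1**2) - sigma1 * theta1_hat
    theta1_hat += dt * theta1_dot

# The leakage term -varsigma_1 * theta1_hat keeps the estimate bounded
# even without persistent excitation.
print("theta1_hat =", theta1_hat)
```

The same update pattern applies to (29) with ${z}_{2}$ in place of ${\xi }_{1}$ .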
+
+§ B. STABILITY ANALYSIS
+
+Select the Lyapunov function as following:
+
+$$
+V = \frac{1}{2}{\xi }_{1}^{\top }{\xi }_{1} + \frac{1}{2}{z}_{2}^{\top }M{z}_{2} + \frac{1}{2}{q}_{v}^{\top }{q}_{v} + \frac{1}{2{\gamma }_{1}}{\widetilde{\theta }}_{1}^{\top }{\widetilde{\theta }}_{1} + \frac{1}{2{\gamma }_{2}}{\widetilde{\theta }}_{1}^{\top }{\widetilde{\theta }}_{1}
+$$
+
+(30)where ${\widetilde{\theta }}_{1} = {\widehat{\theta }}_{1} - {\theta }_{1}$ , and ${\widetilde{\theta }}_{2} = {\widehat{\theta }}_{2} - {\theta }_{2}$ . By considering $\vartheta \left( t\right) /\sqrt{{\omega }^{2}\left( t\right) - {\vartheta }^{2}\left( t\right) }$ and ${z}_{2} = {\beta }_{v} - v$ , the time derivative of $V$ is expressed as:
+
+$$
\dot{V} = {\xi }_{1}^{\top }{\dot{\xi }}_{1} + {z}_{2}{}^{\top }M{\dot{z}}_{2} + {q}_{v}{}^{\top }{\dot{q}}_{v} + \frac{1}{{\gamma }_{1}}{\widetilde{\theta }}_{1}^{\top }{\dot{\widehat{\theta }}}_{1} + \frac{1}{{\gamma }_{2}}{\widetilde{\theta }}_{2}^{\top }{\dot{\widehat{\theta }}}_{2} \tag{31}
+$$
+
+According to (24),(26), $\parallel J\left( \psi \right) \parallel = 1$ and Young’s inequality, it is obtained that
+
+$$
+{\xi }_{1}^{\top }{\dot{\xi }}_{1} = {\xi }_{1}^{\top }\left\lbrack {{\Phi }_{2}J\left( \psi \right) \left( {{\alpha }_{v} - \left( {{z}_{2} - {q}_{v}}\right) }\right) + {S}_{1}{A}_{1}\eta + {\varepsilon }_{\eta }}\right\rbrack
+$$
+
+$$
+= {\xi }_{1}^{\top }\left\{ {{\Phi }_{2}J\left( \psi \right) \left( {-\frac{1}{{\Phi }_{2}}J{\left( \psi \right) }^{-1}\left( {{k}_{1}{\xi }_{1} + \frac{{S}_{1}^{\top }{S}_{1}{\widehat{\theta }}_{1}}{2{a}_{1}{}^{2}}{\xi }_{1}}\right. }\right. }\right.
+$$
+
+$$
+\left. \left. {\left. {+\frac{1}{4}{\begin{Vmatrix}{\Phi }_{2}\end{Vmatrix}}^{2}{\xi }_{1}}\right) - \left( {{z}_{2} - {q}_{v}}\right) }\right) \right\} + {\xi }_{1}^{\top }{S}_{1}{A}_{1}\eta + {\xi }_{1}^{\top }{\varepsilon }_{\eta }
+$$
+
+$$
+\leq {\xi }_{1}{}^{T}\left\{ {-{k}_{1}{\xi }_{1} - \frac{{S}_{1}{}^{T}{S}_{1}{\widehat{\theta }}_{1}}{2{a}_{1}{}^{2}}{\xi }_{1} - \frac{1}{4}{\begin{Vmatrix}{\Phi }_{2}\end{Vmatrix}}^{2}{\xi }_{1}}\right\}
+$$
+
+$$
+- {\xi }_{1}^{\top }{\Phi }_{2}J\left( \psi \right) {z}_{2} - {\xi }_{1}^{\top }{\Phi }_{2}J\left( \psi \right) {q}_{v} + \frac{{S}_{1}^{\top }{S}_{1}{\begin{Vmatrix}{A}_{1}\eta \end{Vmatrix}}^{2}}{2{a}_{1}{}^{2}}{\xi }_{1}{}^{\top }{\xi }_{1}
+$$
+
+$$
++ \frac{1}{2}{a}_{1}{}^{2} + {\xi }_{1}{}^{\top }{\xi }_{1} + \frac{1}{4}{\varepsilon }_{\eta }{}^{2}
+$$
+
+$$
+\leq - {k}_{1}{\xi }_{1}^{\top }{\xi }_{1} + \frac{{S}_{1}^{\top }{S}_{1}\left( {{\theta }_{1} - {\widehat{\theta }}_{1}}\right) }{2{a}_{1}{}^{2}}{\xi }_{1}{}^{\top }{\xi }_{1} - \frac{1}{4}{\begin{Vmatrix}{\Phi }_{2}\end{Vmatrix}}^{2}{\xi }_{1}{}^{\top }{\xi }_{1}
+$$
+
+$$
+- {\xi }_{1}^{\top }{\Phi }_{2}J\left( \psi \right) {z}_{2} + {\begin{Vmatrix}{q}_{v}\end{Vmatrix}}^{2} + \frac{1}{4}{\begin{Vmatrix}{\Phi }_{2}\end{Vmatrix}}^{2}{\xi }_{1}^{\top }{\xi }_{1} + {\xi }_{1}^{\top }{\xi }_{1}
+$$
+
+$$
++ \frac{1}{4}{\varepsilon }_{\eta }{}^{2} + \frac{1}{2}{a}_{1}{}^{2}
+$$
+
+$$
+\leq - {k}_{1}{\xi }_{1}^{\top }{\xi }_{1} - \frac{{S}_{1}{}^{T}{S}_{1}{\widetilde{\theta }}_{1}}{2{a}_{1}{}^{2}}{\xi }_{1}{}^{\top }{\xi }_{1} - {\xi }_{1}{}^{\top }{\Phi }_{2}J\left( \psi \right) {z}_{2}
+$$
+
+$$
++ {\begin{Vmatrix}{q}_{v}\end{Vmatrix}}^{2} + {\xi }_{1}^{\top }{\xi }_{1} + \frac{1}{4}{\varepsilon }_{\eta }{}^{2} + \frac{1}{2}{a}_{1}{}^{2} \tag{32}
+$$
+
+In view of (22),(23),(25) and (27), $\parallel J\left( \psi \right) \parallel = 1$ and Young's inequality, it follows that
+
+$$
+{z}_{2}^{\top }M{\dot{z}}_{2} = {z}_{2}^{\top }M\left\lbrack {{M}^{-1}\left( {M{\dot{\beta }}_{v} + {Dv} - \tau - {\tau }_{d}}\right) }\right\rbrack
+$$
+
+$$
+= {z}_{2}^{\top }\left\lbrack {M{\dot{\beta }}_{v} + {F}_{2} - \left( {{k}_{2}{z}_{2} + {\dot{\beta }}_{v} + \frac{{S}_{2}^{T}{S}_{2}{\widehat{\theta }}_{2}}{2{a}_{2}{}^{2}}{z}_{2}}\right) }\right.
+$$
+
+$$
\left. {-{\Phi }_{2}J\left( \psi \right) {\xi }_{1} - {\tau }_{d}}\right\rbrack
+$$
+
+$$
\leq {z}_{2}^{\top }\left( {M - I}\right) {\dot{\beta }}_{v} + \frac{{S}_{2}^{\top }{S}_{2}{\theta }_{2}}{2{a}_{2}{}^{2}}{z}_{2}{}^{\top }{z}_{2} + \frac{1}{2}{a}_{2}{}^{2} + {z}_{2}{}^{\top }{z}_{2}
+$$
+
+$$
+ \frac{1}{2}{\varepsilon }_{v}{}^{2} - {k}_{2}{z}_{2}{}^{\top }{z}_{2} - \frac{{S}_{2}{}^{\top }{S}_{2}{\widehat{\theta }}_{2}}{2{a}_{2}{}^{2}}{z}_{2}{}^{\top }{z}_{2} + {\Phi }_{2}J\left( \psi \right) {z}_{2}{}^{\top }{\xi }_{1} - {z}_{2}{}^{\top }{\tau }_{d}
+$$
+
+$$
\leq {z}_{2}^{\top }\left( {M - I}\right) {\dot{\beta }}_{v} - \frac{{S}_{2}^{\top }{S}_{2}{\widetilde{\theta }}_{2}}{2{a}_{2}{}^{2}}{z}_{2}{}^{\top }{z}_{2} + \frac{1}{2}{a}_{2}{}^{2} + {z}_{2}{}^{\top }{z}_{2}
+$$
+
+$$
+ \frac{1}{2}{\varepsilon }_{v}{}^{2} - {k}_{2}{z}_{2}{}^{\top }{z}_{2} + {\Phi }_{2}J\left( \psi \right) {z}_{2}{}^{\top }{\xi }_{1} - {z}_{2}{}^{\top }{\tau }_{d} \tag{33}
+$$
+
+It is worth noticing that
+
+$$
{z}_{2}^{\top }\left( {M - I}\right) {\dot{\beta }}_{v} \leq {\begin{Vmatrix}\left( M - I\right) {t}_{v}{}^{-1}\end{Vmatrix}}_{F}^{2}{\begin{Vmatrix}{z}_{2}\end{Vmatrix}}^{2} + \frac{1}{4}{\begin{Vmatrix}{q}_{v}\end{Vmatrix}}^{2} \tag{34}
+$$
+
+$$
- {z}_{2}^{\top }{\tau }_{d} \leq {z}_{2}^{\top }{z}_{2} + \frac{{\tau }_{d}^{\top }{\tau }_{d}}{4} \tag{35}
+$$
+
+Note that $I$ is the identity matrix. Then (33) becomes
+
+$$
+{z}_{2}^{\top }M{\dot{z}}_{2} \leq {\begin{Vmatrix}\left( M - I\right) {t}_{v}{}^{-1}\end{Vmatrix}}_{F}^{2}{\begin{Vmatrix}{z}_{2}\end{Vmatrix}}^{2} + 2{z}_{2}{}^{\top }{z}_{2} - {k}_{2}{z}_{2}{}^{\top }{z}_{2}
+$$
+
+$$
- \frac{{S}_{2}^{\top }{S}_{2}{\widetilde{\theta }}_{2}}{2{a}_{2}{}^{2}}{z}_{2}{}^{\top }{z}_{2} + \frac{1}{4}{\begin{Vmatrix}{q}_{v}\end{Vmatrix}}^{2} + {\Phi }_{2}J\left( \psi \right) {z}_{2}{}^{\top }{\xi }_{1}
+$$
+
+$$
++ \frac{{\tau }_{d}{}^{\top }{\tau }_{d}}{4} + \frac{1}{2}{a}_{2}{}^{2} + \frac{1}{2}{\varepsilon }_{v}{}^{2} \tag{36}
+$$
+
Incorporating the adaptive law (28) and ${\widetilde{\theta }}_{1} = {\widehat{\theta }}_{1} - {\theta }_{1}$ , (37) and (38) are obtained.
+
+$$
+\frac{1}{{\gamma }_{1}}{\widetilde{\theta }}_{1}^{\top }{\dot{\widehat{\theta }}}_{1} \leq \frac{1}{{\gamma }_{1}}{\widetilde{\theta }}_{1}^{\top }\left( {\frac{{\gamma }_{1}{S}_{1}^{\top }{S}_{1}{\xi }_{1}^{\top }{\xi }_{1}}{2{a}_{1}{}^{2}} - {\varsigma }_{1}{\widehat{\theta }}_{1}}\right)
+$$
+
+$$
+\leq \frac{{S}_{1}^{\top }{S}_{1}{\widetilde{\theta }}_{1}^{\top }{\xi }_{1}^{\top }{\xi }_{1}}{2{a}_{1}{}^{2}} - \frac{{\varsigma }_{1}{\widetilde{\theta }}_{1}^{\top }{\widehat{\theta }}_{1}}{{\gamma }_{1}} \tag{37}
+$$
+
+$$
{\widetilde{\theta }}_{1}^{\top }{\widehat{\theta }}_{1} = {\widetilde{\theta }}_{1}^{\top }\left( {{\widetilde{\theta }}_{1} + {\theta }_{1}}\right)
+$$
+
+$$
\leq {\widetilde{\theta }}_{1}^{\top }{\widetilde{\theta }}_{1} + {\widetilde{\theta }}_{1}^{\top }{\theta }_{1}
+$$
+
+$$
+\leq 2{\widetilde{\theta }}_{1}^{\top }{\widetilde{\theta }}_{1} + \frac{1}{4}{\theta }_{1}^{2} \tag{38}
+$$
+
+Substituting (38) into (37), one gets
+
+$$
+\frac{1}{{\gamma }_{1}}{\widetilde{\theta }}_{1}^{\top }{\dot{\widehat{\theta }}}_{1} \leq \frac{{S}_{1}^{\top }{S}_{1}{\widetilde{\theta }}_{1}^{\top }{\xi }_{1}^{\top }{\xi }_{1}}{2{a}_{1}{}^{2}} - \frac{2{\varsigma }_{1}}{{\gamma }_{1}}{\widetilde{\theta }}_{1}^{\top }{\widetilde{\theta }}_{1} - \frac{{\varsigma }_{1}}{4{\gamma }_{1}}{\theta }_{1}{}^{2} \tag{39}
+$$
+
Following the same steps, one obtains:
+
+$$
+\frac{1}{{\gamma }_{2}}{\widetilde{\theta }}_{2}^{\top }{\dot{\widehat{\theta }}}_{2} \leq \frac{{S}_{2}{}^{\top }{S}_{2}{\widetilde{\theta }}_{2}^{\top }{z}_{2}{}^{\top }{z}_{2}}{2{a}_{2}{}^{2}} - \frac{2{\varsigma }_{2}}{{\gamma }_{2}}{\widetilde{\theta }}_{2}^{\top }{\widetilde{\theta }}_{2} - \frac{{\varsigma }_{2}}{4{\gamma }_{2}}{\theta }_{2}{}^{2} \tag{40}
+$$
+
Using (21) and Young's inequality, ${q}_{v}^{\top }{\dot{q}}_{v}$ satisfies
+
+$$
{q}_{v}^{\top }{\dot{q}}_{v} \leq - \mathop{\sum }\limits_{{i = u,v,r}}\left( {\frac{{q}_{i}{}^{2}}{{t}_{i}} - \frac{{B}_{i}^{2}{q}_{i}{}^{2}}{2b} - \frac{b}{2}}\right)
+$$
+
+$$
+\leq - \mathop{\sum }\limits_{{i = u,v,r}}\left\lbrack {\left( {\frac{1}{{t}_{i}} - \frac{{\bar{B}}_{i}^{2}}{2b}}\right) {q}_{i}{}^{2} + \left( {1 - \frac{{B}_{i}^{2}}{{\bar{B}}_{i}^{2}}}\right) \frac{{\bar{B}}_{i}^{2}{q}_{i}{}^{2}}{2b} - \frac{b}{2}}\right\rbrack
+$$
+
+$$
+\leq - \mathop{\sum }\limits_{{i = u,v,r}}\left\lbrack {\left( {\frac{1}{{t}_{i}} - \frac{{\bar{B}}_{i}^{2}}{2b}}\right) {q}_{i}{}^{2}}\right\rbrack + \frac{3b}{2} \tag{41}
+$$
+
Substituting (32), (36), (39), (40) and (41) into (31), the time derivative of $V$ satisfies
+
+$$
+\dot{V} \leq - \left( {{k}_{1} - I}\right) {\xi }_{1}^{\top }{\xi }_{1} - \left( {{k}_{2} - {2I}}\right) {z}_{2}^{\top }{z}_{2} + {\begin{Vmatrix}\left( M - I\right) {t}_{v}{}^{-1}\end{Vmatrix}}_{F}^{2}{\begin{Vmatrix}{z}_{2}\end{Vmatrix}}^{2}
+$$
+
+$$
++ \frac{5}{4}{\begin{Vmatrix}{q}_{v}\end{Vmatrix}}^{2} - \mathop{\sum }\limits_{{i = u,v,r}}\left\lbrack {\left( {\frac{1}{{t}_{i}} - \frac{{\bar{B}}_{i}^{2}}{2b}}\right) {q}_{i}{}^{2}}\right\rbrack - \frac{2{\varsigma }_{1}}{{\gamma }_{1}}{\widetilde{\theta }}_{1}^{\top }{\widetilde{\theta }}_{1}
+$$
+
+$$
- \frac{2{\varsigma }_{2}}{{\gamma }_{2}}{\widetilde{\theta }}_{2}^{\top }{\widetilde{\theta }}_{2} - {\xi }_{1}^{\top }{\Phi }_{2}J\left( \psi \right) {z}_{2} + {\Phi }_{2}J\left( \psi \right) {z}_{2}{}^{\top }{\xi }_{1} - \frac{{S}_{1}^{\top }{S}_{1}{\widetilde{\theta }}_{1}}{2{a}_{1}{}^{2}}{\xi }_{1}{}^{\top }{\xi }_{1}
+$$
+
+$$
+ \frac{{S}_{1}^{\top }{S}_{1}{\widetilde{\theta }}_{1}^{\top }{\xi }_{1}^{\top }{\xi }_{1}}{2{a}_{1}{}^{2}} - \frac{{S}_{2}{}^{\top }{S}_{2}{\widetilde{\theta }}_{2}}{2{a}_{2}{}^{2}}{z}_{2}{}^{\top }{z}_{2} + \frac{{S}_{2}{}^{\top }{S}_{2}{\widetilde{\theta }}_{2}^{\top }{z}_{2}{}^{\top }{z}_{2}}{2{a}_{2}{}^{2}}
+$$
+
+$$
+ \frac{1}{4}{\varepsilon }_{\eta }{}^{2} + \frac{1}{2}{a}_{1}{}^{2} + \frac{{\tau }_{d}{}^{\top }{\tau }_{d}}{4} + \frac{1}{2}{a}_{2}{}^{2} + \frac{1}{2}{\varepsilon }_{v}{}^{2} - \frac{{\varsigma }_{1}}{4{\gamma }_{1}}{\theta }_{1}{}^{2}
+$$
+
+$$
+- \frac{{\varsigma }_{2}}{4{\gamma }_{2}}{\theta }_{2}{}^{2} + \frac{3b}{2}
+$$
+
+$$
+\leq - {2aV} + \varrho \tag{42}
+$$
+
where $a = \min \left\{ {{\lambda }_{\min }\left( {{k}_{1} - I}\right) ,{\lambda }_{\min }\left( {{k}_{2} - {2I}}\right) - {\begin{Vmatrix}\left( M - I\right) {t}_{v}^{-1}\end{Vmatrix}}_{F}^{2},\mathop{\min }\limits_{{i = u,v,r}}\left( {\frac{1}{{t}_{i}} - \frac{{\bar{B}}_{i}^{2}}{2b}}\right) - \frac{5}{4},\frac{2{\varsigma }_{1}}{{\gamma }_{1}},\frac{2{\varsigma }_{2}}{{\gamma }_{2}}}\right\}$ and $\varrho = \frac{1}{4}{\varepsilon }_{\eta }^{2} + \frac{1}{2}{a}_{1}^{2} + \frac{{\tau }_{d}^{\top }{\tau }_{d}}{4} + \frac{1}{2}{a}_{2}^{2} + \frac{1}{2}{\varepsilon }_{v}^{2} - \frac{{\varsigma }_{1}{\theta }_{1}^{2}}{4{\gamma }_{1}} - \frac{{\varsigma }_{2}{\theta }_{2}^{2}}{4{\gamma }_{2}} + \frac{3b}{2}$ .
+
Solving the differential inequality (42) yields:
+
+$$
V\left( t\right) \leq \left( {V\left( 0\right) - \frac{\varrho }{2a}}\right) {e}^{-{2at}} + \frac{\varrho }{2a} \tag{43}
+$$
+
According to the closed-loop gain shaping algorithm [18], by choosing appropriate parameters all error variables of the closed-loop system converge as $t \rightarrow \infty$ to the compact set $\Omega \mathrel{\text{ := }} \left\{ {\left. \left( {{\xi }_{1},{z}_{2},{q}_{v},{\widetilde{\theta }}_{1},{\widetilde{\theta }}_{2}}\right) \right| \;\parallel {\xi }_{1}\parallel \leq {C}_{0}}\right\}$ , where ${C}_{0} > \sqrt{\varrho /a}$ is a positive constant. Thus, the closed-loop control system is SGUUB stable under the proposed control scheme, and all error signals can be made arbitrarily small.
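The ultimate bound can be checked numerically: integrating the comparison equation $\dot{V} = -{2aV} + \varrho$ , the equality case of (42), with illustrative constants reproduces the closed-form right-hand side of (43):

```python
import math

# Numerical check of (43): integrating V_dot = -2aV + rho (the equality case
# of (42)) with illustrative constants matches the closed-form bound.
a, rho, V0 = 0.8, 0.4, 5.0
dt, T = 1e-4, 3.0
V = V0
for _ in range(int(T / dt)):
    V += dt * (-2.0 * a * V + rho)

closed_form = (V0 - rho / (2.0 * a)) * math.exp(-2.0 * a * T) + rho / (2.0 * a)
# Both decay towards the ultimate bound rho / (2a) = 0.25.
print(V, closed_form)
```

Shrinking $\varrho$ or enlarging $a$ tightens the residual set $\varrho /\left( {2a}\right)$ , which is how the design constants trade off against the ultimate bound.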
+
+§ V. SIMULATION
+
In this section, to verify the effectiveness of the proposed prescribed-time algorithm, a simulation example for a supply ship (length: ${76.2}\mathrm{\;m}$ , mass: ${4.591} \times {10}^{6}\mathrm{\;{kg}}$ ) equipped with a DP system is executed and compared with the optimum-seeking guidance (OSG) scheme in [19] and the robust control scheme in [20]. The ship model parameters are presented in TABLE I. In modeling ship DP systems, it is essential to characterize the impacts of environmental disturbances, such as wind, waves, and ocean currents, on the ship's performance. To simplify the ship model and facilitate the design and testing of control algorithms, these disturbances are approximated by the sine-cosine model (44).
+
+$$
\left\{ \begin{array}{l} {\tau }_{du} = 2\left( {1 + {35}\sin \left( {{0.2t} + {15}\cos \left( {0.5t}\right) }\right) }\right) \left( \mathrm{N}\right) \\ {\tau }_{dv} = 2\left( {1 + {30}\cos \left( {{0.4t} + {20}\cos \left( {0.1t}\right) }\right) }\right) \left( \mathrm{N}\right) \\ {\tau }_{dr} = 3\left( {1 + {30}\cos \left( {{0.3t} + {10}\sin \left( {0.5t}\right) }\right) }\right) \left( {\mathrm{N} \cdot \mathrm{m}}\right) \end{array}\right. \tag{44}
+$$
+
+$$
+{k}_{1} = \operatorname{diag}\left\lbrack {{0.2},{0.38},{0.20}}\right\rbrack ,{k}_{2} = \operatorname{diag}\left\lbrack {{44},{12.8},{78.1}}\right\rbrack ;
+$$
+
+$$
+{t}_{v} = {0.05} \times I;{a}_{1} = {a}_{2} = {80};{\gamma }_{1} = {\gamma }_{2} = {0.5};{\varsigma }_{1} = {\varsigma }_{2} = {0.5}\text{ ; }
+$$
+
+$$
+{T}_{j\upsilon } = \left\lbrack {{T}_{ju},{T}_{jv},{T}_{jr}}\right\rbrack = \left\lbrack {{80s},{80s},{90s}}\right\rbrack ;
+$$
+
+$$
+{T}_{fv} = \left\lbrack {{T}_{fu},{T}_{fv},{T}_{fr}}\right\rbrack = \left\lbrack {{80s},{80s},{90s}}\right\rbrack ; \tag{45}
+$$
+
+TABLE I
+
+MODEL PARAMETERS
+
| Indexes | Values | Indexes | Values |
| --- | --- | --- | --- |
| ${X}_{\dot{u}}$ | $-{0.72} \times {10}^{6}$ | ${X}_{u}$ | ${5.0242} \times {10}^{4}$ |
| ${Y}_{\dot{v}}$ | $-{3.6921} \times {10}^{6}$ | ${Y}_{v}$ | ${2.7229} \times {10}^{6}$ |
| ${Y}_{\dot{r}}$ | $-{1.0234} \times {10}^{6}$ | ${Y}_{r}$ | $-{4.3933} \times {10}^{6}$ |
| ${I}_{z} - {N}_{\dot{r}}$ | ${3.7454} \times {10}^{9}$ | ${Y}_{\vert v\vert v}$ | ${1.7860} \times {10}^{4}$ |
| ${X}_{\vert u\vert u}$ | ${1.0179} \times {10}^{3}$ | ${Y}_{\vert v\vert r}$ | $-{3.0068} \times {10}^{5}$ |
| ${N}_{v}$ | $-{4.3821} \times {10}^{6}$ | ${N}_{r}$ | ${4.1894} \times {10}^{6}$ |
| ${N}_{\vert v\vert v}$ | $-{2.4684} \times {10}^{5}$ | ${N}_{\vert v\vert r}$ | ${6.5759} \times {10}^{6}$ |
+
+
+Fig. 1. Trajectory of the ship in ${xy}$ -plane.
+
In this simulation, the desired attitude is set to ${\eta }_{d} = \left\lbrack \begin{array}{lll} 0\mathrm{m}, & 0\mathrm{m}, & 0\mathrm{{deg}} \end{array}\right\rbrack$ . The initial states are set to $\eta \left( 0\right) = \left\lbrack {{12}\mathrm{\;m},{14}\mathrm{\;m},{10}\mathrm{{deg}}}\right\rbrack ,v\left( 0\right) = \left\lbrack {0\mathrm{\;m}/\mathrm{s},{14}\mathrm{\;m}/\mathrm{s},{10}\mathrm{{deg}}/\mathrm{s}}\right\rbrack$ . The controller parameters are set as in (45). Besides, the RBF-NNs for ${F}_{1}$ and ${F}_{2}$ consist of 25 nodes each, with centers spaced in $\left\lbrack {-{2.5}\mathrm{\;m}/\mathrm{s},{2.5}\mathrm{\;m}/\mathrm{s}}\right\rbrack$ for $x,y,u$ and $v$ and in $\left\lbrack {-{0.16}\mathrm{\;m}/\mathrm{s},{0.16}\mathrm{\;m}/\mathrm{s}}\right\rbrack$ for $\psi$ and $r$ , respectively. The parameters of the comparison algorithms follow [19] and [20].
+
Fig. 1 exhibits the simulation results under the proposed algorithm, OSG, and robust control, each keeping the ship at the desired attitude in the ${xy}$ -plane. The proposed algorithm clearly provides better trajectory accuracy than the comparison algorithms. Fig. 2 illustrates that the ship attitude $x,y$ and $\psi$ is stabilized to the desired attitude near the prescribed time ${T}_{jv}$ ; the proposed scheme achieves faster stabilization than the comparison schemes. The surge, sway and yaw velocities are shown in Fig. 3, where the proposed scheme exhibits improved convergence performance. Fig. 4 illustrates the evolution of the three input signals over time. Before the system stabilizes, the proposed scheme exhibits superior convergence of ${\tau }_{r}$ compared with the other schemes; once stabilized, the values of ${\tau }_{u}$ and ${\tau }_{v}$ converge more rapidly towards zero, further outperforming the other schemes in convergence efficiency. Finally, the constructed new error is successfully confined within the boundaries and converges stably to 0 at the settling times ${T}_{fu},{T}_{fv},{T}_{fr}$ , as shown in Fig. 5. Additionally, Fig. 6 and Fig. 7 illustrate the fit between the estimated and true values of the adaptive parameters ${\theta }_{1}$ and ${\theta }_{2}$ , respectively, representing the approximation capability of the RBF-NNs for the system uncertainty terms. Within the permissible margin of error, the RBF-NNs successfully approximate the uncertainty terms in (17) and (23).
+
+
Fig. 2. Ship’s actual position $\left( {x,y}\right)$ and heading $\psi$ .
+
+
+Fig. 3. Ship’s surge velocity $u$ , sway velocity $v$ and yaw rate $r$ .
+
+
+Fig. 4. Ship’s surge force ${\tau }_{u}$ , sway force ${\tau }_{v}$ and yaw force ${\tau }_{r}$ .
+
+
+Fig. 5. The new error ${\xi }_{1}$ for the simulation with the proposed scheme.
+
In summary, the NNs-based prescribed-time control scheme proposed in this paper demonstrates superior performance and robustness compared with the comparison schemes. By introducing FTFBs and FTTFBs and constructing new error functions, the control laws are made more concise. Finally, the proposed scheme is validated through simulations to demonstrate its effectiveness on DP ships.
+
+
+Fig. 6. The estimation performance of ${\theta }_{1}$ .
+
+
+Fig. 7. The estimation performance of ${\theta }_{2}$ .
+
§ VI. CONCLUSION
+
In this paper, a novel NNs-based control scheme is proposed for the ship DP system under model uncertainties and unknown environmental disturbances, making the new dynamic errors converge within preassigned boundaries. The prescribed-time performance of the algorithm is validated by a simulation example and two comparative simulations with satisfactory results. Consequently, the proposed prescribed-time control algorithm can be applied to ships performing DP tasks, enabling the ship's dynamic system to achieve more precise time-based prescribed performance.
+
+Given the presence of multiple dynamic actuators in engineering practices related to marine equipment, future research on the proposed algorithm could focus on the issue of actuator control allocation. In addition, the integration of event-triggered control, fault-tolerant control, and blind zone constraints could further enhance the development of this control algorithm toward more advanced and precise control techniques.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/bmvHIfgK1y/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/bmvHIfgK1y/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..450681637b2dfa001f6a89c93fe126bb70bd2c9b
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/bmvHIfgK1y/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,319 @@
+# Modeling and analysis of UAV charging scheduling in fixed/mobile charging station systems
+
+${1}^{\text{st }}$ Zeyu Guo
+
+School of Mathematics
+
+Southeast University
+
+Nanjing, China
+
+eyuguoll@163.com
+
+${2}^{\text{nd }}$ Sining Zhang
+
China North Vehicle Research Institute
+
+Beijing, China
+
+13426157603@163.com
+
+${3}^{\text{rd }}$ Jiahe Wang
+
+School of Mathematics
+
+Southeast University
+
+Nanjing, China
+
+213220649@seu.edu.cn
+
+${4}^{\text{th }}$ Xinyuan Huang
+
+School of Mathematics
+
+Southeast University
+
+Nanjing, China
+
+213220199@seu.edu.cn
+
+${5}^{\text{th }}$ Ruixu Hu
+
+School of Mathematics
+
+Southeast University
+
+Nanjing, China
+
+220231953@seu.edu.cn
+
+${6}^{\text{th }}$ Wenying Xu
+
+School of Mathematics
+
+Southeast University
+
+Nanjing, China
+
+wyxu@seu.edu.cn
+
Abstract- This paper proposes two novel mathematical models for optimizing the charging schedules of Unmanned Aerial Vehicles (UAVs) within systems featuring either fixed or mobile charging stations. The primary objective is to minimize the total charging time for all UAVs. Initially, the fixed charging station (FCS) system is modeled, followed by a comparison of four different algorithms. Subsequently, the model is extended to consider a mobile charging station (MCS) system, where the station can relocate as necessary. In this scenario, an algorithm is proposed to optimize the charging station's position to enhance charging efficiency. Finally, a numerical example is presented to compare the performance of different algorithms in both fixed and mobile charging station systems. The simulation results demonstrate that the proposed algorithm effectively improves charging efficiency by strategically positioning the charging station.
+
+Index Terms-Unmanned aerial vehicles, charging scheduling, fixed charging station, mobile charging station
+
+## I. INTRODUCTION
+
In recent years, unmanned aerial vehicles (UAVs) have been widely used in military, civil, and commercial fields, showing great application potential. In the military field, UAVs are widely deployed for tasks such as hazardous reconnaissance [1], search and rescue [2], and logistics distribution [3] thanks to their flexible deployment capability. Compared with ordinary vehicles, UAVs can quickly reach areas with complex road conditions, greatly improving the efficiency of rescue and cargo transportation missions [4]. However, owing to the limitations of battery technology and increasingly long mission times, endurance has become a major barrier to UAV development. To overcome this problem, some research has aimed to improve UAV battery technology; reference [5] summarizes recent advancements. Another way to enhance endurance is to plan UAV scheduling, such as path planning and task assignment, a strategy adopted in many studies [6]-[9]. However, these studies have some shortcomings. On one hand, some impose restrictions on the charging scenario, such as requiring mobile charging vehicles to travel to the drones for power supply [8]; in certain situations, UAVs must perform hazardous tasks in remote areas that charging vehicles cannot reach, which requires the UAVs to travel to the charging vehicle's location for energy replenishment. On the other hand, these studies pay insufficient attention to the queuing problem caused by growing charging demand, since their focus is on UAV path and task planning. Therefore, it is essential to study the queuing problem in the scenario where UAVs must travel to charging stations to recharge their batteries.
+
+The charging scheduling problem for commercial electric vehicles shares similarities with that of drones [10]. The former field has thoroughly studied the queuing problem in the charging process, providing methods and insights for the latter. Based on the mobility of charging stations, these studies can be categorized into fixed charging stations (FCS) and mobile charging stations (MCS). For FCS, the studies mainly address the layout of charging stations and the scheduling and management of charging vehicles. Zhu et al. [11] proposed a charging scheduling strategy that determines the charging sequence based on the electric vehicles' charging needs rather than their arrival times at the charging stations. Hamed et al. [12] considered different charging needs (day and night) and various charging scenarios, formulating the problem as a mixed-integer linear programming problem with multiple constraints. To address the shortcomings of FCS, such as range anxiety and lengthy charging times, some studies have begun to utilize MCS [13], [14]. Li et al. [13] proposed a framework for optimizing mobile charging vehicle operations and developed a variant of a Mixed-Integer Linear Programming (MILP) model. Inspired by the above research, we consider both fixed and mobile charging stations and build the FCS and MCS systems for the scenario in which UAVs must travel to charging stations to recharge.
+
+The contributions of this paper are summarized as follows.
+
+- Considering the scenario of UAVs traveling to charging stations and the queuing problem caused by limited charging capacity, we model the fixed charging station system and use four different algorithms to solve the problem.
+
+- Based on the FCS system model, the MCS system model is established by adding the mobility of the charging station, and an algorithm to optimize the location of the charging station is proposed. Simulation results indicate that, compared to the FCS, the MCS system model effectively reduces the total flight distance of drones caused by charging.
+
+The rest of the paper is organized as follows. Section II introduces the system model. In Section III, optimization algorithms are introduced. Numerical experiments and analysis are conducted in Section IV. Finally, Section V concludes this paper.
+
+## II. System Model
+
+In this section, a fixed charging station system is presented to solve the queuing problem encountered in UAV charging. Then, considering the practical needs of mobile charging stations, this paper establishes a mobile charging station system. This system involves the selection of charging station location and the scheduling of UAVs.
+
+## A. Fixed Charging Station System Model
+
+1) FCS System: The fixed charging station system consists of a fixed charging station and several UAVs performing tasks in the vicinity of the FCS. Each UAV is designated by the index $i, i \in \{ 1,2,3,\ldots , N\}$ . The UAVs can communicate with the FCS, sending their current data to it. The FCS can collect and process various data from the UAVs, enabling it to compute charging sequence schemes. Data related to the UAVs and the charging station are presented in Table I. The FCS has sufficient energy but a fixed number of charging ports. When all charging ports are occupied, the remaining UAVs must queue in the charging sequence and wait their turn. The number of charging ports is denoted as $M$ , and the FCS is located at the origin.
+
+The scheduling process of the FCS system is shown in Fig. 1. A complete charging scheduling round proceeds as follows: at regular time intervals $T$ , the FCS collects relevant data from each UAV, confirms the set of UAVs capable of reaching the FCS, and solves for the scheduling scheme. UAVs unable to reach the FCS do not participate in the schedule and terminate their tasks. Meanwhile, the participating UAVs move to the FCS for charging. Upon arrival at the station, each UAV lines up at a charging port in the charging sequence provided by the FCS. After finishing charging, the UAVs autonomously return to their mission sites to continue their missions. When all UAVs involved in the scheduling have returned to their mission sites, the scheduling round ends.
+
+TABLE I: Description of the symbols
+
+| Symbol | Description | Unit |
+| --- | --- | --- |
+| ${C}_{i}$ | Battery capacity of UAV $i$ | Wh |
+| ${I}_{i}$ | Initial charge of UAV $i$ | Wh |
+| ${v}_{i}$ | Flying speed of UAV $i$ | km/h |
+| ${x}_{i}$ | Abscissa of UAV $i$ | km |
+| ${y}_{i}$ | Ordinate of UAV $i$ | km |
+| $P{f}_{i}$ | Flight power of UAV $i$ | W |
+| $P{c}_{i}$ | Charging power of UAV $i$ | W |
+| $N$ | Total number of UAVs | / |
+| $M$ | Number of charging ports at the charging station | / |
+
+
+Fig. 1: Scheduling process of FCS system
+
+During the simulation, it is observed that the time required to solve the scheduling solution is significantly shorter than the arrival time of the UAVs. Therefore, it is reasonable to allow the UAVs to travel to the FCS at the beginning of the scheduling process.
+
+2) Problem Formulation: The optimization objective is the total duration from the start of the charging schedule until all UAVs have returned to their respective task points and resumed task execution. Minimizing it minimizes the time UAVs waste due to charging.
+
+To calculate the charging sequence, the first step is to assess the eligibility of UAVs for scheduling. The time at which UAV $i$ arrives at the charging location is represented as:
+
+$$
+{t}_{i} = \frac{{S}_{i}}{{v}_{i}} \tag{1}
+$$
+
+where ${v}_{i}$ represents the flying speed of UAV $i$ and ${S}_{i}$ is the distance from UAV $i$ to the charging station, ${S}_{i} = \sqrt{{x}_{i}^{2} + {y}_{i}^{2}}$ . The condition that the remaining battery level of UAV $i$ must satisfy to reach the charging location is:
+
+$$
+{I}_{i} - {t}_{i}P{f}_{i} \geq 0, \tag{2}
+$$
+
+where ${I}_{i}$ is the initial battery level of UAV $i$ and $P{f}_{i}$ is the flight power of UAV $i$ . If condition (2) is satisfied, UAV $i$ participates in the current round of scheduling; otherwise, it does not. The parameter $N$ is then updated to the total number of UAVs participating in the scheduling.
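As a concrete illustration, the eligibility test of Eqs. (1)-(2) takes only a few lines of Python; the numeric values below are hypothetical and not taken from the paper's experiments.

```python
import math

def arrival_time(x, y, v):
    """Eq. (1): travel time t_i = S_i / v_i, with S_i the distance to the FCS at the origin."""
    return math.hypot(x, y) / v

def is_eligible(I, Pf, x, y, v):
    """Eq. (2): UAV i joins the round only if its charge covers the flight to the station."""
    return I - arrival_time(x, y, v) * Pf >= 0

# Hypothetical UAV: 3 km from the origin at 30 km/h, 200 Wh charge, 400 W flight power.
print(is_eligible(I=200, Pf=400, x=3.0, y=0.0, v=30.0))  # the 0.1 h flight costs only 40 Wh -> True
```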
+
+At each moment, the number of UAVs currently charging must not exceed the number of charging ports available at the charging point. This constraint can be represented as:
+
+$$
+\mathop{\sum }\limits_{{i = 1}}^{N}{\chi }_{i} \leq M \tag{3}
+$$
+
+where ${\chi }_{i}$ is a binary indicator: ${\chi }_{i} = 1$ while UAV $i$ is charging, and ${\chi }_{i} = 0$ otherwise.
+
+The scheduling solution provided by the model is the charging sequence of the UAVs, a permutation of the numbers from 1 to $N$ . This permutation is denoted as $p$ , $p \in {P}_{N}$ , where ${P}_{N}$ is the set of all permutations of the numbers 1 to $N$ . $p[j]$ is the $j$ -th element of the permutation $p$ , i.e., the index of the UAV placed in the $j$ -th slot; thus $p[j] = k$ means that $j$ is the charging order of the UAV with index $k$ . For consistency, we write $p[{j}_{i}] = i$ , where ${j}_{i}$ is the charging order of the UAV with index $i$ . After the UAVs reach the charging location, they charge sequentially according to this charging order; if no charging port is available, they must wait. $T{w}_{i}(p[{j}_{i}])$ is the waiting time of UAV $i$ under the permutation $p$ ; note that this is merely notation for that waiting time, not a complicated function. Similarly, $T{c}_{i}(p[{j}_{i}])$ is the charging time of UAV $i$ under $p$ , and ${t}_{i}$ is written as ${t}_{i}(p[{j}_{i}])$ for consistency. The total duration of charging and movement for UAV $i$ under the permutation $p$ is:
+
+$$
+{t}_{{sum}_{i}}\left( {p\left\lbrack {j}_{i}\right\rbrack }\right) = 2{t}_{i}\left( {p\left\lbrack {j}_{i}\right\rbrack }\right) + T{w}_{i}\left( {p\left\lbrack {j}_{i}\right\rbrack }\right) + T{c}_{i}\left( {p\left\lbrack {j}_{i}\right\rbrack }\right) , \tag{4}
+$$
+
+where ${t}_{i}\left( {p\left\lbrack {j}_{i}\right\rbrack }\right)$ appears twice because the UAV must travel to the charging location and then return to its task location. Taking the maximum of these totals over all UAVs gives the total duration of the scheduling:
+
+$$
+{TI}\left( p\right) = \mathop{\max }\limits_{{i \in \{ 1,\ldots ,N\} }}{t}_{{sum}_{i}}\left( {p\left\lbrack {j}_{i}\right\rbrack }\right) , \tag{5}
+$$
+
+where ${TI}\left( p\right)$ is the total duration of the scheduling under the permutation $p$ . The UAV charging scheduling problem can be formulated as:
+
+$$
+\min \;{TI}\left( p\right)
+$$
+
+$$
+\text{s.t.}\;{C1} : \;{I}_{i} - {t}_{i}\left( {p\left\lbrack {j}_{i}\right\rbrack }\right) P{f}_{i} \geq 0
+$$
+
+$$
+{C2} : \;\mathop{\sum }\limits_{{i = 1}}^{N}{\chi }_{i} \leq M \tag{6}
+$$
+
+$$
+{C3} : \;p \in {P}_{N},
+$$
+
+where ${C1}$ requires that UAV $i$ have sufficient power to reach the charging location, ${C2}$ requires that the number of UAVs charging simultaneously not exceed the number of available charging ports, and ${C3}$ states that a solution of the problem is a charging-order permutation.
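The objective TI(p) of Eqs. (4)-(5) can be evaluated by simulating the $M$-port queue. The Python sketch below adopts one plausible reading of the queue semantics (ports are seized strictly in the order given by $p$) and assumes each UAV charges back to full capacity, which the paper does not state explicitly; the sample UAVs are hypothetical.

```python
import heapq
import math

def makespan(p, uavs, M):
    """Evaluate TI(p) of Eqs. (4)-(5) for a charging order p (a list of UAV indices).

    Assumptions (not spelled out in the paper): UAVs seize the M ports strictly
    in the order given by p, and the charging time Tc_i refills the battery from
    the arrival charge back to the full capacity C_i.
    Each UAV is a dict with x, y (km), v (km/h), I, C (Wh), Pf, Pc (W)."""
    ports = [0.0] * M                                     # free times of the M ports (hours)
    heapq.heapify(ports)
    ti_max = 0.0
    for i in p:
        u = uavs[i]
        t = math.hypot(u["x"], u["y"]) / u["v"]           # Eq. (1): one-way travel time
        start = max(t, heapq.heappop(ports))              # wait until a port is free
        Tc = (u["C"] - (u["I"] - t * u["Pf"])) / u["Pc"]  # assumed charge-to-full duration
        heapq.heappush(ports, start + Tc)
        Tw = start - t                                    # waiting time Tw_i
        ti_max = max(ti_max, 2 * t + Tw + Tc)             # Eq. (4), maximized as in Eq. (5)
    return ti_max

# Two hypothetical UAVs queuing at a single-port station.
uavs = [
    {"x": 3.0, "y": 0.0, "v": 30.0, "I": 200.0, "C": 300.0, "Pf": 400.0, "Pc": 800.0},
    {"x": 6.0, "y": 0.0, "v": 30.0, "I": 250.0, "C": 300.0, "Pf": 400.0, "Pc": 800.0},
]
print(makespan([0, 1], uavs, M=1))   # the second UAV waits 0.075 h for the port
```

With $M = 2$ the waiting time vanishes and the makespan drops, which is the effect constraint $C2$ captures.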
+
+## B. Mobile Charging Station System Model
+
+1) MCS System: The mobile charging station system consists of a mobile charging station and several UAVs performing tasks in the vicinity of the MCS. The only difference between the FCS and MCS systems is that the charging location is fixed in the former and mobile in the latter. Therefore, the scheduling problem splits into two parts: the selection of the MCS location and the scheduling of UAVs.
+
+
+Fig. 2: Scheduling process of MCS system
+
+The scheduling process of the MCS system is shown in Fig. 2. A complete charging scheduling round of the MCS system proceeds as follows: at regular time intervals $T$ , the MCS gathers relevant data from the UAVs to determine the MCS location. Once the location $L$ is determined, both the MCS and the UAVs capable of reaching $L$ proceed to it simultaneously. The charging sequence scheme is computed immediately after $L$ is selected. The round is considered complete when all UAVs involved in the charging schedule have returned to their respective mission points. At the beginning of each round, the coordinates of the MCS are initialized to the origin.
+
+2) Problem Formulation: For the MCS location, an ideal location $L$ lets the charging station accommodate more UAVs in the schedule while keeping the total distance for the UAVs to reach it short. Assume the selected location is $L = \left( {{x}_{L},{y}_{L}}\right)$ ; the distance from UAV $i$ to $L$ is ${S}_{i} = \sqrt{{\left( {x}_{i} - {x}_{L}\right) }^{2} + {\left( {y}_{i} - {y}_{L}\right) }^{2}}$ . For the FCS system, ${x}_{L} = {y}_{L} = 0$ always. Define the variable ${\beta }_{i}$ to indicate whether UAV $i$ participates in the scheduling:
+
+$$
+{\beta }_{i} = \left\{ \begin{array}{ll} 1 & \text{ if }{I}_{i} - {t}_{i}P{f}_{i} \geq 0 \\ 0 & \text{ otherwise,} \end{array}\right. \tag{7}
+$$
+
+where ${I}_{i} - {t}_{i}P{f}_{i} \geq 0$ indicates that UAV $i$ has enough power to reach the charging location. Then, the location selection problem can be formulated as:
+
+$$
+\mathop{\min }\limits_{{{x}_{L},{y}_{L}}} - \mathop{\sum }\limits_{{i = 1}}^{N}{\beta }_{i} + \alpha \mathop{\sum }\limits_{{i = 1}}^{N}{S}_{i}{\beta }_{i} \tag{8}
+$$
+
+where $\alpha = {0.01}$ ; $\mathop{\sum }\limits_{{i = 1}}^{N}{\beta }_{i}$ and $\mathop{\sum }\limits_{{i = 1}}^{N}{S}_{i}{\beta }_{i}$ represent, respectively, the total number of UAVs participating in the scheduling and the sum of the distances from these UAVs to the charging location. Setting a small value for $\alpha$ ensures that the number of participating UAVs is prioritized before the total distance is optimized.
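A minimal Python evaluation of the location objective (8), with the indicator of Eq. (7) inlined; the sample UAV data are hypothetical.

```python
import math

ALPHA = 0.01  # the weight alpha from Eq. (8)

def location_cost(xL, yL, uavs):
    """Objective of Eq. (8): each reachable UAV (beta_i = 1 per Eq. (7)) contributes -1,
    plus ALPHA times its distance to L, so maximizing participation dominates
    minimizing the total travel distance."""
    cost = 0.0
    for u in uavs:
        S = math.hypot(u["x"] - xL, u["y"] - yL)                 # distance S_i to location L
        beta = 1 if u["I"] - (S / u["v"]) * u["Pf"] >= 0 else 0  # Eq. (7)
        cost += -beta + ALPHA * S * beta
    return cost

# Hypothetical check: moving L toward a cluster of UAVs lowers the cost.
uavs = [{"x": 4.0, "y": 4.0, "v": 30.0, "I": 200.0, "Pf": 400.0},
        {"x": 5.0, "y": 3.0, "v": 30.0, "I": 200.0, "Pf": 400.0}]
print(location_cost(0.0, 0.0, uavs) > location_cost(4.0, 4.0, uavs))  # True
```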
+
+The scheduling after selecting the charging station location is the same as that in the FCS system. Therefore, it will not be elaborated upon further.
+
+Algorithm 1 Selection of MCS location
+
+---
+
+## Initialization:
+
+1: For each UAV $i$ : the abscissa ${x}_{i}$ , the ordinate ${y}_{i}$ , the flying speed ${v}_{i}$ , the battery capacity ${C}_{i}$ , the initial charge ${I}_{i}$ , and the flight power $P{f}_{i}$
+
+## Iteration:
+
+1: From the flying speed, the initial charge, and the flight power of each UAV, calculate its movement radius ${r}_{i}$ , consistent with condition (2):
+
+$$
+{r}_{i} = \frac{{I}_{i}}{P{f}_{i}}{v}_{i}
+$$
+
+2: Take the coordinate $\left( {{x}_{i},{y}_{i}}\right)$ of each UAV as the center of a circle and its movement radius ${r}_{i}$ as the radius, forming a circular reachable area
+
+3: Find the region where the most circles overlap to obtain the set of candidate coordinates
+
+4: By traversal or a heuristic, select the coordinate point that minimizes the objective $\mathop{\min }\limits_{{{x}_{L},{y}_{L}}} - \mathop{\sum }\limits_{{i = 1}}^{N}{\beta }_{i} + \alpha \mathop{\sum }\limits_{{i = 1}}^{N}{S}_{i}{\beta }_{i}$
+
+Output: The location $L$
+
+---
+
+## C. Assumptions
+
+1) Measurements of the initial power of the UAV, the coordinates of the UAV relative to the FCS/MCS, and the power consumed by the UAV are accurate.
+
+2) The communication time between the drone and the FCS/MCS is neglected.
+
+3) The time required to change the charging interface for the UAV is neglected.
+
+4) The time for the UAV to reach the charging location is always greater than the time required by the FCS/MCS to determine the scheduling plan.
+
+5) After selecting the charging location, the MCS always arrives at the charging site before the UAVs.
+
+Assumptions 1)-3) are common, mild assumptions in the UAV charging service problem. Simulation results indicate that the computation time for the MCS/FCS scheduling scheme ranges from a few seconds to several minutes, which is significantly shorter than the time required for a drone to reach the charging station. Since the MCS starts roughly at the center of the swarm, the selected charging location is relatively close to the initial MCS location, and the MCS moves faster than the UAVs, ensuring that it reaches the charging location first. Therefore, Assumptions 4)-5) are reasonable.
+
+## III. OPTIMIZATION ALGORITHMS
+
+## A. Scheduling of UAVs
+
+The optimization problem established above is a variant of the traveling salesman problem, so heuristic algorithms can be used to solve it. We consider the simulated annealing algorithm (SA), the genetic algorithm (GA), the tabu search algorithm (TS), and the particle swarm optimization algorithm (PSO). After comparing the performance of the four heuristics, we adopt the simulated annealing algorithm; details are given in Section IV.
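As a sketch of how SA could be instantiated with the Table II settings; the neighbourhood move (a random swap of two positions) is our assumption, since the paper does not specify it, and the toy cost function is hypothetical.

```python
import math
import random

def simulated_annealing(cost, n, seed=0):
    """Minimal SA over permutations of range(n), mirroring the Table II settings:
    initial temperature 33*n, 26*n moves per temperature level, cooling rate 0.98,
    and acceptance probability exp(-18 * dE / T_init) for worse neighbours (note
    that Table II fixes T_init, not the current temperature, in the exponent).
    `cost` maps a permutation (list of indices) to a scalar such as TI(p)."""
    rng = random.Random(seed)
    T_init = 33.0 * n
    T = T_init
    p = list(range(n))
    cur = cost(p)
    best, best_cost = p[:], cur
    while T > 0.01 * T_init:              # stop once the temperature has decayed
        for _ in range(26 * n):
            q = p[:]                      # neighbour: swap two random positions
            a, b = rng.sample(range(n), 2)
            q[a], q[b] = q[b], q[a]
            dE = cost(q) - cur
            if dE < 0 or rng.random() < math.exp(-18.0 * dE / T_init):
                p, cur = q, cur + dE
                if cur < best_cost:
                    best, best_cost = p[:], cur
        T *= 0.98
    return best, best_cost

# Toy cost: total displacement from the identity permutation (optimum is 0).
perm, value = simulated_annealing(lambda p: sum(abs(v - i) for i, v in enumerate(p)), 6)
```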
+
+TABLE II: The parameter settings for each algorithm
+
+| Heuristics | Parameter settings |
| PSO | The individual learning factor is 0.5 ; the social learning factor is 0.3 ; the inertia factor is 1 ; the number of particles is 30 ; the maximum inertia factor is 1 ; the minimum inertia factor is 0.8 . |
| GA | The number of immunized individuals is 30 ; the crossover probability is 0.95 ; the mutation probability is 0.1 . |
| TS | The number of neighborhood solutions: $N \times \left( {N - 1}\right) /2$ for $N \leq {10},{50}$ otherwise; the number of candidate solutions is 25; the taboo length is ${\left( N \times \left( N - 1\right) /2\right) }^{0.5}$ . |
| SA | The initial temperature is ${33} \times N$ ; the number of cycles in the inner layer is ${26} \times N$ ; the temperature drop rate is 0.98 ; accept the new solution with probability ${e}^{-{18} \times {\Delta E}/{T}_{\text{init }}}$ . |
+
+## B. Selection of MCS location
+
+Based on the objectives of maximizing the number of UAVs involved in scheduling and minimizing the total distance traveled by the UAVs, an algorithm can be designed to solve the problem. First, the circles representing the reachable areas of UAVs are calculated based on UAVs data. Then, the region with the highest overlap of these circles is identified, representing the area accessible to all UAVs. Within this area, the point that minimizes the total distance for UAVs to reach the charging location is determined as location $L$ .
+
+
+Fig. 3: Comparison of relative errors among four algorithms
+
+
+Fig. 4: Comparison of scheduling duration for SA, PSO, TS, and GA with different numbers of UAVs
+
+TABLE III: Comparison of results between FCS and MCS
+
+| System Model | Number of UAVs | Calculation duration | Scheduling duration | Total distance traveled by the UAVs | Location of the charging station |
+| --- | --- | --- | --- | --- | --- |
+| FCS | 10 | 1.604189 s | 1.5147 h | 48.1646 km | [0, 0] |
+| FCS | 15 | 3.705172 s | 3.5737 h | 62.4658 km | [0, 0] |
+| FCS | 20 | 5.776012 s | 2.2079 h | 99.3726 km | [0, 0] |
+| FCS | 25 | 9.441099 s | 3.7836 h | 93.0275 km | [0, 0] |
+| FCS | 30 | 12.593925 s | 3.1608 h | 144.8695 km | [0, 0] |
+| MCS | 10 | 1.815212 s | 1.4937 h | 34.6202 km | [-4, 4] |
+| MCS | 15 | 3.766389 s | 3.4914 h | 33.117 km | [3, 2] |
+| MCS | 20 | 6.124744 s | 1.9614 h | 77.5759 km | [4, 4] |
+| MCS | 25 | 9.470387 s | 3.7803 h | 73.9708 km | [0, 2] |
+| MCS | 30 | 13.224203 s | 3.1273 h | 119.9724 km | [3, 2] |
+
+
+Fig. 5: Comparison of UAV distance traveled and scheduling duration for FCS and MCS
+
+To simplify the problem, only integer points on the coordinate grid (points whose horizontal and vertical coordinates are both integers) are considered. Initially, the value of each integer point is set to 0. Each time a circular area covers a point, the value of that point is incremented by 1. Selecting the integer points with the highest values yields a finite set of candidate locations, denoted $G$ , whose elements are coordinate pairs such as $(1, 2)$ . Subsequently, by exhaustive traversal or a heuristic algorithm, the location $L$ can be determined.
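The grid-counting step can be sketched in Python as follows. The grid extent is an assumption, and the radius ${r}_{i} = ({I}_{i}/P{f}_{i}){v}_{i}$ is the reachable distance implied by condition (2).

```python
import math
from itertools import product

def candidate_set(uavs, grid=range(-10, 11)):
    """Steps 1-3 of Algorithm 1 on integer grid points: count, for every point,
    how many UAV range circles cover it, and return the points with the highest
    coverage as the candidate set G. The grid extent is an illustrative choice."""
    coverage = {}
    for xL, yL in product(grid, grid):
        n = 0
        for u in uavs:
            r = u["I"] / u["Pf"] * u["v"]               # how far UAV i can still fly
            if math.hypot(u["x"] - xL, u["y"] - yL) <= r:
                n += 1
        coverage[(xL, yL)] = n
    top = max(coverage.values())
    return [pt for pt, n in coverage.items() if n == top]
```

Among the returned candidates, the location $L$ is then the one minimizing the objective (8), by traversal of $G$ or a heuristic.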
+
+The pseudo code for the algorithm addressing this problem is provided in Algorithm 1.
+
+## IV. NUMERICAL EXPERIMENTS AND ANALYSIS
+
+For testing the FCS system model, we set the parameter $M = 2$ . Simulation experiments show that, even with 30 UAVs, the scheduling time remains within 4 hours, so we set the scheduling cycle parameter $T = 4$ hours (adjustable according to the number of drones and the number of charging ports at the station). Since the optimal solution can be obtained by exhaustive search when the number of UAVs is 10 or fewer, this part compares the relative error of the four heuristic algorithms against the optimal solution for 3 to 10 UAVs. The relative error is defined as the difference between the solution obtained by the algorithm and the optimal solution, divided by the optimal solution. The algorithm parameters are set as shown in Table II.
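Since $N!$ stays small for $N \leq 10$, the exhaustive baseline and the relative-error computation are straightforward; the toy cost function below is hypothetical, standing in for TI(p).

```python
from itertools import permutations

def exhaustive_optimum(cost, n):
    """Brute-force optimum over all n! charging orders; tractable for N <= 10."""
    return min(cost(list(p)) for p in permutations(range(n)))

def relative_error(cost, n, heuristic_value):
    """(heuristic solution - optimal solution) / optimal solution, as in the text."""
    opt = exhaustive_optimum(cost, n)
    return (heuristic_value - opt) / opt

# Toy cost with known optimum 1 (putting UAV 0 first): a heuristic value of 1.009
# corresponds to a 0.9% relative error, the worst case reported in Fig. 3.
toy = lambda p: 1 + p.index(0)
print(round(relative_error(toy, 4, 1.009), 4))  # -> 0.009
```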
+
+The comparison results are shown in Fig. 3, where the relative errors of all four heuristic algorithms are below ${0.9}\%$ . This indicates the effectiveness of the heuristics on this problem. Notably, the relative error of SA is generally the lowest. To compare the heuristics on larger instances, we test the four algorithms with 10, 20, and 30 UAVs, as shown in Fig. 4. SA consistently outperforms the other three algorithms, which is why SA is selected for this problem.
+
+For testing the MCS system model, we set the parameter $M = 4$ and employ SA as the solver. We compare the results of the MCS and FCS systems on the same UAV data, as shown in Table III and Fig. 5. Compared to the FCS system, the MCS system yields shorter scheduling times and a significant reduction in the total distance traveled by the UAVs, while its solution time remains similar to that of the FCS system. Thus the MCS system provides a better charging scheduling solution at a similar solution speed, demonstrating the effectiveness and superiority of the model.
+
+## V. Conclusion
+
+A scheduling system with a fixed charging station is described. Simulation shows that, among the four candidate algorithms, simulated annealing is the most suitable for this problem: compared to GA, PSO, and TS, SA performs better on large-scale drone instances. Building on the FCS system, a scheduling system that replaces the fixed charging station with a mobile one is proposed. For the MCS system, a two-stage optimization model is established, comprising the selection of the MCS location and the scheduling of drone charging. Comparing the results of the two models validates that the MCS system model provides a more optimal scheduling solution for practical needs. Future work will study large scheduling systems with relay charging platforms and design efficient algorithms for determining charging locations.
+
+## ACKNOWLEDGMENT
+
+This work was supported in part by the National Natural Science Foundation of China under Grant No. 62173087. Corresponding author: Zeyu Guo.
+
+## REFERENCES
+
+[1] R. Masroor, M. Naeem, and W. Ejaz, "Efficient deployment of UAVs for disaster management: A multi-criterion optimization approach," Comput. Commun., vol. 177, pp. 185-194, September 2021.
+
+[2] B. Lee, S. Kwon, P. Park, et al., "Active power management system for an unmanned aerial vehicle powered by solar cells, a fuel cell, and batteries," IEEE Trans. Aerosp. Electron. Syst., vol. 50, no. 4, pp. 3167- 3177, October 2014.
+
+[3] G. Wang and X. Bai, "Comparation of UAV path planning for logistics distribution," in Proc. Int. Conf. Intelligent Transportation Engineering, Beijing, China, October 2021, pp. 223-238.
+
+[4] Y. Chen, M. Chen, Z. Chen, L. Cheng, Y. Yang, and H. Li, "Delivery path planning of heterogeneous robot system under road network constraints," Comput. Electr. Eng., vol. 92, p. 107197, June 2021.
+
+[5] N. A. Khofiyah, W. Sutopo, and B. D. A. Nugroho, "Technical feasibility of lithium battery to support unmanned aerial vehicle (UAV): A technical review," in Proc. Int. Conf. Industrial Engineering and Operations Management, Bangkok, Thailand, March 2019, pp. 3591-3601.
+
+[6] K. Yu, A. K. Budhiraja, and P. Tokekar, "Algorithms for routing of unmanned aerial vehicles with mobile recharging stations," in Proc. 2018 IEEE Int. Conf. Robotics and Automation (ICRA), Brisbane, QLD, Australia, May 2018, pp. 5720-5725.
+
+[7] B. Li, S. Patankar, B. Moridian, et al., "Planning large-scale search and rescue using team of UAVs and charging stations," in Proc. 2018 IEEE Int. Symp. Safety, Security, and Rescue Robotics (SSRR), Philadelphia, PA, USA, August 2018, pp. 1-8.
+
+[8] W. Qin, T. Zhang, Z. Shi, H. Huang, H. He, and W. Li, "Scheduling and routing of mobile charging vehicles for unmanned aerial vehicles charging," in Proc. 2021 IEEE 5th Conf. Energy Internet Energy System Integr. (EI2), Chengdu, China, October 2021, pp. 715-720.
+
+[9] Y. Wang and Z. Su, "An envy-free online UAV charging scheme with vehicle-mounted mobile wireless chargers," China Commun., vol. 20, no. 8, pp. 89-102, August 2023.
+
+[10] T. Erdelić and T. Carić, "A survey on the electric vehicle routing problem: variants and solution approaches," J. Adv. Transp., vol. 2019, Art. no. 5075671, May 2019.
+
+[11] M. Zhu, X. Y. Liu, L. Kong, R. Shen, W. Shu, and M. Y. Wu, "The charging-scheduling problem for electric vehicle networks," in Proc. 2014 IEEE Wireless Communications and Networking Conf. (WCNC), Istanbul, Turkey, April 2014, pp. 3178-3183.
+
+[12] M. M. Hamed, D. M. Kabtawi, A. Al-Assaf, O. Albatayneh, and E. S. Gharaibeh, "Random parameters modeling of charging-power demand for the optimal location of electric vehicle charge facilities," J. Clean. Prod., p. 136022, February 2023.
+
+[13] H. Li, D. Son, and B. Jeong, "Electric vehicle charging scheduling with mobile charging stations," J. Clean. Prod., vol. 434, p. 140162, January 2024.
+
+[14] S. Afshar, P. Macedo, F. Mohamed, and V. Disfani, "Mobile charging stations for electric vehicles-A review," Renewable and Sustainable Energy Reviews, vol. 152, Art. no. 111654, December 2021.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/bmvHIfgK1y/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/bmvHIfgK1y/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..b8209fe69aafb3707f3c4637db2d1166847bd31d
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/bmvHIfgK1y/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,366 @@
+§ MODELING AND ANALYSIS OF UAV CHARGING SCHEDULING IN FIXED/MOBILE CHARGING STATION SYSTEMS
+
+${1}^{\text{ st }}$ Zeyu Guo
+
+School of Mathematics
+
+Southeast University
+
+Nanjing, China
+
+eyuguoll@163.com
+
+${2}^{\text{ nd }}$ Sining Zhang
+
+China North Vehicle Research Institute
+
+Beijing, China
+
+13426157603@163.com
+
+${3}^{\text{ rd }}$ Jiahe Wang
+
+School of Mathematics
+
+Southeast University
+
+Nanjing, China
+
+213220649@seu.edu.cn
+
+${4}^{\text{ th }}$ Xinyuan Huang
+
+School of Mathematics
+
+Southeast University
+
+Nanjing, China
+
+213220199@seu.edu.cn
+
+${5}^{\text{ th }}$ Ruixu Hu
+
+School of Mathematics
+
+Southeast University
+
+Nanjing, China
+
+220231953@seu.edu.cn
+
+${6}^{\text{ th }}$ Wenying Xu
+
+School of Mathematics
+
+Southeast University
+
+Nanjing, China
+
+wyxu@seu.edu.cn
+
+Abstract—This paper proposes two novel mathematical models for optimizing the charging schedules of Unmanned Aerial Vehicles (UAVs) within systems featuring either fixed or mobile charging stations. The primary objective is to minimize the total charging time for all UAVs. Initially, the fixed charging station (FCS) system is modeled, followed by a comparison of four different algorithms. Subsequently, the model is extended to consider a mobile charging station (MCS) system, where the station can relocate as necessary. In this scenario, an algorithm is proposed to optimize the charging station's position to enhance charging efficiency. Finally, a numerical example is presented to compare the performance of different algorithms in both fixed and mobile charging station systems. The simulation results demonstrate that the proposed algorithm effectively improves charging efficiency by strategically positioning the charging station.
+
+Index Terms—Unmanned aerial vehicles, charging scheduling, fixed charging station, mobile charging station
+
+§ I. INTRODUCTION
+
+In recent years, unmanned aerial vehicles have been widely used in military, civil and commercial fields, showing great potential for application. In the military field, UAVs are widely used for tasks such as dangerous reconnaissance [1], search and rescue [2], and logistics distribution [3] due to their flexible deployment capability. Compared with ordinary vehicles, UAVs can quickly reach areas with complex road conditions, greatly improving the efficiency of rescue and cargo transportation missions [4]. Due to the limitations of battery technology and longer mission times, the endurance issue of UAVs has become a major barrier to their development. To overcome this problem, there has been some research to improve the battery technology of UAVs. Reference [5] summarizes recent advancements in battery technology. Another way to enhance endurance is to plan the scheduling of UAVs such as path planning and task assignment, which is a strategy used in many studies [6]-[9]. However, these studies have some shortcomings. On one hand, some research imposes restrictions on the charging scenarios, such as requiring mobile charging vehicles to travel to the drones for power supply [8]. In certain situations, UAVs may need to perform hazardous tasks in remote areas where charging vehicles cannot reach. This requires UAVs to travel to the charging vehicle location for energy replenishment. On the other hand, these studies do not pay enough attention to the queuing problem caused by the increased demand for charging, as their focus is on the path and task planning of drones. Therefore, it is essential to study the queuing problem in the scenario where UAVs need to travel to charging stations to recharge their batteries.
+
+The charging scheduling problem for commercial electric vehicles shares similarities with that of drones [10]. The former has conducted thorough research on the queuing problem in the charging process, providing methods and insights for the latter. Based on the mobility of charging stations, these studies can be categorized into fixed charging stations (FCS) and mobile charging stations (MCS). For FCS, the studies mainly address the layout of charging stations and the scheduling and management of charging vehicles. Zhu et al. [11] proposed a charging scheduling strategy that determines the charging sequence based on the electric vehicles' charging needs rather than their arrival times at the charging stations. Hamed et al. [12] considered different charging needs (day and night) and various charging scenarios, formulating the problem as a mixed-integer linear problem with multiple constraints. To address the shortcomings of FCS, such as range anxiety and lengthy charging times, some studies have begun to utilize MCS [13], [14]. Li et al. [13] proposed a framework for optimizing mobile charging vehicle operations and developed a variant of a Mixed-Integer Linear Programming (MILP) model. Inspired by the above research, we consider both fixed and mobile scenarios of charging stations, and build the FCS and MCS systems based on the scenario in which UAVs need to travel to charging stations to charge.
+
+The contributions of this paper are summarized as follows.
+
+ * Considering the scenario of UAVs traveling to charging stations and the queuing problem caused by limited charging capacity, we model the fixed charging station system and use four different algorithms to solve the problem.
+
+ * Based on the FCS system model, the MCS system model is established by adding the mobility of the charging station, and an algorithm to optimize the location of the charging station is proposed. Simulation results indicate that, compared to the FCS, the MCS system model effectively reduces the total flight distance of drones caused by charging.
+
+The rest of the paper is organized as follows. Section II introduces the system model. In Section III, optimization algorithms are introduced. Numerical experiments and analysis are conducted in Section IV. Finally, Section V concludes this paper.
+
+§ II. SYSTEM MODEL
+
+In this section, a fixed charging station system is presented to solve the queuing problem encountered in UAV charging. Then, considering the practical needs of mobile charging stations, this paper establishes a mobile charging station system. This system involves the selection of charging station location and the scheduling of UAVs.
+
+§ A. FIXED CHARGING STATION SYSTEM MODEL
+
+1) FCS System: The fixed charging station system consists of a fixed charging station and several UAVs operating tasks in the vicinity of the FCS. Each UAV is designated by the index $i,i \in \{ 1,2,3,\ldots ,N\}$ . The UAVs can communicate with the FCS, sending their current data to the FCS. The FCS is equipped with the capability to collect and process various data from UAVs, enabling it to calculate charging sequence schemes. Data related to UAVs and the charging station are presented in Table I. The FCS has sufficient energy but a fixed number of charging ports. When all charging ports are occupied by UAVs, the remaining UAVs need to queue up in the charging sequence and wait their turn. The number of charging ports is denoted as $M$ and the coordinates of the FCS are located at the origin.
+
+The scheduling process of the FCS system is shown in Fig. 1. A complete charging scheduling process involves: at regular time intervals $T$ , the FCS collects relevant data from each UAV, confirms the set of UAVs capable of reaching the FCS, and proceeds with the solution of the scheduling scheme. UAVs unable to reach the FCS do not participate in the schedule and terminate their tasks. At the same time, the participating UAVs move to the FCS for charging. Upon arrival at the station, each UAV lines up at the charging port according to the charging sequence provided by the FCS. After finishing charging, they autonomously return to the mission site to continue their missions. When all the UAVs involved in the scheduling have returned to their mission sites, the scheduling process for this round ends.
+
+TABLE I: Description of the symbols
+
+| Symbol | Description | Unit |
+| --- | --- | --- |
+| ${C}_{i}$ | Battery capacity of UAV $i$ | Wh |
+| ${I}_{i}$ | Initial charge of UAV $i$ | Wh |
+| ${v}_{i}$ | Flying speed of UAV $i$ | km/h |
+| ${x}_{i}$ | Abscissa of UAV $i$ | km |
+| ${y}_{i}$ | Ordinate of UAV $i$ | km |
+| $P{f}_{i}$ | Flight (mobile) power of UAV $i$ | W |
+| $P{c}_{i}$ | Charging power of UAV $i$ | W |
+| $N$ | Total number of UAVs | / |
+| $M$ | Number of charging ports at the charging station | / |
+
+
+Fig. 1: Scheduling process of FCS system
+
+During the simulation, it is observed that the time required to solve the scheduling solution is significantly shorter than the arrival time of the UAVs. Therefore, it is reasonable to allow the UAVs to travel to the FCS at the beginning of the scheduling process.
+
+2) Problem Formulation: We choose the optimization objective as the total duration from the start of the charging schedule until all UAVs return to their respective task points to resume task execution. This aims to minimize the time wasted by UAVs due to charging.
+
+To calculate the charging sequence, the first step is to assess the eligibility of UAVs for scheduling. The time at which UAV $i$ arrives at the charging location is represented as:
+
+$$
+{t}_{i} = \frac{{S}_{i}}{{v}_{i}} \tag{1}
+$$
+
+where ${v}_{i}$ represents the flying speed of UAV $i$, and ${S}_{i}$ is the distance from UAV $i$ to the charging station, ${S}_{i} = \sqrt{{x}_{i}^{2} + {y}_{i}^{2}}$ . The condition that the remaining battery level of UAV $i$ needs to satisfy upon reaching the charging location can be represented as:
+
+$$
+{I}_{i} - {t}_{i}P{f}_{i} \geq 0, \tag{2}
+$$
+
+where ${I}_{i}$ is the initial battery level of UAV $i$ and $P{f}_{i}$ is the flight power of UAV $i$. If condition (2) is satisfied, UAV $i$ participates in the current round of scheduling; otherwise, it does not. The value of parameter $N$ is then updated to the total number of UAVs participating in the scheduling.
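+
+As an illustration, the eligibility check of Eqs. (1)-(2) can be sketched as follows (a minimal Python sketch; the dictionary field names are our own, not from the paper):
+
+```python
+import math
+
+def eligible_uavs(uavs):
+    """Filter the UAVs that can reach the FCS at the origin, per Eqs. (1)-(2).
+
+    Each UAV is a dict with keys: x, y (km), v (km/h), I (Wh), Pf (W).
+    Returns (index, arrival time in hours) for each participating UAV.
+    """
+    participants = []
+    for i, u in enumerate(uavs):
+        S_i = math.hypot(u["x"], u["y"])   # distance to the FCS at the origin
+        t_i = S_i / u["v"]                 # arrival time, Eq. (1)
+        if u["I"] - t_i * u["Pf"] >= 0:    # energy condition, Eq. (2)
+            participants.append((i, t_i))
+    return participants
+```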
+
+At each moment, the number of UAVs currently charging must not exceed the number of charging ports available at the charging point. This constraint can be represented as:
+
+$$
+\mathop{\sum }\limits_{{i = 1}}^{N}{\chi }_{i} \leq M \tag{3}
+$$
+
+where ${\chi }_{i}$ is a binary indicator: when UAV $i$ is charging, ${\chi }_{i}$ equals 1; otherwise, it equals 0.
+
+The scheduling solution provided by the model is the charging sequence of the UAVs, which is a permutation of the numbers from 1 to $N$. This permutation is denoted as $p$, $p \in {P}_{N}$, where ${P}_{N}$ is the set of all permutations of the numbers 1 to $N$. $p\left\lbrack j\right\rbrack$ is the $j$-th element in the permutation $p$, i.e., the index of the UAV positioned at the $j$-th slot; thus, if $p\left\lbrack j\right\rbrack = k$, then $j$ is the charging order of the UAV with index $k$. For consistency, we write $p\left\lbrack {j}_{i}\right\rbrack = i$, where ${j}_{i}$ is the charging order of the UAV with index $i$. After the UAVs reach the charging location, they charge sequentially according to this order; if no charging port is available, they must wait. $T{w}_{i}\left( {p\left\lbrack {j}_{i}\right\rbrack }\right)$ denotes the waiting time of UAV $i$ under the permutation $p$ (this is merely notation for the waiting time in the context of $p$, not a complex function). Similarly, $T{c}_{i}\left( {p\left\lbrack {j}_{i}\right\rbrack }\right)$ denotes the charging time of UAV $i$ under $p$, and ${t}_{i}$ is written as ${t}_{i}\left( {p\left\lbrack {j}_{i}\right\rbrack }\right)$ for consistency. The total duration of charging and movement for UAV $i$ under the permutation $p$ can be represented as:
+
+$$
+{t}_{{sum}_{i}}\left( {p\left\lbrack {j}_{i}\right\rbrack }\right) = 2{t}_{i}\left( {p\left\lbrack {j}_{i}\right\rbrack }\right) + T{w}_{i}\left( {p\left\lbrack {j}_{i}\right\rbrack }\right) + T{c}_{i}\left( {p\left\lbrack {j}_{i}\right\rbrack }\right) , \tag{4}
+$$
+
+where ${t}_{i}\left( {p\left\lbrack {j}_{i}\right\rbrack }\right)$ is counted twice because the UAV travels to the charging location and then returns to the task location. After determining the schedule time of each UAV, taking the maximum gives the total duration of the scheduling, denoted as:
+
+$$
+{TI}\left( p\right) = \mathop{\max }\limits_{{i \in N}}{t}_{{su}{m}_{i}}\left( {p\left\lbrack {j}_{i}\right\rbrack }\right) , \tag{5}
+$$
+
+where ${TI}\left( p\right)$ is the total duration of the scheduling under the permutation $p$ . The UAV charging scheduling problem can be formulated as:
+
+$$
+\begin{aligned}
+\min_{p} \; & {TI}\left( p\right) \\
+\text{s.t.}\;{C1}: \; & {I}_{i} - {t}_{i}\left( {p\left\lbrack {j}_{i}\right\rbrack }\right) P{f}_{i} \geq 0 \\
+{C2}: \; & \mathop{\sum }\limits_{{i = 1}}^{N}{\chi }_{i} \leq M \\
+{C3}: \; & p \in {P}_{N},
+\end{aligned} \tag{6}
+$$
+
+where ${C1}$ requires that UAV $i$ have sufficient power to reach the charging location, ${C2}$ requires that the number of UAVs charging simultaneously not exceed the number of charging ports available at the charging location, and ${C3}$ states that the solution of the problem is a charging order permutation.
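+
+For concreteness, the objective ${TI}(p)$ of Eqs. (4)-(5) can be evaluated with a short simulation of the $M$ charging ports. This is a sketch under our own simplifying assumption that a UAV may start charging as soon as it has arrived and a port is free, serving UAVs in the order given by $p$; the one-way travel times and charging times are taken as precomputed inputs:
+
+```python
+import heapq
+
+def total_duration(p, t, Tc, M):
+    """Evaluate TI(p) for a charging order p (a permutation of UAV indices).
+
+    t[i]  : one-way travel time of UAV i (Eq. (1))
+    Tc[i] : charging time of UAV i
+    M     : number of charging ports, modeled as a min-heap of port-free times
+    """
+    ports = [0.0] * M                     # all ports free at time 0
+    heapq.heapify(ports)
+    TI = 0.0
+    for i in p:                           # serve UAVs in the order given by p
+        free = heapq.heappop(ports)
+        start = max(free, t[i])           # wait for arrival and for a free port
+        finish = start + Tc[i]
+        heapq.heappush(ports, finish)
+        TI = max(TI, finish + t[i])       # add the return trip, Eq. (4)
+    return TI
+```
+
+For example, with two ports and three UAVs that each travel one hour each way and charge for two hours, only the third UAV waits for a port, giving a total duration of six hours.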
+
+§ B. MOBILE CHARGING STATION SYSTEM MODEL
+
+1) MCS System: The mobile charging station system consists of a mobile charging station and several UAVs operating tasks in the vicinity of the MCS. The only difference between the FCS and MCS systems is that the charging location in the former is fixed, while that in the latter is mobile. Therefore, the scheduling problem is divided into two parts: the selection of MCS location and the scheduling of UAVs.
+
+
+Fig. 2: Scheduling process of MCS system
+
+The scheduling process of the MCS system is shown in Fig. 2. A complete charging scheduling process for the MCS system involves: at regular time intervals $T$ , the MCS gathers relevant data from the UAVs to determine the MCS location. Once the location $L$ for the MCS is determined, both the MCS and UAVs capable of reaching location $L$ proceed to $L$ simultaneously. The charging sequence scheme is calculated immediately after selecting location $L$ . The round of charging scheduling process is considered complete when all UAVs involved in the charging schedule have returned to their respective mission points. At the beginning of each scheduling round, the coordinates of the MCS are initialized to the origin.
+
+2) Problem Formulation: For selecting the MCS location, the ideal location $L$ allows the charging station to accommodate more UAVs in the schedule while keeping the total distance for the UAVs to reach it short. We assume that the selected location is $L = \left( {{x}_{L},{y}_{L}}\right)$ ; the distance from UAV $i$ to location $L$ is ${S}_{i} = \sqrt{{\left( {x}_{i} - {x}_{L}\right) }^{2} + {\left( {y}_{i} - {y}_{L}\right) }^{2}}$ . For the FCS system, ${x}_{L}$ and ${y}_{L}$ are always 0. Define the variable ${\beta }_{i}$ to indicate whether UAV $i$ participates in scheduling, as follows:
+
+$$
+{\beta }_{i} = \left\{ \begin{array}{ll} 1 & \text{ if }{I}_{i} - {t}_{i}P{f}_{i} \geq 0 \\ 0 & \text{ otherwise, } \end{array}\right. \tag{7}
+$$
+
+where ${I}_{i} - {t}_{i}P{f}_{i} \geq 0$ indicates that UAV $i$ has enough power to reach the charging location. Then, the location selection problem can be formulated as:
+
+$$
+\mathop{\min }\limits_{{{x}_{L},{y}_{L}}} - \mathop{\sum }\limits_{{i = 1}}^{N}{\beta }_{i} + \alpha \mathop{\sum }\limits_{{i = 1}}^{N}{S}_{i}{\beta }_{i} \tag{8}
+$$
+
+where $\alpha = {0.01}$; $\mathop{\sum }\limits_{{i = 1}}^{N}{\beta }_{i}$ and $\mathop{\sum }\limits_{{i = 1}}^{N}{S}_{i}{\beta }_{i}$ represent the total number of UAVs participating in scheduling and the sum of the distances from these UAVs to the charging location, respectively. Setting a small value for $\alpha$ ensures that the number of participating UAVs is prioritized before the total distance is optimized.
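+
+The objective (8) for a candidate location is straightforward to evaluate; a minimal sketch (the dictionary field names are our own, not from the paper):
+
+```python
+import math
+
+def location_cost(xL, yL, uavs, alpha=0.01):
+    """Objective of Eq. (8) for a candidate charging location (xL, yL).
+
+    uavs: list of dicts with keys x, y (km), v (km/h), I (Wh), Pf (W).
+    beta_i = 1 iff UAV i can reach (xL, yL) on its charge, per Eq. (7).
+    """
+    n_served, total_dist = 0, 0.0
+    for u in uavs:
+        S = math.hypot(u["x"] - xL, u["y"] - yL)
+        t = S / u["v"]
+        if u["I"] - t * u["Pf"] >= 0:     # beta_i = 1
+            n_served += 1
+            total_dist += S
+    return -n_served + alpha * total_dist
+```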
+
+The scheduling after selecting the charging station location is the same as that in the FCS system. Therefore, it will not be elaborated upon further.
+
+Algorithm 1 Selection of MCS location
+
+§ INITIALIZATION:
+
+1: The abscissa ${x}_{i}$ and ordinate ${y}_{i}$ of UAV $i$, the movement speed ${v}_{i}$, the battery capacity ${C}_{i}$, the initial charge ${I}_{i}$, and the mobile power $P{f}_{i}$
+
+§ ITERATION:
+
+1: From the movement speed, initial charge, and mobile power of the UAVs, calculate the movement radius ${r}_{i}$ of each UAV, consistent with the reachability condition (2):
+
+$$
+{r}_{i} = \frac{{I}_{i}}{P{f}_{i}}{v}_{i}
+$$
+
+2: Take the coordinates $\left( {{x}_{i},{y}_{i}}\right)$ of each UAV as the center and the movement radius ${r}_{i}$ as the radius to form a circular area
+
+3: Find the region where the most circles overlap to obtain the set of candidate coordinates
+
+4: By traversal or a heuristic, select the candidate point that minimizes the objective $\mathop{\min }\limits_{{{x}_{L},{y}_{L}}} - \mathop{\sum }\limits_{{i = 1}}^{N}{\beta }_{i} + \alpha \mathop{\sum }\limits_{{i = 1}}^{N}{S}_{i}{\beta }_{i}$
+
+Output: The location $L$
+
+§ C. ASSUMPTIONS
+
+1) Measurements of the initial power of the UAV, the coordinates of the UAV relative to the FCS/MCS, and the power consumed by the UAV are accurate.
+
+2) The communication time between the drone and the FCS/MCS is neglected.
+
+3) The time required to change the charging interface for the UAV is neglected.
+
+4) The time for the UAV to reach the charging location is always greater than the time required by the FCS/MCS to determine the scheduling plan.
+
+5) After selecting the charging location, the MCS always arrives at the charging site before the UAVs.
+
+Assumptions 1)-3) are standard, mild assumptions for the UAV charging service problem. Simulation results indicate that the computation time of the MCS/FCS scheduling scheme ranges from a few seconds to several minutes, significantly shorter than the time required for a UAV to reach the charging station. Since the MCS sits roughly at the center of the swarm, the chosen charging location is relatively close to the initial MCS location, and the MCS moves faster than the UAVs, which ensures that it reaches the charging location first. Therefore, Assumptions 4)-5) are reasonable.
+
+§ III. OPTIMIZATION ALGORITHMS
+
+§ A. SCHEDULING OF UAVS
+
+It can be seen that the optimization problem we have established is a variant of the traveling salesman problem, so heuristic algorithms can be used to solve it. We select the simulated annealing algorithm (SA), genetic algorithm (GA), tabu search algorithm (TS), and particle swarm optimization algorithm (PSO). After comparing the performance of the four heuristics, we chose simulated annealing; details can be found in Section IV.
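+
+A minimal sketch of simulated annealing over charging permutations, following the parameter scheme of Table II (initial temperature, inner-loop count, geometric cooling, and the acceptance rule ${e}^{-{18}{\Delta E}/{T}_{\text{init}}}$); the swap neighborhood and the generic `cost` callback are our own illustrative choices:
+
+```python
+import math
+import random
+
+def simulated_annealing(cost, N, T0, inner, cooling=0.98, seed=0):
+    """Minimize cost(p) over permutations p of 0..N-1.
+
+    cost  : maps a permutation (list of ints) to a scalar, e.g. TI(p)
+    T0    : initial temperature (Table II uses 33 * N)
+    inner : moves per temperature level (Table II uses 26 * N)
+    A worse solution is accepted with probability exp(-18 * dE / T0).
+    """
+    rng = random.Random(seed)
+    p = list(range(N))
+    rng.shuffle(p)
+    cur_cost = cost(p)
+    best, best_cost = p[:], cur_cost
+    T = T0
+    while T > 1e-3 * T0:                      # cool until 0.1% of T0
+        for _ in range(inner):
+            q = p[:]
+            i, j = rng.sample(range(N), 2)    # swap two slots in the order
+            q[i], q[j] = q[j], q[i]
+            new_cost = cost(q)
+            dE = new_cost - cur_cost
+            if dE < 0 or rng.random() < math.exp(-18 * dE / T0):
+                p, cur_cost = q, new_cost
+                if cur_cost < best_cost:
+                    best, best_cost = p[:], cur_cost
+        T *= cooling
+    return best, best_cost
+```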
+
+TABLE II: Parameter settings for each algorithm
+
+| Heuristic | Parameter settings |
+| --- | --- |
+| PSO | Individual learning factor 0.5; social learning factor 0.3; inertia factor 1; 30 particles; maximum inertia factor 1; minimum inertia factor 0.8. |
+| GA | Population size 30; crossover probability 0.95; mutation probability 0.1. |
+| TS | Number of neighborhood solutions: $N \times \left( {N - 1}\right) /2$ for $N \leq {10}$, 50 otherwise; 25 candidate solutions; tabu length ${\left( N \times \left( N - 1\right) /2\right) }^{0.5}$. |
+| SA | Initial temperature ${33} \times N$; ${26} \times N$ inner-loop cycles; cooling rate 0.98; a worse solution is accepted with probability ${e}^{-{18} \times {\Delta E}/{T}_{\text{init}}}$. |
+
+§ B. SELECTION OF MCS LOCATION
+
+Based on the objectives of maximizing the number of UAVs involved in scheduling and minimizing the total distance traveled by the UAVs, an algorithm can be designed to solve the problem. First, the circles representing the reachable areas of the UAVs are calculated from the UAV data. Then, the region with the highest overlap of these circles is identified, representing the area reachable by the largest number of UAVs. Within this area, the point that minimizes the total distance for the UAVs to reach the charging location is selected as location $L$ .
+
+
+Fig. 3: Comparison of relative errors among four algorithms
+
+
+Fig. 4: Comparison of scheduling duration for SA, PSO, TS, and GA with different numbers of UAVs
+
+TABLE III: Comparison of results between FCS and MCS
+
+| System model | Number of UAVs | Calculation duration | Scheduling duration | Total distance traveled by UAVs | Charging station location |
+| --- | --- | --- | --- | --- | --- |
+| FCS | 10 | 1.604189 s | 1.5147 h | 48.1646 km | [0, 0] |
+| FCS | 15 | 3.705172 s | 3.5737 h | 62.4658 km | [0, 0] |
+| FCS | 20 | 5.776012 s | 2.2079 h | 99.3726 km | [0, 0] |
+| FCS | 25 | 9.441099 s | 3.7836 h | 93.0275 km | [0, 0] |
+| FCS | 30 | 12.593925 s | 3.1608 h | 144.8695 km | [0, 0] |
+| MCS | 10 | 1.815212 s | 1.4937 h | 34.6202 km | [-4, 4] |
+| MCS | 15 | 3.766389 s | 3.4914 h | 33.117 km | [3, 2] |
+| MCS | 20 | 6.124744 s | 1.9614 h | 77.5759 km | [4, 4] |
+| MCS | 25 | 9.470387 s | 3.7803 h | 73.9708 km | [0, 2] |
+| MCS | 30 | 13.224203 s | 3.1273 h | 119.9724 km | [3, 2] |
+
+
+Fig. 5: Comparison of UAV distance traveled and scheduling duration for FCS and MCS
+
+To simplify the problem, only integer points on a coordinate grid (points whose horizontal and vertical coordinates are both integers) are considered. Initially, set the value of each integer point to 0. Each time a circular area covers a point, increment the value of that point by 1. Selecting the integer points with the highest values yields a finite set of candidate locations. We denote this set as $G$ , whose elements are point coordinates such as (1, 2). Subsequently, by traversal or a heuristic algorithm, the location $L$ can be determined.
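+
+The grid-based candidate selection above can be sketched as follows (a toy implementation; the grid extent is an assumption, and the radius follows the reachability condition of Eq. (2), ${r}_{i} = {I}_{i}{v}_{i}/P{f}_{i}$):
+
+```python
+import math
+from itertools import product
+
+def select_location(uavs, alpha=0.01, grid=range(-10, 11)):
+    """Pick the MCS location L: count disk coverage on integer grid points,
+    keep the most-covered points as candidate set G, then minimize Eq. (8)."""
+    def radius(u):
+        return u["I"] * u["v"] / u["Pf"]   # max reachable distance, from Eq. (2)
+
+    counts = {}
+    for px, py in product(grid, grid):     # coverage count of each grid point
+        counts[(px, py)] = sum(
+            math.hypot(u["x"] - px, u["y"] - py) <= radius(u) for u in uavs
+        )
+    top = max(counts.values())
+    G = [pt for pt, c in counts.items() if c == top]     # candidate set G
+
+    def cost(pt):                                        # objective of Eq. (8)
+        n, dist = 0, 0.0
+        for u in uavs:
+            S = math.hypot(u["x"] - pt[0], u["y"] - pt[1])
+            if S <= radius(u):                           # beta_i = 1
+                n += 1
+                dist += S
+        return -n + alpha * dist
+
+    return min(G, key=cost)
+```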
+
+The pseudo code for the algorithm addressing this problem is provided in Algorithm 1.
+
+§ IV. NUMERICAL EXPERIMENTS AND ANALYSIS
+
+For testing the FCS system model, we set the parameter $M = 2$ . Based on simulation experiments, it is found that, even with 30 UAVs, the scheduling time remains within 4 hours. Therefore, we set the scheduling cycle parameter $T = 4$ (which can be adjusted according to the number of UAVs and the number of charging ports at the charging station). Since the optimal solution can be obtained through exhaustive search when the number of UAVs is 10 or fewer, this part compares the relative error of the four heuristic algorithms against the optimal solution for 3 to 10 UAVs. The relative error is defined as the difference between the solution obtained by the algorithm and the optimal solution, divided by the optimal solution. The algorithm parameters are set as shown in Table II.
+
+The comparison results of the algorithms are shown in Fig. 3, where the relative errors of all four heuristic algorithms are less than ${0.9}\%$ . This indicates the effectiveness of the heuristic algorithms in solving this problem. It is noteworthy that the relative error of SA is generally the lowest. To compare the performance of the heuristics with larger numbers of UAVs, we test the four algorithms with 10, 20, and 30 UAVs, as shown in Fig. 4. Fig. 4 shows that SA consistently outperforms the other three algorithms, which is why SA is selected for this problem.
+
+For testing the MCS system model, we set the parameter $M = 4$ and employ SA for solution. We compare the results of the MCS and FCS systems under the same drone data, as shown in Table III and Fig. 5. It can be seen that, compared to the FCS system, the MCS system results in shorter scheduling times and a significant reduction in the total distance traveled by UAVs. The time required to solve the MCS system is similar to that of the FCS system. This indicates that the MCS system provides a better charging scheduling solution while maintaining similar solution speed, demonstrating the effectiveness and superiority of this model.
+
+§ V. CONCLUSION
+
+A scheduling system with a fixed charging station is described. Through simulation, it is concluded that the simulated annealing algorithm is the most suitable of the four algorithms for this problem; compared to GA, PSO, and TS, SA demonstrates superior performance on large-scale UAV instances. Based on the FCS system, a scheduling system is proposed that replaces the fixed charging station with a mobile charging station. For the MCS system, a two-stage optimization model is established, comprising the selection of the MCS location and the scheduling of UAV charging. Comparing the results of the two models validates that the MCS system model provides a more optimal scheduling solution that meets practical needs. Future work will study large scheduling systems with relay charging platforms and design efficient algorithms for determining charging locations.
+
+§ ACKNOWLEDGMENT
+
+This work was supported in part by the National Natural Science Foundation of China under Grant No. 62173087. Corresponding author: Zeyu Guo.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/d9xHa3zSc0/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/d9xHa3zSc0/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..1bd96a4ae85cdeb44cc79d919e63421063ec77ce
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/d9xHa3zSc0/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,359 @@
+# Improved Catch Fish Optimization Algorithm with Personalized Fishing Strategy for Global Optimization
+
+${1}^{\text{st}}$ Bowen Xue, School of Electrical and Information Engineering, Northeast Petroleum University, Daqing, China (xuebowen@stu.nepu.edu.cn)
+
+${2}^{\text{nd}}$ Heming Jia*, School of Information Engineering, Sanming University, Sanming, China (jiaheming@fjsmu.edu.cn)
+
+${3}^{\text{rd}}$ Honghua Rao, School of Electrical and Information Engineering, Northeast Petroleum University, Daqing, China (20200862235@fjsmu.edu.cn)
+
+${4}^{\text{th}}$ Jinrui Zhang, School of Information Engineering, Sanming University, Sanming, China (ruiruiz2308@163.com)
+
+${5}^{\text{th}}$ Yilong Du, School of Information and Electrical Engineering, Heilongjiang Bayi Agricultural University, Daqing, China (wy15093488812@163.com)
+
+${6}^{\text{th}}$ Zekai Ai, College of Design and Engineering, National University of Singapore, Singapore (aizekai@u.nus.edu)
+
+Abstract-Catch Fish Optimization Algorithm (CFOA) is a new meta-heuristic optimization algorithm based on human behavior. In this algorithm, search agents simulate the process of rural fishermen fishing in a pond, so the CFOA update generally consists of two phases: the exploration phase and the exploitation phase. However, it still tends to fall into local optima and has a low convergence rate. To this end, we propose an improved catch fish optimization algorithm (ICFOA) based on a personalized fishing strategy. First, adaptive Gaussian perturbation is adopted in the exploration stage to increase the global search capability, expand the search range, and improve efficiency while avoiding local optima. Then, based on the personalized fishing strategy, the personal position of each fisherman is updated by randomly selecting "freehand fishing" factors or "using fishing net" factors to accelerate the algorithm's convergence. Furthermore, comparative experiments were performed on the CEC2020 test suite to compare the performance of ICFOA with other excellent meta-heuristics, and Wilcoxon's rank-sum test was used to verify the validity of the statistical results. Moreover, the performance of ICFOA on the reducer design problem indicates that ICFOA can obtain the optimal solution for practical engineering optimization problems. The results show that ICFOA is more competitive than the original CFOA.
+
+Keywords-Catch Fish Optimization Algorithm, adaptive Gaussian perturbation, Personalized Fishing Strategy
+
+## I. INTRODUCTION
+
+In the current era of rapid technological advancement, optimization problems hold a critical position across various domains, including engineering design, economic management, and computer science. Meta-heuristic algorithms are widely used to address such problems; examples include the Crayfish Optimization Algorithm (COA) [1], the Whale Optimization Algorithm (WOA) [2], and Grey Wolf Optimization (GWO) [3]. COA excels in exploration but may converge slowly, WOA balances exploration and exploitation well but can get trapped in local optima, and GWO is strong in convergence but requires careful parameter tuning. Human behavior-based optimization algorithms are a class of optimization techniques designed to tackle complex optimization problems by emulating human or other biological behaviors and decision-making processes. By mimicking natural phenomena such as evolution, foraging, and social interactions, these algorithms can effectively search and optimize complex solution spaces.
+
+In solving problems related to economic scheduling, functional optimization, and engineering design, human behavior-based optimization algorithms are especially adept at avoiding local optima and discovering global optima or solutions close to the global optimum. For instance, the Human Behavior-Based Optimization (HBBO) [4] algorithm models human behavior patterns, particularly focusing on how humans learn and solve problems through interaction and communication. This algorithm integrates multiple human behavioral traits, such as experiential learning, imitation, social interaction, and collaboration, to achieve efficient search and optimization in complex problems.
+
+In 2024, Heming Jia et al. [5] proposed an innovative optimization algorithm inspired by human behavior, namely the catch fish optimization algorithm (CFOA). The main inspiration for the CFOA comes from the fishing practices of fishermen, and its update rules are based on different fishing practices. As intelligent humans, fishermen often use a variety of ways to find fish, such as sharing fishing experiences and using different fishing tools, so their location update rules are based on both individuals and teams. Furthermore, as capture rates decline, fishermen choose whether to change their fishing strategy. Experimental results show that the proposed algorithm outperforms others in finding the optimal solution and in convergence speed. However, as stated by the NFL theorem [6], given the diversity and complexity of optimization problems, no universal algorithm can be directly applied to address all types of optimization challenges. This reality requires the exploration and adoption of more rigorous and targeted strategies to continuously improve and optimize algorithm design.
+
+As with many optimization algorithms, the original CFOA cannot completely avoid low convergence efficiency and easily falls into local optimal solutions on specific optimization tasks. Given this, optimizing and upgrading the CFOA can not only improve its efficiency but also broaden its scope of application. Therefore, this paper presents an improved CFOA (ICFOA) based on a personalized fishing strategy (PFS). The PFS greatly enhances the solving performance of CFOA on complex optimization problems. At the same time, the position of each fisherman is updated based on the personalized fishing strategy, which not only makes the algorithm more detailed and comprehensive when searching the solution space but also enhances its ability to escape local optima and find the global optimum. Finally, to evaluate ICFOA, this paper utilizes ten commonly used benchmark functions and compares ICFOA against the original CFOA and five representative meta-heuristic algorithms to validate its effectiveness and advantages.
+
+---
+
+* Corresponding author.
+
+This work is supported by the Natural Science Foundation of Fujian Province under Grant 2021J011128.
+
+---
+
+The remainder of this paper is structured as follows: Section II provides the concept of the original CFOA, Section III details the proposed algorithm ICFOA, Sections IV and V demonstrate the experiment analysis in comparison with several popular metaheuristics under the CEC2020 test suite and Wilcoxon's rank-sum test, and Section VI concludes.
+
+## II. CATCH FISH OPTIMIZATION ALGORITHM
+
+The CFOA simulates the fishing behavior of village fishermen. To catch fish more easily, fishermen choose different fishing methods to catch fish. Similar to other metaheuristic algorithms (MAs), CFOA consists of three distinct stages: initialization, exploration, and exploitation.
+
+## 1) Initialization phase
+
+The matrix $F$ represents the location data of $N$ search agents in a $d$ -dimensional space, and the formula is shown below:
+
+$$
+F = {\left\lbrack \begin{matrix} {F}_{1,1} & {F}_{1,2} & \cdots & {F}_{1, d} \\ {F}_{2,1} & {F}_{2,2} & \cdots & {F}_{2, d} \\ \vdots & \vdots & \ddots & \vdots \\ {F}_{N,1} & {F}_{N,2} & \cdots & {F}_{N, d} \end{matrix}\right\rbrack }_{N \times d} \tag{1}
+$$
+
+$$
+{F}_{i, j} = l{b}_{j} + \left( {u{b}_{j} - l{b}_{j}}\right) \times \text{ rand } \tag{2}
+$$
+
+In Eq. (2), ${F}_{i, j}$ denotes the position of the $i$-th agent in the $j$-th dimension, where $u{b}_{j}$ and $l{b}_{j}$ represent the upper and lower bounds of the $j$-th dimension, respectively, and rand is a random number in the interval (0, 1).
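+
+Eqs. (1)-(2) amount to a standard uniform initialization; a minimal NumPy sketch:
+
+```python
+import numpy as np
+
+def initialize_population(N, d, lb, ub, rng=None):
+    """Uniform initialization per Eq. (2): F[i, j] = lb_j + (ub_j - lb_j) * rand."""
+    rng = rng if rng is not None else np.random.default_rng()
+    lb = np.asarray(lb, dtype=float)
+    ub = np.asarray(ub, dtype=float)
+    return lb + (ub - lb) * rng.random((N, d))   # shape (N, d)
+```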
+
+Using the current position data of each fisherman, we apply the fitness evaluation function fobj to determine their fitness scores, yielding the following fitness matrix:
+
+$$
+f = \operatorname{fobj}\left( F\right) = \left\lbrack \begin{matrix} {f}_{1} \\ {f}_{2} \\ \vdots \\ {f}_{N} \end{matrix}\right\rbrack \tag{3}
+$$
+
+In the above formula, ${f}_{1}$ represents the fitness value of the first fisherman, ${f}_{2}$ that of the second fisherman, and so on. A threshold of 0.5 evenly splits the iterations between exploration and exploitation: in the first part of the run (when ${EFs}/{MaxEFs} < {0.5}$), individuals focus on global exploration, while in the latter part (when ${EFs}/{MaxEFs} \geq {0.5}$), they shift towards exploitation.
+
+## 2) Individual and group fishing (exploration phase)
+
+In the early stage of exploration, fishermen rely mainly on independent search, using group encirclement as an aid. As the exploration proceeds, the environmental advantage gradually shifts from the fish to the fishermen. In addition, continuous capture leads to a decrease in fish population and capture rate, so fishermen shift from independent exploration to relying mainly on collective encirclement, with individual strengths as assistance. This transition is modeled with the capture rate parameter $\delta$ :
+
+$$
+\delta = {\left( 1 - \frac{3 \times {EFs}}{2 \times {MaxEFs}}\right) }^{\frac{3 \times {EFs}}{2 \times {MaxEFs}}} \tag{4}
+$$
+
+where ${EFs}$ and ${MaxEFs}$ indicate the current number and maximum number of estimates, respectively.
+
+## a) Individual fishing (when ${EFs}/{MaxEFs} < {0.5}$ )
+
+Fishermen disturb the water to float the fish, determine the position of the fish and adjust the direction of exploration. The update formula is as follows:
+
+$$
+{Exp} = \frac{{f}_{i} - {f}_{\text{pos }}}{{f}_{\max } - {f}_{\min }} \tag{5}
+$$
+
+$$
+R = {Dis} \times \sqrt{\left| {Exp}\right| } \times \left( {1 - \frac{EFs}{MaxEFs}}\right) \tag{6}
+$$
+
+$$
+{F}_{i, j}^{T + 1} = {F}_{i, j}^{T} + \left( {{F}_{{pos}, j}^{T} - {F}_{i, j}^{T}}\right) \times {Exp} + \operatorname{rand} \times s \times R \tag{7}
+$$
+
In the formulas above, ${Exp}$ represents the empirical analysis value obtained by the $i$-th fisherman using any other fisherman ${pos}$ (where ${pos} = 1, 2, \cdots, N$ and ${pos} \neq i$) as the reference object, with values ranging from $-1$ to $1$. ${f}_{\max}$ and ${f}_{\min}$ represent the highest and lowest fitness values, respectively, following the $T$-th complete position update, and $T$ is the iteration counter for the fishermen's positions. ${F}_{i,j}^{T}$ and ${F}_{i,j}^{T+1}$ are the positions of the $i$-th fisherman in the $j$-th dimension after the $T$-th and $(T+1)$-th updates. Dis denotes the Euclidean distance between the $i$-th individual and the reference point, while $s$ is a random unit vector in $d$ dimensions.
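The individual-fishing move of Eqs. (5)-(7) can be sketched in stdlib Python. The function name `individual_fishing` and the list-based position representation are illustrative assumptions, not the paper's implementation; the sketch also assumes $f_{\max} > f_{\min}$.

```python
import math
import random

def individual_fishing(F_i, F_pos, f_i, f_pos, f_max, f_min, EFs, MaxEFs):
    """One individual-fishing move; F_i and F_pos are position lists."""
    d = len(F_i)
    Exp = (f_i - f_pos) / (f_max - f_min)                  # Eq. (5)
    Dis = math.dist(F_i, F_pos)                            # Euclidean distance
    R = Dis * math.sqrt(abs(Exp)) * (1.0 - EFs / MaxEFs)   # Eq. (6)
    g = [random.gauss(0.0, 1.0) for _ in range(d)]         # random direction
    norm = math.sqrt(sum(x * x for x in g)) or 1.0
    s = [x / norm for x in g]                              # unit vector s
    return [F_i[j] + (F_pos[j] - F_i[j]) * Exp             # Eq. (7)
            + random.random() * s[j] * R for j in range(d)]

random.seed(0)
# at EFs == MaxEFs the random term vanishes, leaving only the pull toward F_pos
new_pos = individual_fishing([0.0, 0.0], [1.0, 1.0], 5.0, 2.0, 10.0, 0.0, 1000, 1000)
```

With $f_i = 5$, $f_{pos} = 2$, and fitness range $[0, 10]$, the move pulls the fisherman $30\%$ of the way toward the reference point.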
+
## b) Group fishing (when ${EFs}/{MaxEFs} \geq {0.5}$ )
+
Fishermen utilize nets to enhance their fishing efficiency and collaborate with each other. They organize into random groups of 3-4 members to collectively encircle potential targets, and by leveraging their individual mobility they can explore the area more comprehensively and accurately. The corresponding formulas are outlined below:
+
+$$
+\text{Centre} = \operatorname{mean}\left( {F}_{c}^{T}\right) \tag{8}
+$$
+
+$$
{F}_{c, i, j}^{T + 1} = {F}_{c, i, j}^{T} + {r}_{2} \times \left( {{\text{Centre}}_{c} - {F}_{c, i, j}^{T}}\right) + {\left( 1 - \frac{2 \times {EFs}}{MaxEFs}\right) }^{2} \times {r}_{3} \tag{9}
+$$
+
where $c$ represents a cluster of 3 to 4 individuals whose membership remains unaltered. ${\text{Centre}}_{c}$ is the target point of group $c$'s encirclement. ${F}_{c,i,j}^{T+1}$ and ${F}_{c,i,j}^{T}$ are the positions of the $i$-th fisherman in group $c$ in the $j$-th dimension after the $(T+1)$-th and $T$-th updates. ${r}_{2}$ represents the speed at which a fisherman moves toward the centre, varying individually within $(0,1)$. ${r}_{3}$ is the offset of the move, ranging over $(-1,1)$, and its influence decreases progressively as EFs increases.
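Eqs. (8)-(9) can be sketched in stdlib Python for one group; `group_fishing` and the per-dimension draws of $r_2$, $r_3$ are illustrative assumptions.

```python
import random

def group_fishing(group, i, EFs, MaxEFs):
    """Encirclement move of fisherman i in its group (Eqs. 8-9).

    `group` is a list of 3-4 position vectors.
    """
    d = len(group[i])
    centre = [sum(m[j] for m in group) / len(group)   # Eq. (8)
              for j in range(d)]
    shrink = (1.0 - 2.0 * EFs / MaxEFs) ** 2          # decaying offset weight
    new = []
    for j in range(d):
        r2 = random.random()             # speed toward the centre, (0,1)
        r3 = random.uniform(-1.0, 1.0)   # move offset, (-1,1)
        new.append(group[i][j] + r2 * (centre[j] - group[i][j]) + shrink * r3)
    return new

random.seed(1)
group = [[0.0, 0.0], [2.0, 2.0], [4.0, 4.0]]
# at EFs/MaxEFs == 0.5 the offset term vanishes, so the move stays on the
# segment between the fisherman and the group centre [2, 2]
moved = group_fishing(group, 0, 500, 1000)
```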
+
## 3) Collective capture (exploitation phase)
+
All fishermen now search under a uniform strategy, purposefully driving the remaining fish toward the same location and encircling it. The fishermen's positions during the trapping process are updated as follows:
+
+$$
+\sigma = \sqrt{\left( \frac{2\left( {1 - \frac{EFs}{MaxEFs}}\right) }{\left( {\left( 1 - \frac{EFs}{MaxEFs}\right) }^{2} + 1\right) }\right) } \tag{10}
+$$
+
+$$
+{F}_{i}^{T + 1} = \text{ Gbest } + {GD}\left( {0,\frac{{r}_{4} \times \sigma \times \left| {\text{ mean }\left( F\right) - \text{ Gbest }}\right| }{3}}\right) \tag{11}
+$$
+
Here, GD is a Gaussian distribution with mean $\mu = 0$ whose overall spread $\sigma$ decreases from 1 to 0 as the number of evaluations increases. ${F}_{i}^{T+1}$ is the position of the $i$-th fisherman after the $(T+1)$-th update. Mean$(F)$ signifies the vector of per-dimension mean values at the centre of the fishermen's positions, while Gbest indicates the global optimum found so far.
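The exploitation move of Eqs. (10)-(11) can be sketched in stdlib Python; the function names and the per-dimension draw of $r_4$ are illustrative assumptions, not the paper's code.

```python
import math
import random

def encircle_sigma(EFs, MaxEFs):
    """Shrinking spread of Eq. (10): 1 at the first evaluation, 0 at the last."""
    t = 1.0 - EFs / MaxEFs
    return math.sqrt(2.0 * t / (t * t + 1.0))

def collective_capture(gbest, mean_F, EFs, MaxEFs):
    """Eq. (11): Gaussian sampling around the global best Gbest."""
    sigma = encircle_sigma(EFs, MaxEFs)
    new = []
    for j in range(len(gbest)):
        r4 = random.random()
        scale = r4 * sigma * abs(mean_F[j] - gbest[j]) / 3.0
        new.append(gbest[j] + random.gauss(0.0, scale))  # GD(0, scale^2)
    return new

# when the budget is exhausted sigma == 0, so the sample collapses onto Gbest
final = collective_capture([1.0, 2.0], [3.0, 4.0], 1000, 1000)
```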
+
+## III. Proposed Algorithm
+
+## A. Adaptive Gaussian Perturbation (AGP)
+
Adaptive Gaussian Perturbation (AGP) enables the optimization algorithm to flexibly balance exploration and exploitation across iteration stages, as follows:
+
+$$
+{\sigma }_{p} = \left( {1 - \frac{EFs}{MaxEFs}}\right) \cdot \operatorname{std}\left( {F}_{i}^{T}\right) \cdot \exp \left( {-\frac{EFs}{{MaxEFs}/{10}}}\right) \tag{12}
+$$
+
+$$
+{F}_{i}^{T + 1} = {F}_{i}^{T} + \mathcal{N}\left( {0,{\sigma }_{p}^{2}}\right) \tag{13}
+$$
+
where ${\sigma }_{p}$ is the perturbation strength at the current iteration, std is the standard deviation of the current solution, and $\mathcal{N}\left( {0,{\sigma }_{p}^{2}}\right)$ is a normally distributed random variable with mean 0 and variance ${\sigma }_{p}^{2}$.
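Eqs. (12)-(13) can be sketched in stdlib Python. The paper leaves the axis of the standard deviation implicit; the sketch takes it over the components of the current solution, which is an assumption.

```python
import math
import random
import statistics

def agp_strength(EFs, MaxEFs, solution):
    """Perturbation strength of Eq. (12): linear decay times exp decay."""
    decay = 1.0 - EFs / MaxEFs
    return decay * statistics.pstdev(solution) * math.exp(-EFs / (MaxEFs / 10.0))

def agp_perturb(F_i, EFs, MaxEFs):
    """Eq. (13): add zero-mean Gaussian noise to the current solution."""
    sigma_p = agp_strength(EFs, MaxEFs, F_i)
    return [x + random.gauss(0.0, sigma_p) for x in F_i]

# strength equals the solution's std at the first evaluation ...
s0 = agp_strength(0, 100, [1.0, 3.0])
# ... and vanishes at the last one, leaving the solution unchanged
unchanged = agp_perturb([1.0, 2.0], 100, 100)
```

The $\exp(-10\,{EFs}/{MaxEFs})$ factor makes the perturbation die off quickly, so AGP mainly widens the search early on.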
+
+## B. Personalized Fishing Strategy (PFS)
+
PFS is a widely adopted and effective strategy that enhances an algorithm's search capability within the solution space. Overall, PFS enhances the exploitation capability and accelerates the convergence speed of metaheuristic algorithms (MAs). The principle of PFS is to generate random action factors $\alpha$ and $\beta$ from the original action step, update the position according to the selected action, and adapt the choice via the capture parameter $\delta$, which ties the whole fishing process together more closely.
+
+$$
+{F}_{i}^{T + 1} = {F}_{i}^{T} + \text{ step } \cdot {kd} \tag{14}
+$$
+
+$$
+{kd} = {0.5} + {0.5} * \text{rand} \tag{15}
+$$
+
where step represents the action choice and kd indicates the fisherman's action skill proficiency, whose value lies in $\left( {0.5},1\right)$ according to Eq. (15). The personalized fishing strategy includes two factors, "freehand capture" and "using tools", which are used to improve the global search capability of the algorithm. The update formula is shown as follows:
+
+$$
\text{step} = \begin{cases} \alpha = 2 \times \left( {1 - \frac{EFs}{MaxEFs}}\right) \times \left( {{0.4} \times \delta }\right), & \text{if rand} > {0.5} \\ \beta = 1 \times \left( {1 - \frac{EFs}{MaxEFs}}\right), & \text{otherwise} \end{cases} \tag{16}
+$$
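The step selection of Eqs. (14)-(16) can be sketched in stdlib Python. Applying the scalar step uniformly to every dimension is an assumption here; the paper does not spell out the per-dimension form.

```python
import random

def pfs_step(EFs, MaxEFs, delta):
    """Eq. (16): pick the 'using tools' (alpha) or 'freehand' (beta) step."""
    decay = 1.0 - EFs / MaxEFs
    if random.random() > 0.5:
        return 2.0 * decay * (0.4 * delta)   # alpha branch
    return 1.0 * decay                       # beta branch

def pfs_update(F_i, EFs, MaxEFs, delta):
    """Eqs. (14)-(15): move by step * kd, with proficiency kd in (0.5, 1)."""
    kd = 0.5 + 0.5 * random.random()         # Eq. (15)
    step = pfs_step(EFs, MaxEFs, delta)
    return [x + step * kd for x in F_i]      # Eq. (14)

random.seed(0)
step_alpha = pfs_step(0, 100, 1.0)   # seed 0 draws rand > 0.5 -> alpha = 0.8
```

Both branches shrink linearly with the evaluation budget, so PFS moves become purely local refinements near the end of a run.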
+
+## C. Details of ICFOA
+
CFOA is widely used and easy to implement for optimization tasks. However, its search capabilities (exploration and exploitation) are limited when tackling complex problems, making convergence within a finite number of iterations difficult or even unattainable. To address these challenges, this paper integrates AGP and PFS into CFOA to obtain ICFOA for global optimization. These strategies improve exploitation efficiency and accelerate the convergence rate of the conventional CFOA.
+
+The pseudo-code of ICFOA is shown in Algorithm 1, and Fig. 1 illustrates the flowchart of ICFOA.
+
Algorithm 1 The pseudo-code of ICFOA

---

Initialize parameters
Initialize the population Fisher
While (EFs $\leq$ MaxEFs)
  Compute the fitness values and obtain the global best solution (Gbest)
  If EFs/MaxEFs $< {0.5}$
    Compute $\delta$ by Eq. (4)
    Randomly shuffle the order of the fishermen
    If p $< \delta$
      Update the fisherman's position using Eq. (7)
    Else
      Randomly group the fishermen
      Update the fisherman's position using Eq. (9)
    End
  Else
    Update the fisherman's position using Eq. (13)
  End
End
Output the global best solution

---
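The overall loop can be sketched end-to-end in stdlib Python. This is a simplified, hedged reconstruction, not the paper's implementation: the greedy acceptance rule, the Gaussian noise standing in for Eq. (7)'s $\text{rand} \times s \times R$ term, the whole-population centre standing in for the 3-4 member groups of Eq. (8), and all constants are illustrative assumptions.

```python
import math
import random

def icfoa(fobj, lb, ub, dim, N=30, max_efs=3000):
    """Simplified sketch of Algorithm 1 (minimization)."""
    F = [[lb + (ub - lb) * random.random() for _ in range(dim)]  # Eq. (2)
         for _ in range(N)]
    fit = [fobj(x) for x in F]
    efs = N
    b = min(range(N), key=lambda i: fit[i])
    best_x, best_f = F[b][:], fit[b]
    while efs < max_efs:
        t = efs / max_efs
        for i in range(N):
            if t < 0.5:                                  # exploration
                delta = (1.0 - 1.5 * t) ** (1.5 * t)     # Eq. (4)
                if random.random() < delta:              # individual fishing
                    exp_ = (fit[i] - best_f) / (max(fit) - min(fit) + 1e-12)
                    cand = [F[i][j] + (best_x[j] - F[i][j]) * exp_
                            + random.gauss(0.0, 0.1 * (1.0 - t))
                            for j in range(dim)]
                else:                                    # group encirclement
                    centre = [sum(F[k][j] for k in range(N)) / N
                              for j in range(dim)]
                    cand = [F[i][j] + random.random() * (centre[j] - F[i][j])
                            for j in range(dim)]
            else:                                        # exploitation, Eq. (13)
                sigma = (1.0 - t) * math.exp(-10.0 * t)
                cand = [x + random.gauss(0.0, sigma) for x in best_x]
            cand = [min(max(v, lb), ub) for v in cand]   # clamp to bounds
            fc = fobj(cand)
            efs += 1
            if fc < fit[i]:                              # greedy acceptance
                F[i], fit[i] = cand, fc
            if fc < best_f:
                best_x, best_f = cand[:], fc
            if efs >= max_efs:
                break
    return best_x, best_f

def sphere(x):
    return sum(v * v for v in x)

random.seed(42)
best_x, best_f = icfoa(sphere, -100.0, 100.0, dim=5)
```

On the 5-dimensional sphere function the sketch only ever accepts improvements, so the returned fitness is no worse than the best random initial point.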
+
+
+
+Fig. 1. Flowchart of the proposed algorithm ICFOA
+
+## D. Computation Complexity of ICFOA
+
Initialization and position updates are the core components of ICFOA. The computational complexity of initialization is $\mathrm{O}\left( {N \times \text{Dim}}\right)$, where $N$ represents the population size and Dim denotes the dimensionality. The complexity of fishing with bare hands and with fishing nets varies but is at most $\mathrm{O}\left( {T \times N \times \text{Dim}}\right)$, where $T$ is the maximum number of evaluations. Both the PFS and AGP methods also have a complexity of $\mathrm{O}\left( {T \times N \times \text{Dim}}\right)$. Therefore, the overall computational complexity of ICFOA is $\mathrm{O}\left( {\left( {4 \times T + 1}\right) \times N \times \text{Dim}}\right)$.
+
+## IV. RESULTS OF GLOBAL OPTIMIZATION EXPERIMENTS
+
This section uses the 10 benchmark functions of CEC2020 [7] to evaluate the optimization performance of the proposed ICFOA. First, the definitions of the 10 benchmark test functions are introduced. Second, the experimental setup and the comparison groups, including other well-known MAs, are described in detail.
+
+## A. Definition of 10 Benchmark Functions
+
Benchmark functions are essential for assessing the performance of optimization algorithms. This paper selects 10 representative benchmark functions: the unimodal function F1, the basic multimodal functions F2-F4, the hybrid functions F5-F7, and the composition functions F8-F10. Table 1 provides a detailed description of these functions; D denotes the dimensionality. Unimodal functions have a single global optimum, making them suitable for evaluating the exploitation capability of metaheuristic algorithms (MAs). Conversely, multimodal functions possess multiple local optima alongside a single global optimum, providing a basis for assessing the exploration capability of MAs and their ability to escape local optima.
+
+TABLE I. DEFINITION OF 10 BENCHMARK FUNCTIONS
+
| $\mathbf{{No}}$ | Property | D | Range | ${\mathbf{f}}_{\text{min}}$ |
| --- | --- | --- | --- | --- |
| F1 | Unimodal Function | 10 | ${\left\lbrack -{100},{100}\right\rbrack }^{\mathrm{D}}$ | 100 |
| F2 | Basic Functions | 10 | ${\left\lbrack -{100},{100}\right\rbrack }^{\mathrm{D}}$ | 1100 |
| F3 | Basic Functions | 10 | ${\left\lbrack -{100},{100}\right\rbrack }^{\mathrm{D}}$ | 700 |
| F4 | Basic Functions | 10 | ${\left\lbrack -{100},{100}\right\rbrack }^{\mathrm{D}}$ | 1900 |
| F5 | Hybrid Functions | 10 | ${\left\lbrack -{100},{100}\right\rbrack }^{\mathrm{D}}$ | 1700 |
| F6 | Hybrid Functions | 10 | ${\left\lbrack -{100},{100}\right\rbrack }^{\mathrm{D}}$ | 1600 |
| F7 | Hybrid Functions | 10 | ${\left\lbrack -{100},{100}\right\rbrack }^{\mathrm{D}}$ | 2100 |
| F8 | Composition Functions | 10 | ${\left\lbrack -{100},{100}\right\rbrack }^{\mathrm{D}}$ | 2000 |
| F9 | Composition Functions | 10 | ${\left\lbrack -{100},{100}\right\rbrack }^{\mathrm{D}}$ | 2400 |
| F10 | Composition Functions | 10 | ${\left\lbrack -{100},{100}\right\rbrack }^{\mathrm{D}}$ | 2500 |
+
+## B. Experimental Configuration
+
We utilize the aforementioned functions to evaluate the performance of ICFOA. To ensure the experiment's representativeness, we compare the enhanced algorithm with the basic CFOA and five widely used metaheuristic algorithms: ROA [8], AOA [9], SFO [10], SHO [11], and SCA [12]. To ensure an unbiased comparison, we set the maximum number of evaluations to $T = {100},{000}$, the population size to $N = {30}$, and the dimensionality to $\operatorname{Dim} = {10}$. Furthermore, each test is conducted independently 30 times, with the best results emphasized in bold.
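The reporting protocol (30 independent runs summarized by Best, Mean, and Std) can be reproduced with the stdlib `statistics` module. The run values below are placeholders, not the paper's data.

```python
import statistics

# Hypothetical final fitness values from 30 independent trials (placeholders)
runs = [123.9, 130.2, 128.7, 125.1, 140.6, 133.3, 127.8, 126.4, 138.9, 129.5,
        124.7, 131.0, 135.2, 122.8, 129.9, 136.4, 128.1, 132.7, 126.9, 130.8,
        125.5, 134.1, 127.2, 129.0, 131.6, 124.3, 137.5, 128.8, 133.9, 126.1]

best = min(runs)               # "Best" column of Table III
mean = statistics.mean(runs)   # "Mean" column
std = statistics.stdev(runs)   # "Std" column (sample standard deviation)
```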
+
+TABLE II. ALGORITHM PARAMETER SETTINGS
+
| $\mathbf{{Algorithm}}$ | Parameters |
| --- | --- |
| ICFOA | $\alpha = {0.4}$ , |
| CFOA | $\alpha = {0.4},\beta = {0.5}$ |
| ROA | c=0.2 |
| AOA | $\alpha = 5;\mu = {0.5}$ ; |
| SCSO | S=2, R=[-1,1]; |
| SHO | u=0.03, v=0.03, l=0.03 |
| SCA | B=3 |
+
+## C. Statistical Analysis of 10 Benchmark Functions
+
This part compares ICFOA with CFOA and five other algorithms across 10 benchmark functions, focusing on the optimal value (Best), mean value (Mean), and standard deviation (Std) [13]. Table 3 provides the details of the experimental outcomes. From the table, it is evident that ICFOA performs well on most functions, often achieving the minimum Best, Mean, and Std values. In particular, for F1-F6, ICFOA consistently attains the theoretical optimal solution, whereas CFOA only approximates it, highlighting ICFOA's superior exploitation capability. For F2, ICFOA demonstrates better global optimization performance than the other prominent algorithms. Although ICFOA finds only a suboptimal solution for F3, its convergence precision surpasses that of the others. For F4, F5, and F6, ICFOA reaches the theoretical optimum. However, for F4 and F9, ICFOA's performance is slightly inferior to the SFO algorithm.
+
Considering that MAs are stochastic algorithms, this paper employs the Wilcoxon rank-sum test to strengthen the statistical analysis and assess the significance of the results. Notably, if the $p$ value is below 0.06, there is a significant difference between the two data groups; conversely, when the $p$ value is at least 0.06, this indicates minimal difference between the data sets. Additionally, "NaN" is used in this paper to represent cases where no significant difference exists between the groups. The detailed results of the Wilcoxon rank-sum test are displayed in Table 4. The results show that functions F4, F6, and F10 contain "NaN" because the optimization results of ICFOA, CFOA, and SHO all achieve the theoretical optimal solution, leading to minimal differences among the three data groups. For the other functions, the performance of ICFOA differs significantly from the other algorithms. However, the Wilcoxon rank-sum test measures only the statistical difference between algorithms, not their overall performance. Consequently, by integrating the insights from Tables 3 and 4, it is evident that the improvements to CFOA presented in this paper are highly effective.
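For reference, a two-sided Wilcoxon rank-sum p-value can be computed with a stdlib-only normal approximation, using average ranks for ties. This is a sketch: production code would use a library routine such as `scipy.stats.ranksums`, and small-sample exact tests differ from this approximation.

```python
import math

def ranksum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value (normal approximation)."""
    pooled = sorted((v, g) for g, xs in ((0, a), (1, b)) for v in xs)
    n = len(pooled)
    ranks = [0.0] * n
    i = 0
    while i < n:                        # average ranks over tied values
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1
        avg = (i + 1 + j) / 2.0         # mean of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    n1, n2 = len(a), len(b)
    W = sum(r for r, (_, g) in zip(ranks, pooled) if g == 0)  # rank sum of a
    mu = n1 * (n1 + n2 + 1) / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (W - mu) / sd
    return math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p-value

# clearly separated samples give a small p-value
p = ranksum_p(list(range(1, 11)), list(range(11, 21)))
```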
+
+TABLE III. STATISTICS ABOUT THE 10 TEST FUNCTIONS IN THE CEC2020
+
| Function | Metric | ICFOA | CFOA | ROA | AOA | SFO | SHO | SCA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| F1 | Best | 123.9163085 | 1303.684774 | 2435.264611 | 2484.316844 | 2898.991125 | 27552.63251 | 17016.33261 |
| | Mean | 4591.728112 | 202423937.5 | 26245.21152 | 26812.31168 | 28990.51251 | 28799.22518 | 18504.45167 |
| | Std | 4137.802543 | 68441680.22 | 10449416987 | 9083474822 | 1924069.487 | 4948400618 | 13440911059 |
| F2 | Best | 12244.11095 | 16605.71202 | 29808.77688 | 28990.18679 | 35461.62181 | 34698.94999 | 31568.08871 |
| | Mean | 18161.78534 | 23564.86621 | 31381.50853 | 30430.72471 | 35488.16642 | 36163.21061 | 32488.08215 |
| | Std | 3545.047924 | 4192.739178 | 114655.1359 | 9077.919527 | 24045.26547 | 103551.2026 | 5910.424614 |
| F3 | Best | 894.5056084 | 1683.533974 | 3908.81377 | 3806.349814 | 4217.387789 | 4066.605567 | 3489.622851 |
| | Mean | 927.3464718 | 2120.641269 | 4011.883834 | 3909.740683 | 4221.83685 | 4204.438691 | 3765.087815 |
| | Std | 40.84472928 | 270.9768306 | 56.03266113 | 53.70544871 | 30.76279062 | 75.21084023 | 180.6559676 |
| F4 | Best | 1911.571228 | 1972.715291 | 1926.112879 | 1913.360194 | 1942.664109 | 1926.352858 | 5871.613809 |
| | Mean | 1903.184835 | 2003.719718 | 1938.664563 | 1911.360491 | 1908.059367 | 1933.542475 | 33306.19437 |
| | Std | 1.974573432 | 20.75907556 | 48.00546352 | 7.62E+00 | 5.236162188 | 10.3265855 | 41611.3348 |
| F5 | Best | 771627.8334 | 2622829.749 | 648036492.7 | 897453834.9 | 1910249217 | 1782936241 | 274232285.4 |
| | Mean | 1572147.454 | 4807902.622 | 1050612701 | 1321917258 | 3048989087 | 2526379882 | 446370493.4 |
| | Std | 631467.6545 | 1843676.178 | 335444387.9 | 299582965.4 | 550649621.7 | 386586151.4 | 120087797.6 |
| F6 | Best | 3770.079006 | 6119.393228 | 20830.74823 | 19888.73262 | 41146.71723 | 33215.10053 | 12668.3214 |
| | Mean | 4385.997062 | 6830.04364 | 25109.30245 | 25960.73394 | 46688.73657 | 40988.94545 | 15436.56211 |
| | Std | 283.7781916 | 495.5286566 | 5039.633742 | 4268.600462 | 1947.267797 | 4555.865169 | 1484.988875 |
| F7 | Best | 548601.3933 | 1444325.148 | 202662857.8 | 192645390.1 | 469380299.5 | 583159047.5 | 78210737.68 |
| | Mean | 1424855.595 | 2559045.072 | 334117181.3 | 402478936.1 | 470079347.8 | 842041986.2 | 142352823.7 |
| | Std | 432943.2661 | 961142.875 | 80378422.23 | 174899077.8 | 5862372.585 | 141415505.3 | 45868952.47 |
| F8 | Best | 15713.23631 | 19186.2209 | 31516.95746 | 31589.84888 | 37352.98584 | 36694.26248 | 32989.46542 |
| | Mean | 21014.047 | 25657.00443 | 33549.03571 | 32922.86116 | 37361.35642 | 38213.17431 | 34579.26347 |
| | Std | 3786.519778 | 4731.682899 | 6231.863806 | 7026.180794 | 5623.056834 | 7575.350727 | 6658.448304 |
| F9 | Best | 3382.896377 | 4089.081127 | 7958.084137 | 9625.401505 | 1353.90174 | 10910.13186 | 6418.969662 |
| | Mean | 3423.654041 | 4254.403066 | 9835.102409 | 11574.95115 | 13419.51066 | 13443.41938 | 6865.220732 |
| | Std | 22.39302112 | 114.1072971 | 1617.806356 | 1166.977396 | 254.2600235 | 1589.567702 | 212.5751176 |
| F10 | Best | 3390.009962 | 3725.611385 | 21898.12822 | 25910.18877 | 35026.3259 | 30898.44744 | 15251.35182 |
| | Mean | 3539.668413 | 3814.755832 | 26384.31184 | 29091.06983 | 35026.85245 | 32999.06763 | 17488.29627 |
| | Std | 36.32133408 | 81.68764222 | 2479.008781 | 1985.004222 | 52.42536743 | 890.9886491 | 1429.3475 |
+
## D. Convergence Analysis of ICFOA and Comparative Algorithms
+
The convergence curve is a crucial metric for assessing an algorithm's performance. Figure 2 displays the convergence curves of these algorithms on several benchmark functions. The results indicate that ICFOA demonstrates notably faster convergence on the unimodal functions F1 and F2. For F5, although ICFOA does not achieve the best performance, its results are still very close to the optimal solution, underscoring ICFOA's strong exploitation capability. For functions F6 and F7, ICFOA performs similarly to the ROA algorithm but with a quicker convergence rate. For F9, the ICFOA, CFOA, and SCA algorithms all converge rapidly. Overall, ICFOA exhibits excellent convergence capability across different types of functions.
+
+TABLE IV. STATISTICAL RESULTS OF ALGORITHMS ON 10 BENCHMARK FUNCTIONS USING WILCOXON RANK-SUM TEST
+
(Each entry is the $p$-value of ICFOA vs. the algorithm in the column header.)

| Function | CFOA | ROA | AOA | SCA | SFO | SHO | SCA |
| --- | --- | --- | --- | --- | --- | --- | --- |
| F1 | $3.34 \times 10^{-1}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ |
| F2 | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ |
| F3 | $4.57 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ |
| F4 | NaN | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | NaN | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ |
| F5 | $4.83 \times 10^{-1}$ | $1.07 \times 10^{-7}$ | $3.02 \times 10^{-11}$ | $3.02 \times 10^{-11}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ |
| F6 | $4.62 \times 10^{-10}$ | $3.02 \times 10^{-11}$ | $3.02 \times 10^{-11}$ | $3.02 \times 10^{-11}$ | $1.21 \times 10^{-12}$ | NaN | $1.21 \times 10^{-12}$ |
| F7 | $6.10 \times 10^{-3}$ | $3.11 \times 10^{-1}$ | $7.98 \times 10^{-2}$ | $3.02 \times 10^{-11}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ |
| F8 | $1.07 \times 10^{-7}$ | $3.02 \times 10^{-11}$ | $1.22 \times 10^{-2}$ | $3.02 \times 10^{-11}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ |
| F9 | $1.21 \times 10^{-12}$ | $4.57 \times 10^{-12}$ | $1.22 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ |
| F10 | NaN | $1.21 \times 10^{-12}$ | NaN | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ | $1.21 \times 10^{-12}$ |
+
+
+
+
+
+Fig. 2. Convergence of ICFOA and Comparison Algorithm on Some Functions
+
+## V. REDUCER STRUCTURE MODEL DESIGN PROBLEM
+
The model for the reducer design problem is illustrated in Figure 3. The primary objective is to minimize the mass of the reducer while satisfying the given constraints. This problem involves seven decision variables and eleven constraints; for detailed descriptions of the variables and constraints, refer to reference [14]. The mathematical formulation of the problem is identical to that in reference [14].
+
Table 5 presents the comparison results of ICFOA, CFOA, ROA, AOA, SFO, SHO, and SCA in solving the reducer design problem. It is evident that ICFOA produces strong results while effectively meeting the constraints.
+
+TABLE V. COMPARISON RESULTS OF DIFFERENT OPTIMIZATION ALGORITHMS
+
| $\mathbf{{Algorithm}}$ | ICFOA | CFOA | ROA | AOA | SFO | SHO |
| --- | --- | --- | --- | --- | --- | --- |
| ${x}_{1}$ | 0.501 | 0.531 | 0.54 | 0.57 | 0.568 | 0.525 |
| ${x}_{2}$ | 1.231 | 1.262 | 1.257 | 1.27 | 1.241 | 1.239 |
| ${x}_{3}$ | 0.515 | 0.540 | 0.563 | 0.54 | 0.517 | 0.528 |
| ${x}_{4}$ | 1.096 | 1.149 | 1.167 | 1.14 | 1.246 | 1.200 |
| ${x}_{5}$ | 0.517 | 0.558 | 0.631 | 0.64 | 0.534 | 0.781 |
| ${x}_{6}$ | 0.486 | 0.511 | 0.538 | 0.54 | 0.941 | 1.160 |
| ${x}_{7}$ | 0.503 | 0.510 | 0.528 | 0.50 | 0.525 | 0.564 |
| ${x}_{8}$ | 0.346 | 0.351 | 0.472 | 0.27 | 0.332 | 0.340 |
| ${x}_{9}$ | 0.342 | 0.344 | 0.356 | 0.36 | 0.336 | 0.319 |
| cost | 23.01 | 23.20 | 23.38 | 23.55 | 23.45 | 23.92 |
+
+
+
+Fig. 3. Reducer structure model
+
+## VI. CONCLUSION
+
Building on CFOA, this paper introduces the Personalized Fishing Strategy (PFS), together with Adaptive Gaussian Perturbation (AGP), to propose an improved CFOA (ICFOA). Although CFOA has been applied to various design problems, ICFOA addresses some of its limitations, such as a tendency toward stagnation and entrapment in local optima. By incorporating these strategies, ICFOA enhances global search capability and thereby improves its ability to escape local optima. To assess the effectiveness of ICFOA, this paper compared it with CFOA and five other well-established algorithms on 10 benchmark functions. The results demonstrate that ICFOA performs exceptionally well across most benchmark functions and on a practical engineering problem.
+
+## REFERENCES
+
[1] H. Jia, H. Rao, C. Wen, and S. Mirjalili, "Crayfish optimization algorithm," Artificial Intelligence Review, vol. 56, suppl. 2, pp. 1919-1979, 2023.

[2] M. Abdel-Basset, L. A. Shawky, R. K. Chakrabortty, and M. J. Ryan, "An improved whale optimization algorithm for solving engineering optimization problems," Knowledge-Based Systems, vol. 239, p. 107886, 2022.

[3] S. Gupta, A. Saini, and K. Deep, "Modified grey wolf optimizer for global optimization," Applied Soft Computing, vol. 96, p. 106529, 2020.

[4] S.-A. Ahmadi, "Human behavior-based optimization: a novel metaheuristic approach to solve complex optimization problems," Neural Computing and Applications, vol. 28, no. S1, pp. 233-244, 2016, doi:10.1007/s00521-016-2334-4.

[5] H. Jia, Q. Wen, Y. Wang, et al., "Catch fish optimization algorithm: a new human behavior algorithm for solving clustering problems," Cluster Computing, 2024.

[6] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67-82, 1997.

[7] Y. Tian, R. Cheng, X. Zhang, and Y. Jin, "PlatEMO: A MATLAB platform for evolutionary multi-objective optimization [educational forum]," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC), 2020, pp. 415-422, doi:10.1109/CEC48606.2020.9185783.

[8] H. Jia, X. Peng, and C. Lang, "Remora optimization algorithm," Expert Systems With Applications, vol. 185, p. 115665, 2021.

[9] L. Abualigah, A. Diabat, S. Mirjalili, et al., "The arithmetic optimization algorithm," Computer Methods in Applied Mechanics and Engineering, vol. 376, p. 113609, 2021.

[10] G. F. Gomes, S. S. da Cunha, and A. C. Ancelotti, "A sunflower optimization (SFO) algorithm applied to damage identification on laminated composite plates," Engineering with Computers, vol. 35, pp. 619-626, 2019.

[11] S. Zhao, T. Zhang, S. Ma, et al., "Sea-horse optimizer: A novel nature-inspired meta-heuristic for global optimization problems," Applied Intelligence, vol. 53, no. 10, pp. 11833-11860, 2023.

[12] S. Mirjalili, "SCA: A sine cosine algorithm for solving optimization problems," Knowledge-Based Systems, vol. 96, pp. 120-133, 2016.

[13] C. Wen, H. Jia, D. Wu, H. Rao, S. Li, Q. Liu, and L. Abualigah, "Modified remora optimization algorithm with multistrategies for global optimization problem," Mathematics, vol. 10, p. 3604, 2022.

[14] S. P. Radzevich, Gear Design Simplified. Boca Raton: CRC Press, 2018, pp. 15-54.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/d9xHa3zSc0/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/d9xHa3zSc0/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..5a1af2ac98bb28ca9d14e547b7dca8127d8e3859
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/d9xHa3zSc0/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,553 @@
+§ IMPROVED CATCH FISH OPTIMIZATION ALGORITHM WITH PERSONALIZED FISHING STRATEGY FOR GLOBAL OPTIMIZATION
+
${1}^{\text{st}}$ Bowen Xue, School of Electrical and Information Engineering, Northeast Petroleum University, Daqing, China, xuebowen@stu.nepu.edu.cn

${2}^{\text{nd}}$ Heming Jia*, School of Information Engineering, Sanming University, Sanming, China, jiaheming@fjsmu.edu.cn

${3}^{\text{rd}}$ Honghua Rao, School of Electrical and Information Engineering, Northeast Petroleum University, Daqing, China, 20200862235@fjsmu.edu.cn

${4}^{\text{th}}$ Jinrui Zhang, School of Information Engineering, Sanming University, Sanming, China, ruiruiz2308@163.com

${5}^{\text{th}}$ Yilong Du, School of Information and Electrical Engineering, Heilongjiang Bayi Agricultural University, Daqing, China, wy15093488812@163.com

${6}^{\text{th}}$ Zekai Ai, College of Design and Engineering, National University of Singapore, Singapore, aizekai@u.nus.edu
+
Abstract-The Catch Fish Optimization Algorithm (CFOA) is a new meta-heuristic optimization algorithm based on human behavior, in which search agents simulate rural fishermen fishing in a pond. CFOA therefore consists of two update phases: an exploration phase and an exploitation phase. However, it still falls into local optima and suffers from a low convergence rate. To this end, we propose an improved catch fish optimization algorithm (ICFOA) based on a personalized fishing strategy. First, adaptive Gaussian perturbation is applied during the exploration stage to increase the global search capability, expand the search range, and improve efficiency while avoiding entrapment in local optima. Then, based on the personalized fishing strategy, each fisherman's position is updated by randomly selecting the "freehand fishing" factor or the "using fishing net" factor, accelerating the algorithm's convergence. Furthermore, comparative experiments on the CEC2020 test suite compare the performance of ICFOA with other excellent meta-heuristics, and Wilcoxon's rank-sum test is used to verify the statistical validity of the experimental results. Moreover, ICFOA's performance on reducer design indicates that it can obtain the optimal solution in solving practical engineering optimization problems. The results show that ICFOA is more competitive than the original CFOA.
+
+Keywords-Catch Fish Optimization Algorithm, adaptive Gaussian perturbation, Personalized Fishing Strategy
+
+§ I. INTRODUCTION
+
+In the current era of rapid technological advancement, optimization problems hold a critical position across various domains, including engineering design, economic management, and computer science. Examples include the Crayfish Optimization Algorithm (COA) [1], Whale Optimization Algorithm (WOA) [2], and Grey Wolf Optimization (GWO) [3]. COA excels in exploration but may converge slowly. WOA balances exploration and exploitation well but can get trapped in local optima. GWO is strong in convergence but requires careful parameter tuning. Human behavior-based optimization algorithms are a class of optimization techniques designed to tackle complex optimization problems by emulating human or other biological behaviors and decision-making processes. By mimicking natural phenomena such as evolution, foraging, and social interactions, these algorithms can effectively search and optimize complex solution spaces.
+
+In solving problems related to economic scheduling, functional optimization, and engineering design, human behavior-based optimization algorithms are especially adept at avoiding local optima and discovering global optima or solutions close to the global optimum. For instance, the Human Behavior-Based Optimization (HBBO) [4] algorithm models human behavior patterns, particularly focusing on how humans learn and solve problems through interaction and communication. This algorithm integrates multiple human behavioral traits, such as experiential learning, imitation, social interaction, and collaboration, to achieve efficient search and optimization in complex problems.
+
In 2024, Heming Jia et al. [5] proposed an innovative optimization algorithm inspired by human behavior, the catch fish optimization algorithm (CFOA). The main inspiration for CFOA comes from the fishing practices of fishermen, and the algorithm contains update rules based on different fishing practices. As intelligent humans, fishermen often use a variety of ways to find fish, such as sharing fishing experiences and using different fishing tools, so their location update rules are based on both individuals and teams. Furthermore, as capture rates decline, fishermen choose whether to change their fishing strategy. Experimental results show that the proposed algorithm outperforms others in finding the optimal solution and in convergence speed. However, as stated by the NFL theorem [6], given the diversity and complexity of optimization problems, no universal algorithm can be directly applied to address all types of optimization challenges. This reality requires the exploration and adoption of more rigorous and targeted strategies to continuously improve and optimize algorithm design.
+
Like many optimization algorithms, the original CFOA cannot completely avoid the limitations of low convergence efficiency and easy entrapment in local optima on specific optimization tasks. Given this, optimizing and upgrading CFOA can not only improve its efficiency but also broaden its scope of application. Therefore, this paper presents an improved CFOA (ICFOA) based on a personalized fishing strategy. PFS greatly enhances the solving performance of CFOA on complex optimization problems. At the same time, the fishermen's positions are updated according to the personalized fishing strategy, which not only makes the search of the solution space more detailed and comprehensive but also enhances the algorithm's ability to escape local optima and find the global optimum. Finally, to test ICFOA, this paper utilizes ten commonly used benchmark functions and conducts comparative experiments against the original CFOA and five representative meta-heuristic algorithms to validate the effectiveness and advantages of ICFOA.
+
+* Corresponding author.
+
+This work is supported by the Natural Science Foundation of Fujian Province under Grant 2021J011128.
+
+The remainder of this paper is structured as follows: Section II provides the concept of the original CFOA, Section III details the proposed algorithm ICFOA, Sections IV and V demonstrate the experiment analysis in comparison with several popular metaheuristics under the CEC2020 test suite and Wilcoxon's rank-sum test, and Section VI concludes.
+
+§ II. CATCH FISH OPTIMIZATION ALGORITHM
+
+The CFOA simulates the fishing behavior of village fishermen. To catch fish more easily, fishermen choose different fishing methods to catch fish. Similar to other metaheuristic algorithms (MAs), CFOA consists of three distinct stages: initialization, exploration, and exploitation.
+
+§ 1) INITIALIZATION PHASE
+
+The matrix $F$ represents the location data of $N$ search agents in a $d$ -dimensional space, and the formula is shown below:
+
+$$
F = {\left\lbrack \begin{matrix} {F}_{1,1} & {F}_{1,2} & \cdots & {F}_{1,d} \\ {F}_{2,1} & {F}_{2,2} & \cdots & {F}_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ {F}_{N,1} & {F}_{N,2} & \cdots & {F}_{N,d} \end{matrix}\right\rbrack }_{N \times d} \tag{1}
+$$
+
+$$
+{F}_{i,j} = l{b}_{j} + \left( {u{b}_{j} - l{b}_{j}}\right) \times \text{ rand } \tag{2}
+$$
+
In Eq. (2), ${F}_{i,j}$ denotes the position of the $i$-th agent in the $j$-th dimension, $u{b}_{j}$ and $l{b}_{j}$ represent the upper and lower bounds of the $j$-th dimension, respectively, and rand is a random number in the interval $(0,1)$.
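Eq. (2) amounts to uniform sampling within the box bounds; a minimal stdlib-Python sketch (the bounds and sizes are assumed values for illustration):

```python
import random

# Hypothetical sizes and bounds; Eq. (2) fills an N x d matrix uniformly
N, d = 5, 3
lb, ub = [-100.0] * d, [100.0] * d
F = [[lb[j] + (ub[j] - lb[j]) * random.random() for j in range(d)]
     for _ in range(N)]
```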
+
+Using the current position data of each fisherman, we apply the fitness evaluation function fobj to determine their fitness scores, yielding the following fitness matrix:
+
+$$
+f = \operatorname{fobj}\left( F\right) = \left\lbrack \begin{matrix} {f}_{1} \\ {f}_{2} \\ \vdots \\ {f}_{N} \end{matrix}\right\rbrack \tag{3}
+$$
+
+In the above formula, ${f}_{1}$ represents the fitness value of the first fisherman, ${f}_{2}$ that of the second, and so on. A threshold of 0.5 evenly splits the iterations between exploration and exploitation: in the initial part of the run (when $\mathrm{{EFs}}/\mathrm{{MaxEFs}} < {0.5}$ ), individuals focus on global exploration, while during the latter part (when $\mathrm{{EFs}}/\mathrm{{MaxEFs}} \geq {0.5}$ ), they shift towards exploitation.
+
+§ 2) INDIVIDUAL AND GROUP FISHING (EXPLORATION PHASE)
+
+During exploration, fishermen initially search mainly independently, using group encirclement only as an aid. As exploration proceeds, the environmental advantage gradually shifts from the fish to the fishermen; moreover, continuous capture reduces the fish population and the capture rate. Fishermen therefore shift from mainly independent exploration to mainly collective encirclement, with individual effort as assistance. This mode transition is modeled by the capture rate parameter $\delta$ :
+
+$$
+\delta = {\left( 1 - \frac{3 \times {EFs}}{2 \times {MaxEFs}}\right) }^{\frac{3 \times {EFs}}{2 \times {MaxEFs}}} \tag{4}
+$$
+
+where ${EFs}$ and ${MaxEFs}$ denote the current and maximum number of function evaluations, respectively.
+
+§ A) INDIVIDUAL FISHING (WHEN ${EFs}/{MaxEFs} < {0.5}$ )
+
+Fishermen disturb the water to float the fish, determine the position of the fish and adjust the direction of exploration. The update formula is as follows:
+
+$$
+{Exp} = \frac{{f}_{i} - {f}_{\text{ pos }}}{{f}_{\max } - {f}_{\min }} \tag{5}
+$$
+
+$$
+R = \text{ Dis } \times \sqrt{\left| \text{ Exp }\right| } \times \left( {1 - \frac{\text{ EFs }}{\text{ Max EFs }}}\right) \tag{6}
+$$
+
+$$
+{F}_{i,j}^{T + 1} = {F}_{i,j}^{T} + \left( {{F}_{{pos},j}^{T} - {F}_{i,j}^{T}}\right) \times \text{ Exp } + \operatorname{rand} \times s \times R \tag{7}
+$$
+
+In the formulas above, ${Exp}$ is the empirical analysis value obtained by the $i$ -th fisherman using any other fisherman ${pos}$ (where ${pos} = 1,2,\ldots$ or $N$ , ${pos} \neq i$ ) as the reference object, with values ranging from $-1$ to $1$ . ${f}_{\max }$ and ${f}_{\min }$ represent the highest and lowest fitness values, respectively, following the $T$ -th complete position update, and $T$ is the iteration counter of the fishermen's positions. ${F}_{i,j}^{T}$ and ${F}_{i,j}^{T + 1}$ are the positions of the ${i}^{\text{ th }}$ fisherman in the $j$ -th dimension after the ${T}^{\text{ th }}$ and ${\left( T + 1\right) }^{\text{ th }}$ iterations. ${Dis}$ denotes the Euclidean distance between the $i$ -th individual and the reference point, while $s$ is a random unit vector in $d$ dimensions.
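
Taken together, Eqs. (5)-(7) define one individual-fishing move. The stdlib-only sketch below is our own reading of those formulas; the small epsilon guarding the denominator of Eq. (5) and the demo population are illustrative additions:

```python
import math
import random

rng = random.Random(1)

def individual_fishing_step(F, fit, i, efs, max_efs):
    """One update of fisherman i; pos is a random reference agent (pos != i)."""
    n, dim = len(F), len(F[0])
    pos = rng.choice([k for k in range(n) if k != i])
    f_min, f_max = min(fit), max(fit)
    exp = (fit[i] - fit[pos]) / (f_max - f_min + 1e-12)        # Eq. (5)
    dis = math.sqrt(sum((F[i][j] - F[pos][j]) ** 2 for j in range(dim)))
    r = dis * math.sqrt(abs(exp)) * (1 - efs / max_efs)        # Eq. (6)
    s = [rng.gauss(0, 1) for _ in range(dim)]
    norm = math.sqrt(sum(v * v for v in s)) or 1.0
    s = [v / norm for v in s]                                  # random unit vector
    return [F[i][j] + (F[pos][j] - F[i][j]) * exp
            + rng.random() * s[j] * r for j in range(dim)]     # Eq. (7)

F_demo = [[0.0] * 5, [1.0] * 5, [2.0] * 5]
new_pos = individual_fishing_step(F_demo, [0.0, 5.0, 20.0], 0, efs=100, max_efs=1000)
```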
+
+§ B) GROUP FISHING (WHEN ${EFs}/{MaxEFs} \geq {0.5}$ )
+
+Fishermen utilize nets to enhance their fishing efficiency and collaborate with each other. They organize into random groups of 3-4 members to collectively encircle potential targets. By leveraging their individual mobility, they can explore the area more comprehensively and accurately. The corresponding formulas are outlined below:
+
+$$
+\text{ Centre } = \operatorname{mean}\left( {F}_{c}^{T}\right) \tag{8}
+$$
+
+$$
+{F}_{c,i,j}^{T + l} = {F}_{c,i,j}^{T} + {r}_{2} \times \left( {{\text{ Centre }}_{c} - {F}_{c,i,j}^{T}}\right) + {\left( 1 - \frac{2 \times {EFs}}{MaxEFs}\right) }^{2} \times {r}_{3} \tag{9}
+$$
+
+where $c$ represents a cluster of 3 to 4 individuals whose membership remains unaltered. ${\text{ Centre }}_{c}$ is the target point of group $c$ 's encirclement. ${F}_{c,i,j}^{T + 1}$ and ${F}_{c,i,j}^{T}$ are the positions of the ${i}^{\text{ th }}$ fisherman in group $c$ in the $j$ -th dimension after the ${\left( T + 1\right) }^{\text{ th }}$ and ${T}^{\text{ th }}$ updates. ${r}_{2}$ represents the speed at which a fisherman moves toward the centre, varying individually within the range $\left( {0,1}\right)$ . ${r}_{3}$ is the offset of the move, ranging over $(-1,1)$ and decreasing progressively as ${EFs}$ increases.
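
Eqs. (8)-(9) then move every member of a group toward its centre. A minimal sketch, where the group size and the demo coordinates are arbitrary illustrations:

```python
import random

rng = random.Random(2)

def group_fishing_step(group, efs, max_efs):
    """Move each member of one group toward the group centre, Eqs. (8)-(9)."""
    dim = len(group[0])
    centre = [sum(m[j] for m in group) / len(group) for j in range(dim)]  # Eq. (8)
    shrink = (1 - 2 * efs / max_efs) ** 2      # decaying offset weight in Eq. (9)
    updated = []
    for m in group:
        r2 = rng.random()                      # per-fisherman speed, in (0, 1)
        updated.append([m[j] + r2 * (centre[j] - m[j])
                        + shrink * rng.uniform(-1, 1)          # r3 in (-1, 1)
                        for j in range(dim)])
    return updated

group_demo = [[0.0, 0.0], [2.0, 2.0], [4.0, 4.0]]
moved = group_fishing_step(group_demo, efs=600, max_efs=1000)
```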
+
+§ 3) COLLECTIVE CAPTURE (EXPLOITATION PHASE)
+
+All fishermen now search under a uniform strategy, purposefully driving the remaining fish toward the same location. The fishermen's positions during this trapping process are updated as follows:
+
+$$
+\sigma = \sqrt{\left( \frac{2\left( {1 - \frac{EFs}{MaxEFs}}\right) }{\left( {\left( 1 - \frac{EFs}{MaxEFs}\right) }^{2} + 1\right) }\right) } \tag{10}
+$$
+
+$$
+{F}_{i}^{T + 1} = \text{ Gbest } + {GD}\left( {0,\frac{{r}_{4} \times \sigma \times \left| {\text{ mean }\left( F\right) - \text{ Gbest }}\right| }{3}}\right) \tag{11}
+$$
+
+Here, ${GD}$ is a Gaussian distribution function with mean $\mu = 0$ , whose overall variance $\sigma$ decreases from 1 to 0 as the number of evaluations increases. ${F}_{i}^{T + 1}$ is the position of the ${i}^{\text{ th }}$ fisherman after the ${\left( T + 1\right) }^{\text{ th }}$ update. $\operatorname{mean}(F)$ signifies the vector of per-dimension means at the centre of the fishermen's positions, while ${Gbest}$ indicates the global optimum.
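
Eqs. (10)-(11) contract the whole population around Gbest with a shrinking Gaussian. A sketch under our reading that the second argument of GD is the standard deviation; all names and the demo swarm are illustrative:

```python
import math
import random

rng = random.Random(3)

def collective_capture(F, gbest, efs, max_efs):
    """Resample every fisherman around Gbest, Eqs. (10)-(11)."""
    dim = len(F[0])
    t = 1 - efs / max_efs
    sigma = math.sqrt(2 * t / (t * t + 1))     # Eq. (10): decays from 1 to 0
    mean_f = [sum(a[j] for a in F) / len(F) for j in range(dim)]
    out = []
    for _ in F:
        r4 = rng.random()
        out.append([gbest[j]
                    + rng.gauss(0, r4 * sigma * abs(mean_f[j] - gbest[j]) / 3)
                    for j in range(dim)])      # Eq. (11)
    return out

swarm = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
captured = collective_capture(swarm, gbest=[0.0, 0.0], efs=900, max_efs=1000)
```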
+
+§ III. PROPOSED ALGORITHM
+
+§ A. ADAPTIVE GAUSSIAN PERTURBATION (AGP)
+
+Adaptive Gaussian perturbation enables the optimization algorithm to flexibly balance exploration and exploitation across iteration stages, as follows:
+
+$$
+{\sigma }_{p} = \left( {1 - \frac{EFs}{MaxEFs}}\right) \cdot \operatorname{std}\left( {F}_{i}^{T}\right) \cdot \exp \left( {-\frac{EFs}{{MaxEFs}/{10}}}\right) \tag{12}
+$$
+
+$$
+{F}_{i}^{T + 1} = {F}_{i}^{T} + \mathcal{N}\left( {0,{\sigma }_{p}^{2}}\right) \tag{13}
+$$
+
+where ${\sigma }_{p}$ is the perturbation strength of the current iteration, $\operatorname{std}({F}_{i}^{T})$ is the standard deviation of the current solution, and $\mathcal{N}\left( {0,{\sigma }_{p}^{2}}\right)$ is a normally distributed random variable with mean 0 and variance ${\sigma }_{p}^{2}$ .
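
As a sketch, Eqs. (12)-(13) amount to adding zero-mean Gaussian noise whose scale tracks both the remaining evaluation budget and the spread of the current solution; `statistics.pstdev` here is our stand-in for the std term:

```python
import math
import random
import statistics

rng = random.Random(4)

def agp_step(agent, efs, max_efs):
    """Adaptive Gaussian perturbation of one agent, Eqs. (12)-(13)."""
    spread = statistics.pstdev(agent)          # std of the current solution
    sigma_p = ((1 - efs / max_efs) * spread
               * math.exp(-efs / (max_efs / 10)))              # Eq. (12)
    return [x + rng.gauss(0, sigma_p) for x in agent]          # Eq. (13)

perturbed = agp_step([1.0, 2.0, 3.0, 4.0], efs=100, max_efs=1000)
```

At EFs = MaxEFs the strength sigma_p reaches zero, so the perturbation vanishes and the solution is left untouched.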
+
+§ B. PERSONALIZED FISHING STRATEGY (PFS)
+
+PFS is a widely adopted and effective strategy that enhances an algorithm's optimization capability within the search space. Overall, PFS strengthens the exploitation capability and accelerates the convergence of metaheuristic algorithms (MAs). Its principle is to generate random action steps $\alpha$ and $\beta$ from the original step, update the position according to the chosen movement, and vary the choice with the capture parameter $\delta$ , which couples the whole fishing process more closely.
+
+$$
+{F}_{i}^{T + 1} = {F}_{i}^{T} + \text{ step } \cdot {kd} \tag{14}
+$$
+
+$$
+{kd} = {0.5} + {0.5} * \text{ rand } \tag{15}
+$$
+
+where step represents the action choice and kd indicates the action-skill proficiency of a fisherman, whose value lies in $\left( {0.5},1\right)$ according to Eq. (15). The personalized fishing strategy includes two factors, "freehand capture" and "using tools", which are used to improve the global search capability of the algorithm. The update formula is shown as follows:
+
+$$
+\text{ step } = \left\{ \begin{array}{ll} \alpha = 2 \times \left( {1 - \frac{EFs}{MaxEFs}}\right) \times \left( {{0.4} \times \delta }\right) & \text{ if rand } > {0.5} \\ \beta = 1 \times \left( {1 - \frac{EFs}{MaxEFs}}\right) & \text{ otherwise } \end{array}\right. \tag{16}
+$$
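
Read this way, Eqs. (14)-(16) pick one of two step sizes at random and scale it by the skill factor kd. The sketch below applies the step uniformly to every dimension, which is our own interpretation; the demo values are illustrative:

```python
import random

rng = random.Random(5)

def pfs_step(agent, delta, efs, max_efs):
    """One personalized-fishing move, Eqs. (14)-(16)."""
    decay = 1 - efs / max_efs
    if rng.random() > 0.5:
        step = 2 * decay * (0.4 * delta)       # alpha branch ("using tools")
    else:
        step = 1 * decay                       # beta branch ("freehand capture")
    kd = 0.5 + 0.5 * rng.random()              # Eq. (15): proficiency in (0.5, 1)
    return [x + step * kd for x in agent]      # Eq. (14)

moved_pfs = pfs_step([0.5, 0.5], delta=0.8, efs=200, max_efs=1000)
```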
+
+§ C. DETAILS OF ICFOA
+
+CFOA is widely used and easy to implement for optimization tasks. However, its search capabilities (exploration and exploitation) are limited when tackling complex problems, making convergence within a finite number of iterations difficult or even unattainable. To address these challenges, this paper integrates PFS to enhance ICFOA for global optimization. The PFS improves exploitation efficiency and accelerates the convergence rate of the conventional CFOA.
+
+The pseudo-code of ICFOA is shown in Algorithm 1, and Fig. 1 illustrates the flowchart of ICFOA.
+
+Algorithm 1 The pseudo-code of ICFOA
+
+    Initialize parameters
+    Initialize the population Fisher
+    While (EFs <= MaxEFs)
+        Compute the fitness values and obtain the globally optimal solution (Gbest)
+        If EFs/MaxEFs < 0.5
+            Compute the value of delta by Eq. (4)
+            Randomly shuffle the order of the fishermen
+            If p < delta
+                Update the fishermen's positions by Eq. (7)
+            Else
+                Randomly group the fishermen
+                Update the fishermen's positions by Eq. (9)
+            End
+        Else
+            Update the fishermen's positions by Eq. (13)
+        End
+    End
+    Output the globally optimal solution
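
For orientation, the control flow of Algorithm 1 can be compressed into a self-contained toy driver. This is not the authors' implementation: the group move is collapsed onto Gbest, the second-half update is a simplified clipped Gaussian perturbation, and the bounds, constants, and seed are illustrative choices:

```python
import random

def icfoa_minimize(fobj, dim, lb, ub, n=30, max_efs=3000, seed=0):
    """Toy driver mirroring Algorithm 1's exploration/perturbation split."""
    rng = random.Random(seed)
    F = [[lb + (ub - lb) * rng.random() for _ in range(dim)] for _ in range(n)]
    fit = [fobj(x) for x in F]
    gbest = min(F, key=fobj)
    efs = n
    while efs < max_efs:
        ratio = efs / max_efs
        if ratio < 0.5:                                    # exploration half
            delta = (1 - 1.5 * ratio) ** (1.5 * ratio)     # Eq. (4)
            for i in range(n):
                if rng.random() < delta:                   # individual fishing (Eq. 7, noise term dropped)
                    p = rng.randrange(n)
                    exp = (fit[i] - fit[p]) / (max(fit) - min(fit) + 1e-12)
                    F[i] = [F[i][j] + (F[p][j] - F[i][j]) * exp for j in range(dim)]
                else:                                      # group fishing, centre collapsed to Gbest
                    r2 = rng.random()
                    F[i] = [F[i][j] + r2 * (gbest[j] - F[i][j]) for j in range(dim)]
        else:                                              # simplified perturbation half
            sigma = (1 - ratio) * 0.1 * (ub - lb)
            for i in range(n):
                F[i] = [min(ub, max(lb, x + rng.gauss(0, sigma))) for x in F[i]]
        fit = [fobj(x) for x in F]
        efs += n
        cand = min(range(n), key=lambda k: fit[k])
        if fit[cand] < fobj(gbest):                        # keep the incumbent monotone
            gbest = list(F[cand])
    return gbest

best = icfoa_minimize(lambda x: sum(v * v for v in x), dim=5, lb=-10.0, ub=10.0)
```

Even this stripped-down loop steadily improves the incumbent on a sphere objective, which is the behavior the exploration/exploitation split is meant to produce.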
+
+
+Fig. 1. Flowchart of the proposed algorithm ICFOA
+
+§ D. COMPUTATION COMPLEXITY OF ICFOA
+
+Initialization and position updates are the core components of ICFOA. The computational complexity of initialization is $\mathrm{O}\left( {N \times \text{ Dim }}\right)$ , where $N$ represents the population size and Dim denotes the dimensionality. The complexity of fishing with bare hands and with fishing nets varies and can reach up to $\mathrm{O}\left( {T \times N \times \text{ Dim }}\right)$ , where $T$ is the maximum number of evaluations. Both the PFS and AGP methods have a complexity of $\mathrm{O}\left( {T \times N \times \text{ Dim }}\right)$ . Therefore, the overall computational complexity of ICFOA is $\mathrm{O}\left( {\left( {4 \times T + 1}\right) \times N \times \text{ Dim }}\right)$ .
+
+§ IV. RESULTS OF GLOBAL OPTIMIZATION EXPERIMENTS
+
+This section uses the 10 benchmark functions of CEC2020 [7] to evaluate the optimization performance of the proposed ICFOA. First, the definitions of the 10 benchmark test functions are introduced. Second, the experimental setup and the comparison groups, including other well-known MAs, are described in detail.
+
+§ A. DEFINITION OF 10 BENCHMARK FUNCTIONS
+
+Benchmark functions are essential for assessing the performance of various algorithms. This paper selects 10 representative benchmark functions, categorized into (1) unimodal functions (F1-F4) and (2) multimodal functions (F5-F10). Table 1 provides a detailed description of these functions, where D denotes the dimensionality. Unimodal functions have a single global optimum, making them suitable for evaluating the exploitation capability of metaheuristic algorithms (MAs). Conversely, multimodal functions possess multiple local optima and a single global optimum, providing a basis for assessing the exploration capability of MAs and their ability to escape local optima.
+
+TABLE I. DEFINITION OF 10 BENCHMARK FUNCTIONS
+
+| No. | Property | Range | ${\mathbf{f}}_{\min}$ |
+| --- | --- | --- | --- |
+| F1 | Unimodal Function | ${\left\lbrack -{100},{100}\right\rbrack }^{\mathrm{D}}$ | 100 |
+| F2 | Basic Functions | ${\left\lbrack -{100},{100}\right\rbrack }^{\mathrm{D}}$ | 1100 |
+| F3 | Basic Functions | ${\left\lbrack -{100},{100}\right\rbrack }^{\mathrm{D}}$ | 700 |
+| F4 | Basic Functions | ${\left\lbrack -{100},{100}\right\rbrack }^{\mathrm{D}}$ | 1900 |
+| F5 | Hybrid Functions | ${\left\lbrack -{100},{100}\right\rbrack }^{\mathrm{D}}$ | 1700 |
+| F6 | Hybrid Functions | ${\left\lbrack -{100},{100}\right\rbrack }^{\mathrm{D}}$ | 1600 |
+| F7 | Hybrid Functions | ${\left\lbrack -{100},{100}\right\rbrack }^{\mathrm{D}}$ | 2100 |
+| F8 | Composition Functions | ${\left\lbrack -{100},{100}\right\rbrack }^{\mathrm{D}}$ | 2000 |
+| F9 | Composition Functions | ${\left\lbrack -{100},{100}\right\rbrack }^{\mathrm{D}}$ | 2400 |
+| F10 | Composition Functions | ${\left\lbrack -{100},{100}\right\rbrack }^{\mathrm{D}}$ | 2500 |
+
+§ B. EXPERIMENTAL CONFIGURATION
+
+We use the aforementioned functions to evaluate the performance of ICFOA. To ensure representativeness, we compare the enhanced algorithm with the basic CFOA and five widely used metaheuristic algorithms: ROA [8], AOA [9], SFO [10], SHO [11], and SCA [12]. To ensure an unbiased comparison, we set the maximum number of evaluations to $T = {100000}$ , the population size to $N = {30}$ , and the dimensionality to $\operatorname{Dim} = {10}$ . Furthermore, each test is conducted independently 30 times, with the best results emphasized in bold.
+
+TABLE II. ALGORITHM PARAMETER SETTINGS
+
+| Algorithm | Parameters |
+| --- | --- |
+| ICFOA | $\alpha = {0.4}$ |
+| CFOA | $\alpha = {0.4},\beta = {0.5}$ |
+| ROA | $c = 0.2$ |
+| AOA | $\alpha = 5$ , $\mu = {0.5}$ |
+| SCSO | $S = 2$ , $R = [-1, 1]$ |
+| SHO | $u = 0.03$ , $v = 0.03$ , $l = 0.03$ |
+| SCA | $B = 3$ |
+§ C. STATISTICAL ANALYSIS OF 10 BENCHMARK FUNCTIONS
+
+This part compares ICFOA with the six comparison algorithms across the 10 benchmark functions, focusing on the optimal value (Best), mean value (Mean), and standard deviation (Std) [13]. Table 3 provides the details of the experimental outcomes. From the table, it is evident that ICFOA performs well on most functions, often achieving the minimum Best, Mean, and Std values. In particular, for F1-F6, ICFOA consistently attains the theoretical optimal solution, whereas CFOA only approximates it, highlighting ICFOA's superior exploitation capability. For F2, ICFOA demonstrates better global optimization performance than the other prominent algorithms. Although ICFOA finds only a suboptimal solution for F3, its convergence precision surpasses that of the others. For F4, F5, and F6, ICFOA reaches the theoretical optimum. However, for F4 and F9, ICFOA's performance is slightly inferior to that of the SFO algorithm.
+
+Considering that MAs are stochastic algorithms, this paper employs the Wilcoxon rank-sum test to strengthen the statistical analysis and assess the significance of the results. Notably, if the $p$ -value is below 0.06, there is a significant difference between the two data groups; conversely, a $p$ -value of at least 0.06 indicates minimal difference between the data sets. Additionally, "NaN" is used in this paper to represent cases where no significant difference exists between the groups. The detailed results of the Wilcoxon rank-sum test are displayed in Table 4. The results show that functions F4, F6, and F10 contain "NaN" because the optimization results of ICFOA, CFOA, and SHO all achieve the theoretical optimal solution, leading to minimal differences among the three data groups. For the other functions, the performance of ICFOA differs significantly from that of the other algorithms. However, the Wilcoxon rank-sum test only measures the statistical difference between algorithms, not their overall performance. Consequently, combining the insights from Tables 3 and 4, it is evident that the improvements to CFOA presented in this paper are highly effective.
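
The rank-sum decision described above can be reproduced with a short stdlib-only routine (a stand-in for library implementations such as `scipy.stats.ranksums`, using the normal approximation; the sample data are illustrative):

```python
import math

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation."""
    combined = sorted((v, 0 if i < len(x) else 1)
                      for i, v in enumerate(list(x) + list(y)))
    vals = [v for v, _ in combined]
    ranks = {}
    i = 0
    while i < len(vals):                       # tied values share the mean rank
        j = i
        while j < len(vals) and vals[j] == vals[i]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + j + 1) / 2         # ranks are 1-based
        i = j
    w = sum(ranks[k] for k, (_, grp) in enumerate(combined) if grp == 0)
    n1, n2 = len(x), len(y)
    mu = n1 * (n1 + n2 + 1) / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sd
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# clearly separated samples give a small p; identical samples give p = 1
p_diff = rank_sum_p([1, 2, 3, 4, 5], [10, 11, 12, 13, 14])
p_same = rank_sum_p([1, 2, 3], [1, 2, 3])
```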
+
+TABLE III. STATISTICS ABOUT THE 10 TEST FUNCTIONS IN THE CEC2020
+
+| Function | Metric | ICFOA | CFOA | ROA | AOA | SFO | SHO | SCA |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| F1 | Best | 123.9163085 | 1303.684774 | 2435.264611 | 2484.316844 | 2898.991125 | 27552.63251 | 17016.33261 |
+| F1 | Mean | 4591.728112 | 202423937.5 | 26245.21152 | 26812.31168 | 28990.51251 | 28799.22518 | 18504.45167 |
+| F1 | Std | 4137.802543 | 68441680.22 | 10449416987 | 9083474822 | 1924069.487 | 4948400618 | 13440911059 |
+| F2 | Best | 12244.11095 | 16605.71202 | 29808.77688 | 28990.18679 | 35461.62181 | 34698.94999 | 31568.08871 |
+| F2 | Mean | 18161.78534 | 23564.86621 | 31381.50853 | 30430.72471 | 35488.16642 | 36163.21061 | 32488.08215 |
+| F2 | Std | 3545.047924 | 4192.739178 | 114655.1359 | 9077.919527 | 24045.26547 | 103551.2026 | 5910.424614 |
+| F3 | Best | 894.5056084 | 1683.533974 | 3908.81377 | 3806.349814 | 4217.387789 | 4066.605567 | 3489.622851 |
+| F3 | Mean | 927.3464718 | 2120.641269 | 4011.883834 | 3909.740683 | 4221.83685 | 4204.438691 | 3765.087815 |
+| F3 | Std | 40.84472928 | 270.9768306 | 56.03266113 | 53.70544871 | 30.76279062 | 75.21084023 | 180.6559676 |
+| F4 | Best | 1911.571228 | 1972.715291 | 1926.112879 | 1913.360194 | 1942.664109 | 1926.352858 | 5871.613809 |
+| F4 | Mean | 1903.184835 | 2003.719718 | 1938.664563 | 1911.360491 | 1908.059367 | 1933.542475 | 33306.19437 |
+| F4 | Std | 1.974573432 | 20.75907556 | 48.00546352 | 7.62E+00 | 5.236162188 | 10.3265855 | 41611.3348 |
+| F5 | Best | 771627.8334 | 2622829.749 | 648036492.7 | 897453834.9 | 1910249217 | 1782936241 | 274232285.4 |
+| F5 | Mean | 1572147.454 | 4807902.622 | 1050612701 | 1321917258 | 3048989087 | 2526379882 | 446370493.4 |
+| F5 | Std | 631467.6545 | 1843676.178 | 335444387.9 | 299582965.4 | 550649621.7 | 386586151.4 | 120087797.6 |
+| F6 | Best | 3770.079006 | 6119.393228 | 20830.74823 | 19888.73262 | 41146.71723 | 33215.10053 | 12668.3214 |
+| F6 | Mean | 4385.997062 | 6830.04364 | 25109.30245 | 25960.73394 | 46688.73657 | 40988.94545 | 15436.56211 |
+| F6 | Std | 283.7781916 | 495.5286566 | 5039.633742 | 4268.600462 | 1947.267797 | 4555.865169 | 1484.988875 |
+| F7 | Best | 548601.3933 | 1444325.148 | 202662857.8 | 192645390.1 | 469380299.5 | 583159047.5 | 78210737.68 |
+| F7 | Mean | 1424855.595 | 2559045.072 | 334117181.3 | 402478936.1 | 470079347.8 | 842041986.2 | 142352823.7 |
+| F7 | Std | 432943.2661 | 961142.875 | 80378422.23 | 174899077.8 | 5862372.585 | 141415505.3 | 45868952.47 |
+| F8 | Best | 15713.23631 | 19186.2209 | 31516.95746 | 31589.84888 | 37352.98584 | 36694.26248 | 32989.46542 |
+| F8 | Mean | 21014.047 | 25657.00443 | 33549.03571 | 32922.86116 | 37361.35642 | 38213.17431 | 34579.26347 |
+| F8 | Std | 3786.519778 | 4731.682899 | 6231.863806 | 7026.180794 | 5623.056834 | 7575.350727 | 6658.448304 |
+| F9 | Best | 3382.896377 | 4089.081127 | 7958.084137 | 9625.401505 | 1353.90174 | 10910.13186 | 6418.969662 |
+| F9 | Mean | 3423.654041 | 4254.403066 | 9835.102409 | 11574.95115 | 13419.51066 | 13443.41938 | 6865.220732 |
+| F9 | Std | 22.39302112 | 114.1072971 | 1617.806356 | 1166.977396 | 254.2600235 | 1589.567702 | 212.5751176 |
+| F10 | Best | 3390.009962 | 3725.611385 | 21898.12822 | 25910.18877 | 35026.3259 | 30898.44744 | 15251.35182 |
+| F10 | Mean | 3539.668413 | 3814.755832 | 26384.31184 | 29091.06983 | 35026.85245 | 32999.06763 | 17488.29627 |
+| F10 | Std | 36.32133408 | 81.68764222 | 2479.008781 | 1985.004222 | 52.42536743 | 890.9886491 | 1429.3475 |
+
+§ D. CONVERGENCE ANALYSIS OF ICFOA AND COMPARATIVE ALGORITHMS
+
+The convergence curve is a crucial metric for assessing an algorithm's performance. Figure 2 displays the convergence curves of these algorithms on several benchmark functions. The results indicate that ICFOA demonstrates a notably faster convergence speed on the unimodal functions F1 and F2. For F5, although ICFOA does not achieve the best performance, its results are still very close to the optimal solution, underscoring ICFOA's strong exploitation capability. In the cases of functions F6 and F7, ICFOA performs similarly to the ROA algorithm but with a quicker convergence rate. For F9, the ICFOA, CFOA, and SCA algorithms all converge rapidly. Overall, ICFOA exhibits excellent convergence capabilities across different types of functions.
+
+TABLE IV. STATISTICAL RESULTS OF ALGORITHMS ON 10 BENCHMARK FUNCTIONS USING WILCOXON RANK-SUM TEST
+
+| Function | ICFOA vs. CFOA | ICFOA vs. ROA | ICFOA vs. AOA | ICFOA vs. SCA | ICFOA vs. SFO | ICFOA vs. SHO | ICFOA vs. SCA |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| F1 | ${3.34} \times {10}^{-{01}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ |
+| F2 | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ |
+| F3 | ${4.57} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ |
+| F4 | NaN | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | NaN | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ |
+| F5 | ${4.83} \times {10}^{-{01}}$ | ${1.07} \times {10}^{-{07}}$ | ${3.02} \times {10}^{-{11}}$ | ${3.02} \times {10}^{-{11}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ |
+| F6 | ${4.62} \times {10}^{-{10}}$ | ${3.02} \times {10}^{-{11}}$ | ${3.02} \times {10}^{-{11}}$ | ${3.02} \times {10}^{-{11}}$ | ${1.21} \times {10}^{-{12}}$ | NaN | ${1.21} \times {10}^{-{12}}$ |
+| F7 | ${6.10} \times {10}^{-{03}}$ | ${3.11} \times {10}^{-{01}}$ | ${7.98} \times {10}^{-{02}}$ | ${3.02} \times {10}^{-{11}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ |
+| F8 | ${1.07} \times {10}^{-{07}}$ | ${3.02} \times {10}^{-{11}}$ | ${1.22} \times {10}^{-{02}}$ | ${3.02} \times {10}^{-{11}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ |
+| F9 | ${1.21} \times {10}^{-{12}}$ | ${4.57} \times {10}^{-{12}}$ | ${1.22} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ |
+| F10 | NaN | ${1.21} \times {10}^{-{12}}$ | NaN | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ | ${1.21} \times {10}^{-{12}}$ |
+
+
+Fig. 2. Convergence of ICFOA and Comparison Algorithm on Some Functions
+
+§ V. REDUCER STRUCTURE MODEL DESIGN PROBLEM
+
+The model for the reducer design problem is illustrated in Figure 3. The primary objective is to minimize the mass of the reducer while satisfying the given constraints. This problem involves seven decision variables and eleven constraints. For detailed descriptions of the variables ( ${x}_{1}$ to ${x}_{11}$ ), refer to reference [14]. The mathematical formulation of the problem is identical to that in reference [14].
+
+Table 5 presents the comparison results of ICFOA, CFOA, ROA, AOA, SFO, SHO, and SCA in solving the reducer design problem. It's evident that ICFOA produces strong results while effectively meeting the constraints.
+
+TABLE V. COMPARISON RESULTS OF DIFFERENT OPTIMIZATION ALGORITHMS
+
+| Variable | ICFOA | CFOA | ROA | AOA | SFO | SHO |
+| --- | --- | --- | --- | --- | --- | --- |
+| ${x}_{1}$ | 0.501 | 0.531 | 0.54 | 0.57 | 0.568 | 0.525 |
+| ${x}_{2}$ | 1.231 | 1.262 | 1.257 | 1.27 | 1.241 | 1.239 |
+| ${x}_{3}$ | 0.515 | 0.540 | 0.563 | 0.54 | 0.517 | 0.528 |
+| ${x}_{4}$ | 1.096 | 1.149 | 1.167 | 1.14 | 1.246 | 1.200 |
+| ${x}_{5}$ | 0.517 | 0.558 | 0.631 | 0.64 | 0.534 | 0.781 |
+| ${x}_{6}$ | 0.486 | 0.511 | 0.538 | 0.54 | 0.941 | 1.160 |
+| ${x}_{7}$ | 0.503 | 0.510 | 0.528 | 0.50 | 0.525 | 0.564 |
+| ${x}_{8}$ | 0.346 | 0.351 | 0.472 | 0.27 | 0.332 | 0.340 |
+| ${x}_{9}$ | 0.342 | 0.344 | 0.356 | 0.36 | 0.336 | 0.319 |
+| cost | 23.01 | 23.20 | 23.38 | 23.55 | 23.45 | 23.92 |
+
+
+Fig. 3. Reducer structure model
+
+§ VI. CONCLUSION
+
+Building on CFOA, this paper introduces a strategy called the personalized fishing strategy (PFS) to form an improved CFOA (ICFOA). Although CFOA has been applied to various design problems, ICFOA addresses some of its limitations, such as a tendency toward stagnation and entrapment in local optima. By incorporating PFS, ICFOA enhances global search capability and thereby improves its ability to escape local optima. To assess its effectiveness, this paper compared ICFOA with the basic CFOA and five other well-established algorithms on 10 benchmark functions. The results demonstrate that ICFOA performs exceptionally well across most benchmark functions and on a practical engineering problem.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/gNLTnJB0Cm/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/gNLTnJB0Cm/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..c81ac69f6654e0bf930b71c69ccf2910941a6e58
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/gNLTnJB0Cm/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,383 @@
+# Segmentation Reconstruction and Prediction of AIS Trajectory Based on Broad Learning System
+
+Baohua He
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+hebaohua@dlou.edu.cn
+
+Yi Zuo
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+zuo@dlmu.edu.cn
+
+Weihong Wang
+
+Collaborative Innovation Center
+
+for Transport Studies
+
+Dalian Maritime University
+
+Dalian, China
+
+Licheng Zhao
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+zhaolichengzx@dlmu.edu.cn
+
+Tieshan Li
+
+School of Automatic Engineering
+
+University of Electronic Science
+
+Chengdu, China
+
+litieshan073@uestc.edu.cn
+
+C. L. Philip Chen
+
+Computer Science College
+
+South China University of
+
+Technology
+
+Guangzhou, China
+
+*Abstract* - With the widespread adoption of the automatic identification system (AIS), the volume of collected AIS data has become vast, making the analysis and processing of vessel trajectories highly time-consuming. Linear models (LMs) are simple and fast and have been widely applied to trajectory reconstruction and prediction. However, existing studies generally treat individual vessel trajectories independently, and the accuracy and stability of LMs remain unsuitable for the generalized processing of large numbers of trajectories. To address this limitation, we adopt the broad learning system (BLS) to establish a trajectory prediction model. This paper includes three parts: trajectory segmentation, feature extraction, and model training. Firstly, K-means clustering is used to segment the trajectories, dividing them into small pieces based on navigation characteristics. Secondly, considering the time-series nature of trajectory data, the segmented trajectories are processed using first-order and second-order differencing to obtain training data. Finally, by training on the time-series data of different trajectories, generalized trajectory prediction can be achieved.
+
+Keywords—AIS data, Trajectory prediction, Trajectory component, Segmentation, K-means, Broad learning system
+
+## I. INTRODUCTION
+
+The automatic identification system (AIS) is an important navigation service for information exchange between ships and between ships and shore stations, and it has been widely adopted. The extensive collection of vessel navigation data via AIS enables effective prediction of future vessel trajectories, which is crucial for ensuring navigation safety and improving traffic efficiency [1-2]. However, AIS collects a large volume of data, particularly dynamic information such as longitude, latitude, speed, and course, which must be continuously recorded and stored and which presents challenges for subsequent data processing and practical application. In particular, trajectory prediction based on AIS data requires a method that can handle the task quickly, efficiently, and accurately [3-4].
+
+Existing methods for trajectory prediction mostly comprise traditional regression analysis and classic machine learning. Among the traditional approaches, the primary ones are linear regression analysis (LRA) and the least squares method (LSM) [5-6]. Among machine learning-based models, the main techniques are support vector machines (SVM) and artificial neural networks (ANN) [7-8]. In [5], a framework for vessel pattern recognition based on AIS data is presented, which includes LRA to identify vessel movement patterns for route prediction. In [6], an improved LSM for vessel trajectory prediction is proposed to better handle nonlinear and dynamic vessel movements and to enhance the accuracy of trajectory predictions based on AIS data. In [7], vessel trajectory prediction in the Northern South China Sea is explored using SVM to recognize navigation patterns in vessel trajectories, which outperforms traditional linear models in complex maritime environments. In [8], Tang et al. focus on vessel trajectory prediction based on recurrent neural networks (RNN), which efficiently handle sequential data of vessel movements and show superior performance in trajectory prediction tasks compared with other models.
+
+This paper applies the broad learning system (BLS) [9] to the prediction of vessel trajectories, and the research is mainly divided into three parts. Firstly, the K-means method is used to segment the AIS trajectories. The trajectories are divided into several segments based on navigation characteristics, with the number of segments being an adjustable parameter. This step reduces the complexity of the trajectory data so as to enhance the efficiency and accuracy of model training and prediction. Secondly, the first-order and second-order differences of the trajectory points within each segment are calculated along the temporal sequence of the AIS data. This step captures the dynamic relationships between trajectory points and improves the model's sensitivity to temporal data and its predictive capability. Finally, BLS is used to establish the prediction model for the segmented trajectories. By training on the temporal sequence data of different trajectories, the number of segments is optimized to achieve generalized trajectory prediction. In the numerical experiments, this paper takes the AIS data of Dalian Port as an example to model and predict ship trajectories. The discussion of the experimental results shows that BLS attains higher accuracy and shorter computation time in trajectory prediction, which proves the effectiveness and superiority of BLS in AIS trajectory prediction and provides new ideas and methods for future maritime traffic management and trajectory prediction.
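
The first- and second-order differencing step described above reduces, per segment, to differencing consecutive fixes twice. A minimal sketch on a hypothetical Dalian-area track (the coordinates are made up for illustration):

```python
def difference_features(track):
    """First- and second-order differences of a trajectory segment.
    `track` is a list of (lon, lat) position fixes in temporal order."""
    d1 = [(b[0] - a[0], b[1] - a[1]) for a, b in zip(track, track[1:])]
    d2 = [(b[0] - a[0], b[1] - a[1]) for a, b in zip(d1, d1[1:])]
    return d1, d2

track = [(121.60, 38.90), (121.62, 38.91), (121.65, 38.93), (121.69, 38.96)]
d1, d2 = difference_features(track)  # per-step motion and its change
```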
+
+The structure of this paper is organized as follows. Section 1, the Introduction, presents the research background and purpose. Section 2, Related Studies, provides an overview of current research on trajectory prediction. Section 3, Methodology, explains the theoretical foundation of the algorithm proposed in this paper. Section 4, Experiments, presents the results and analyses. Section 5 concludes this study.
+
+## II. RELATED STUDIES
+
+## A. Trajectory Prediction based on Linear Method
+
+Trajectory prediction is a critical component in various applications, such as maritime navigation, air traffic control, and autonomous driving. Accurate trajectory prediction can enhance safety, optimize routes, and improve efficiency. Among the numerous methods developed for trajectory prediction, linear methods remain popular due to their simplicity, computational efficiency, and ease of implementation.
+
+Linear regression analysis (LRA) is a statistical method used to model the relationship between a dependent variable and one or more independent variables [10]. In trajectory prediction of [5], LRA can be employed to predict the future positions of a moving object based on its past positions, and the basic form of LRA is represented by the Equation (1).
+
+$$
+\mathrm{y} = {a}_{1}{x}_{1} + {a}_{2}{x}_{2} + \cdots + {a}_{n}{x}_{n} + \varepsilon \tag{1}
+$$
+
+where $y$ is the dependent variable (e.g., future position), $\left( {{x}_{1},{x}_{2},\ldots ,{x}_{n}}\right)$ are the independent variables (e.g., past positions), $\left( {{a}_{1},{a}_{2},\ldots ,{a}_{n}}\right)$ are the coefficients, and $\varepsilon$ is the error term. In [11], Peterson and Hovem analyzed vessel traffic patterns using AIS data and established a trajectory prediction model based on LRA. That research explored the reliability of AIS data and the impact of human errors on data quality, and also discussed the application of LRA in data analysis, particularly for preliminary modeling in simple trajectory prediction. These studies cover various applications of LRA in vessel trajectory prediction, from preliminary modeling to serving as benchmark models, and provide a comprehensive view of the limitations of linear regression in AIS trajectory prediction [5-6, 10-11].
+
+The least squares method (LSM) is also a statistical technique, which obtains a fitted line for a given data set by minimizing the squared differences between observed and predicted values.
+
+LSM is a widely used prediction model in trajectory prediction due to its simplicity, efficiency, and effectiveness in minimizing prediction errors. Given the dataset $\left( {{x}_{1},{y}_{1}}\right) ,\left( {{x}_{2},{y}_{2}}\right) ,\ldots ,\left( {{x}_{n},{y}_{n}}\right)$ , the basic form of LSM is represented by Equation (2).
+
+$$
+{y}_{i} = {b}_{0} + {b}_{1}{x}_{i} \tag{2}
+$$
+
+where the coefficients ${b}_{0}$ and ${b}_{1}$ are calculated by minimizing Equation (3) [6].
+
+$$
+\text{Minimize}\mathop{\sum }\limits_{{i = 1}}^{n}{\left( {y}_{i} - \left( {b}_{0} + {b}_{1}{x}_{i}\right) \right) }^{2} \tag{3}
+$$
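
For the one-variable case, the minimizer of Eq. (3) has the standard closed form $b_1 = \sum_i (x_i - \bar{x})(y_i - \bar{y}) / \sum_i (x_i - \bar{x})^2$ and $b_0 = \bar{y} - b_1\bar{x}$; a minimal pure-Python sketch (illustrative names, not the paper's code):

```python
# Closed-form minimizer of Eq. (3).
def least_squares_fit(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # b1 = covariance / variance; b0 follows from the means
    b1 = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
         / sum((x - mean_x) ** 2 for x in xs)
    b0 = mean_y - b1 * mean_x
    return b0, b1

b0, b1 = least_squares_fit([1, 2, 3, 4], [2.1, 3.9, 6.1, 7.9])
print(round(b0, 2), round(b1, 2))  # b0 ≈ 0.1, b1 ≈ 1.96
```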
+
+In many cases, LSM is combined with other methods, such as support vector regression (SVR), in a hybrid approach. In [6], LSM-SVR was used as an advanced methodology for predicting ship trajectories using AIS data. The primary purpose was to enhance the accuracy and reliability of ship trajectory predictions, which are crucial for ensuring maritime safety and improving navigational efficiency. LSM-SVR was employed for its robust capability in handling nonlinear features in AIS data and its efficiency in regression and predictive modeling. By integrating these two methods, the study achieved more precise trajectory predictions. In [13], Liu et al. also proposed LSM-SVR for ship trajectory prediction based on AIS data. The proposed model was augmented with a selection mechanism that dynamically selects relevant data inputs, and this hybrid approach was likewise tailored to improve prediction accuracy and computational efficiency in maritime applications. The purpose of that research was to develop a model that provides accurate, real-time predictions of ship trajectories, which is essential for enhancing maritime safety and efficiency.
+
+## B. Trajectory Prediction based on Neural Network
+
+Neural networks (NNs) have been extensively studied and applied in various trajectory prediction tasks due to their flexibility and strong performance in capturing complex patterns in data. Several types of neural network models have been utilized for ship trajectory prediction. In [8], a vessel trajectory prediction model based on Long Short-Term Memory (LSTM) neural networks is proposed, leveraging the sequence prediction capabilities of LSTM to accurately forecast future vessel positions from historical AIS data. This approach captures temporal dependencies and dynamic changes in vessel trajectories, improving prediction accuracy over traditional methods. The LSTM model is trained on large AIS datasets, ensuring robustness and reliability in real-world maritime navigation applications, enhancing safety and efficiency.
+
+Recently, attention-based models have also been applied to trajectory prediction. In [14], a novel approach was presented for predicting ship trajectories in nearby port waters using an attention mechanism model. Accurate prediction of ship movements is essential for ensuring maritime safety, optimizing port operations, and managing traffic efficiently. Traditional methods, which often rely on physical models or statistical techniques, may not fully capture the complex interactions and dynamic environment of port waters. In [15], the application of an encoder-decoder model integrated with an attention mechanism was explored for predicting ship trajectories based on AIS data. The encoder-decoder architecture, commonly used in natural language processing tasks, is well suited to handling sequential data, making it an effective choice for modeling the movement patterns of ships over time. The attention mechanism plays a crucial role in enhancing prediction accuracy by allowing the system to selectively focus on the most relevant segments of the input sequence. This capability is particularly valuable in complex maritime environments, where factors such as traffic density, navigational patterns, and environmental conditions can influence a ship's trajectory. In [16], the proposed model was based on an encoder-decoder architecture that processes sequential data from AIS records. The introduction of a dual-attention mechanism allows the model to focus on both temporal and spatial aspects of the ship's trajectory. The first attention mechanism is applied to the temporal sequence of past positions, enabling the model to prioritize time steps that are more indicative of future movements. The second attention mechanism focuses on the spatial relationship between the ship and its surrounding environment, taking into account factors such as nearby vessels, navigational constraints, and environmental conditions. This dual-attention approach allows the model to dynamically adjust its focus, leading to more accurate trajectory predictions.
+
+## III. METHODOLOGICAL DESIGN
+
+## A. Overview
+
+This paper explores the application of BLS to the segmentation, reconstruction, and prediction of AIS trajectories, where AIS data provides critical information for monitoring and predicting the movements of vessels. The main processes of this study are shown in Figure 1 and listed as follows.
+
+
+
+Fig. 1. Overview of the proposed method
+
+(1) AIS Data Segmentation
+
+- Data Collection: AIS data is collected, capturing ship positions, speeds, and timestamps.
+
+- Trajectory Segmentation: The continuous AIS data is segmented into smaller trajectory segments based on specific criteria, such as time intervals or changes in ship direction.
+
+- Feature Extraction: Features like speed, heading, and positional changes are extracted from each segment to characterize the trajectory.
+
+## (2) Trajectory Reconstruction
+
+- Reconstruction Algorithm: A reconstruction algorithm is applied to the segmented data to ensure that the trajectory segments are accurately aligned and represent the actual movement patterns of the vessel.
+
+- Interpolation and Smoothing: Missing data points are interpolated, and the trajectory is smoothed to eliminate noise and improve the quality of the data.
+
+(3) Training Data Construction
+
+- Input-Output Pairing: The segments are paired with their corresponding future positions to create training datasets.
+
+- Normalization: Data normalization is performed to ensure consistency across all features, facilitating efficient learning by the BLS model.
+
+(4) Prediction Model Establishment
+
+- Broad Learning System (BLS): The BLS model is established with a flat network structure, which efficiently maps the input features to the predicted ship trajectories.
+
+- Training and Validation: The model is trained and validated using the constructed datasets, with the objective of minimizing prediction error.
+
+- Real Prediction: Once trained, the BLS model is capable of making real-time predictions of ship trajectories based on incoming AIS data.
+
+## B. Trajectory Segmentation Using K-Means
+
+## (1) K-means (KM)
+
+KM is a partition-based clustering algorithm that divides a set of data points into $K$ clusters, where each point belongs to the cluster with the nearest mean value. The objective of the KM algorithm is to minimize the sum of squared distances between the data points and their respective centroids. The classic KM algorithm follows these steps:
+
+- Initialization: Choose $\mathrm{K}$ initial centroids, which can be selected randomly or based on some heuristic.
+
+- Assignment: Assign each data point to the nearest centroid based on the Euclidean distance.
+
+- Update: Recalculate the centroids as the mean of all data points assigned to each cluster.
+
+- Iteration: Repeat the assignment and update steps until convergence, which occurs when the centroids no longer change significantly.
+
+The K-means objective can be expressed as:
+
+$$
+{\operatorname{argmin}}_{\left\{ {C}_{k}\right\} }\mathop{\sum }\limits_{{k = 1}}^{K}\mathop{\sum }\limits_{{{x}_{i} \in {C}_{k}}}{\left| \left| {x}_{i} - {\mu }_{k}\right| \right| }^{2} \tag{3}
+$$
+
+where ${x}_{i}$ denotes a data point, ${\mu }_{k}$ denotes the mean value of cluster ${C}_{k}$ , and $\parallel \cdot \parallel$ denotes the Euclidean norm.
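
The initialization-assignment-update-iteration loop above can be sketched as a minimal Lloyd's-algorithm implementation (an illustrative sketch, not the authors' code; names and sizes are ours):

```python
import numpy as np

# Minimal Lloyd's algorithm for the K-means objective.
def kmeans(points, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialization: pick k distinct data points as centroids
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assignment: nearest centroid by Euclidean distance
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update: centroid = mean of all points assigned to the cluster
        new = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):   # Iteration: stop at convergence
            break
        centroids = new
    return labels, centroids

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels, cents = kmeans(pts, k=2)
print(labels[0] == labels[1], labels[2] == labels[3])  # two tight clusters
```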
+
+## (2) Trajectory Segmentation Using K-Means
+
+In the context of maritime traffic, ship trajectories are often complex and vary significantly depending on factors such as speed, heading, and environmental conditions. To effectively analyze these trajectories, they can be segmented into distinct phases or patterns using KM clustering. The segmentation process typically involves the following steps (see Figure 2):
+
+- Data Preprocessing: Ship trajectory data $T$ in Eq. (4), usually recorded as time-stamped latitude $\left( {\mathrm{{Lat}}}_{i}\right)$ and longitude $\left( {\mathrm{{Lon}}}_{i}\right)$ coordinates ${P}_{i}$ in Eq. (5), is first preprocessed. This step involves filtering out noise, interpolating missing data points, and normalizing the data.
+
+$$
+T = \left\{ {{P}_{1},{P}_{2},\ldots {P}_{n}}\right\} \tag{4}
+$$
+
+$$
+{P}_{i} = \left( {{\text{ Lat }}_{i},{\text{ Lon }}_{i}}\right) \tag{5}
+$$
+
+- Feature Extraction: Key features ${F}_{j}$ in Eq. (6) such as latitude, longitude, speed $\left( {\mathrm{{Spe}}}_{j}\right)$ and course $\left( {\mathrm{{Cou}}}_{j}\right)$ are extracted from the trajectory data. These features are used as input for the KM clustering algorithm given in Section B(1).
+
+$$
+{F}_{j} = \left( {{\text{ Lat }}_{j},{\text{ Lon }}_{j},{\text{ Spe }}_{j},{\text{ Cou }}_{j}}\right) \tag{6}
+$$
+
+- Clustering: The KM algorithm is applied to the extracted features to segment the trajectory into $\mathrm{K}$ clusters $C$ in Eqs. (7) and (8). Each cluster represents a distinct phase of the ship's movement, such as cruising, turning, or slowing down.
+
+$$
+C = \left\{ {{C}_{1},{C}_{2},\ldots {C}_{K}}\right\} \tag{7}
+$$
+
+$$
+{C}_{k} = \left( {{F}_{k,1},{F}_{k,2},\ldots ,{F}_{k, m}}\right) \tag{8}
+$$
+
+- Analysis: The resulting segments are analyzed to identify patterns, such as common routes, frequent turning points, or areas where vessels tend to slow down.
+
+
+
+Fig. 2. Flowchart of trajectory segmentation based on KM clustering
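
As a hedged sketch of the feature-extraction step in Eq. (6), the snippet below derives speed and course from consecutive (Lat, Lon) fixes using a flat-earth approximation; the function name, unit time step, and approximation are our assumptions, not the paper's method:

```python
import math

# Illustrative feature extraction for Eq. (6): derive speed and course
# from consecutive (lat, lon) fixes. The flat-earth approximation is
# adequate only for short, port-scale trajectory segments.
def extract_features(track, dt=1.0):
    """track: list of (lat, lon) in degrees; returns F_j = (lat, lon, spe, cou)."""
    feats = []
    for (la1, lo1), (la2, lo2) in zip(track, track[1:]):
        dy = (la2 - la1) * 60.0                               # nautical miles
        dx = (lo2 - lo1) * 60.0 * math.cos(math.radians(la1))
        speed = math.hypot(dx, dy) / dt                       # nm per time unit
        course = math.degrees(math.atan2(dx, dy)) % 360.0     # 0 deg = north
        feats.append((la2, lo2, speed, course))
    return feats

track = [(38.90, 121.60), (38.91, 121.60), (38.92, 121.60)]
for f in extract_features(track):
    print(f)  # due-north legs: course 0, equal speeds
```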
+
+## C. Construction of Time-Differential Data
+
+Time-differential data construction is widely used to extract temporal features from sequential data by calculating the differences between successive data points over time. This approach is particularly useful in time-series analysis, such as ship trajectory prediction, where capturing the dynamics of movement is crucial.
+
+Given time series data $X$ as shown in Eq. (9), the notation ${x}_{t}$ denotes the data point at time $t$ .
+
+$$
+X = \left\{ {{x}_{1},{x}_{2},\ldots {x}_{T}}\right\} \tag{9}
+$$
+
+The time-differential data can be constructed by computing the difference between successive points as shown in Eq. (10),
+
+$$
+\Delta {x}_{t} = {x}_{t} - {x}_{t - 1} \tag{10}
+$$
+
+where $\Delta {x}_{t}$ denotes the time difference of the trajectory point at time $t$ , ${x}_{t}$ denotes the point at time $t$ , and ${x}_{t - 1}$ denotes the point at time $t - 1$ , respectively. This process transforms the original time series into a new time-differential sequence as shown in Eq. (11),
+
+$$
+{\Delta X} = \left\{ {\Delta {x}_{2},\Delta {x}_{3},\ldots ,\Delta {x}_{T}}\right\} \tag{11}
+$$
+
+which represents the changes in the trajectory over time.
+
+The time-differential data is crucial in contexts where the rate of change is more informative than the original data points. In ship trajectory prediction, the differences in position over time provide insights into the velocity and acceleration of the vessel, which are key factors in predicting future positions.
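
The construction in Eqs. (10)-(11), applied once for first-order and twice for second-order differences, can be sketched as:

```python
# First- and second-order differencing per Eqs. (10)-(11).
def diff(series):
    """Delta x_t = x_t - x_{t-1}; the result is one element shorter."""
    return [b - a for a, b in zip(series, series[1:])]

positions = [0.0, 1.0, 3.0, 6.0, 10.0]
d1 = diff(positions)        # velocity-like sequence
d2 = diff(d1)               # acceleration-like sequence
print(d1)  # [1.0, 2.0, 3.0, 4.0]
print(d2)  # [1.0, 1.0, 1.0]
```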
+
+## D. Trajectory Prediction Using BLS
+
+## (1) Basic concept of BLS
+
+BLS is an emerging learning architecture designed to provide efficient and scalable learning by expanding the network width rather than its depth as shown in Figure 3. This approach is particularly effective for tasks where computational efficiency and rapid model updates are critical.
+
+
+
+Fig. 3. Flat structure between input and output layer of BLS
+
+BLS was first introduced by C. L. P. Chen and his colleagues as an alternative to deep learning models, which rely on increasing the number of layers to improve performance [17, 18]. Instead, BLS focuses on expanding the width of the network by increasing the number of feature nodes and enhancement nodes as shown in Figure 4. This allows for fast incremental learning and makes the model more adaptable to new data without the need for retraining from scratch. The architecture of BLS includes two main types of nodes: feature nodes and enhancement nodes. The feature nodes are directly connected to the input data, while the enhancement nodes are used to improve the learning capacity of the model. For the construction of feature nodes, given an input matrix $\mathbf{X} \in {\mathbf{R}}^{n \times m}$ , where $n$ is the length of the input data and $m$ is the number of features, the feature nodes can be given as
+
+$$
+{Z}_{i} = \sigma \left( {X{W}_{i} + {b}_{i}}\right) \tag{11}
+$$
+
+
+
+Fig. 4. Feature and Enhancement layers of BLS
+
+where ${W}_{i}$ is the weight of the feature layer, ${b}_{i}$ is the bias term, and $\sigma \left( \cdot \right)$ is the activation function. The enhancement nodes are generated from the feature nodes to extract more complex relationships, as in Eq. (12),
+
+$$
+{H}_{j} = \varepsilon \left( {Z{V}_{j} + {c}_{j}}\right) \tag{12}
+$$
+
+where ${V}_{j}$ is the weight of the enhancement layer, ${c}_{j}$ is the bias term, and $\varepsilon \left( \cdot \right)$ is also an activation function. The final output of BLS is obtained by combining the feature and enhancement nodes. The learning process in BLS involves solving a linear system to determine the output weights, which can be done using a pseudoinverse operation as in Eq. (13),
+
+$$
+O = {P}^{ + }Y \tag{13}
+$$
+
+where $O$ denotes the weights of the output layer, $Y$ denotes the target output, and ${P}^{ + }$ denotes the ridge-regularized pseudoinverse of the matrix $A$ formed by concatenating the outputs of the feature and enhancement nodes, as in Eq. (14).
+
+$$
+{P}^{ + } = {\left( {A}^{T}A + \lambda I\right) }^{-1}{A}^{T} \tag{14}
+$$
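
A minimal end-to-end sketch of the feature nodes, enhancement nodes, and pseudoinverse output weights described above (random weights, tanh activations, ridge-form pseudoinverse; all sizes and names are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

# Illustrative BLS sketch: random feature/enhancement mappings, output
# weights solved with a ridge-regularized pseudoinverse (A^T A + lam I)^-1 A^T Y.
def bls_train(X, Y, n_feat=10, n_enh=20, lam=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_feat))
    b = rng.standard_normal(n_feat)
    V = rng.standard_normal((n_feat, n_enh))
    c = rng.standard_normal(n_enh)
    Z = np.tanh(X @ W + b)                  # feature nodes
    H = np.tanh(Z @ V + c)                  # enhancement nodes
    A = np.hstack([Z, H])                   # concatenated node outputs
    O = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)
    return (W, b, V, c, O)

def bls_predict(model, X):
    W, b, V, c, O = model
    Z = np.tanh(X @ W + b)
    A = np.hstack([Z, np.tanh(Z @ V + c)])
    return A @ O

X = np.random.default_rng(1).uniform(-1, 1, (200, 4))
Y = X[:, :1] + 0.5 * X[:, 1:2]              # simple synthetic target
model = bls_train(X, Y)
err = float(np.sqrt(np.mean((bls_predict(model, X) - Y) ** 2)))
print(round(err, 5))
```

The flat structure means only the output weights $O$ are learned; the random hidden mappings are fixed, which is what makes training a single linear solve.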
+
+When the trajectory data $\left( {{\text{ Lat }}_{i},{\text{ Lon }}_{i},{\text{ Spe }}_{i},{\text{ Cou }}_{i}}\right)$ is given as input, the BLS is trained on the sequence up to time $t$ , and the future trajectory position at time $t + 1$ can be obtained as shown in Fig. 5.
+
+
+
+
+Fig. 5. Trajectory prediction based on BLS
+
+## IV. EXPERIMENTS AND ANALYSES
+
+## A. Data Preparation
+
+The raw AIS data covers global waters and amounts to approximately 72 GB, comprising about 1.3 billion AIS records. This vast dataset was imported into a database for storage and retrieval. To simplify the data processing, the Dalian Port waters were selected as the study area, resulting in the extraction of 4.08 million AIS records. From this subset, 389 AIS records of a specific ship from October 1, 2016, were further selected for detailed analysis.
+
+Given the characteristics of AIS data, clustering techniques were applied to the raw AIS data to enhance the fitting process. The main idea is to cluster the AIS data into different groups and then fit the data within each cluster separately, which helps in capturing the patterns more effectively. The analysis focuses on latitude and longitude data, and the comparison schemes are divided into three categories.
+
+A. Unprocessed raw data: The AIS data is used in its original, unmodified form. This raw data includes the direct measurements from the AIS system, such as timestamp, latitude, longitude, speed, and course. This serves as a baseline for comparison, allowing for the analysis of how processing techniques affect the data fitting and modeling performance.
+
+B. First-order difference data: The data is transformed using a first-order time difference as given in Section III.C, which calculates the difference between consecutive data points to emphasize changes over time.
+
+C. Second-order difference data: The first-order difference data is differenced once more, using the same process of Section III.C, to obtain second-order differences.
+
+These three data categories (A, B, and C) allow for a comprehensive analysis of the AIS data, with each category providing different insights into the movement patterns of vessels.
+
+## B. Evaluation Index
+
+The root mean square error (RMSE) is a widely used measure of the differences between predicted values by a model and the actual observed values. It provides a standard deviation of the residuals (prediction errors), which helps in understanding how concentrated the data is around the best-fit line. The RMSE formula is expressed as Eq. (15)
+
+$$
+\text{ RMSE } = \sqrt{\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{\left( {y}_{i} - {y}_{i}^{\prime }\right) }^{2}} \tag{15}
+$$
+
+where $n$ is the number of observations, ${y}_{i}$ is the actual observed value, and ${y}_{i}^{\prime }$ is the predicted value by the model.
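
Eq. (15) translates directly into code; a minimal sketch:

```python
import math

# Direct implementation of the RMSE formula in Eq. (15).
def rmse(actual, predicted):
    n = len(actual)
    return math.sqrt(sum((y - yp) ** 2 for y, yp in zip(actual, predicted)) / n)

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # sqrt(4/3) ≈ 1.155
```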
+
+## C. Prediction Results
+
+(1) Data of Category A
+
+As shown in Figure 6, both the linear model (LM) and the broad learning system (BLS) model exhibit a decreasing trend in RMSE as the number of clusters increases. This indicates that both models improve in accuracy with more refined clustering.
+
+However, the RMSE of the BLS model is consistently lower than that of the LM model across all cluster counts, demonstrating that the BLS model achieves a higher fitting accuracy compared to the LM model.
+
+
+
+
+
+Fig. 6. Results comparison in category A
+
+## (2) Data of Category B
+
+As shown in Figure 7, both the LM and BLS models show a consistent downward trend in RMSE as the number of clusters increases. This indicates that increasing the number of clusters improves the fitting accuracy for both models.
+
+However, for every $K$ value, the RMSE of the BLS model is consistently lower than that of the LM model, indicating that the BLS model provides higher fitting accuracy.
+
+
+
+Fig. 7. Results comparison in category B
+
+## (3) Data of Category C
+
+As shown in Figure 8, both the LM and BLS models exhibit a consistent downward trend in RMSE as the number of clusters increases, indicating improved fitting accuracy with more clustering.
+
+However, for each $K$ value, the RMSE of the BLS model is consistently lower than that of the LM model, demonstrating that the BLS model achieves higher fitting accuracy.
+
+
+
+Fig. 8. Results comparison in category C
+
+## V. CONCLUSION
+
+In this paper, we developed a BLS-based trajectory prediction model, applying the KM clustering algorithm to segment trajectories and processing the data within each cluster. The segmented LM and BLS models were then used to fit the case AIS data of Dalian Port. Using latitude and longitude data, three comparison scenarios were established: raw data, first-order difference data, and second-order difference data. The experimental results show that, compared to the segmented LM method, BLS significantly improves accuracy, exhibits better stability, and has a faster runtime when processing AIS data. The comparative analysis of trajectory prediction methods using BLS provides a foundation for establishing predictive models for large-scale AIS trajectories in the future. It also offers a new predictive approach for maritime authorities to monitor maritime traffic, demonstrating significant exploratory value for practical applications.
+
+## ACKNOWLEDGMENT
+
+This work was supported in part by the National Natural Science Foundation of China (grant nos. 52131101 and 51939001), the Liao Ning Revitalization Talents Program (grant no. XLYC1807046), and the Science and Technology Fund for Distinguished Young Scholars of Dalian (grant no. 2021RJ08).
+
+## REFERENCES
+
+[1] G. Pallotta, M. Vespe, and K. Bryan, "Vessel pattern knowledge discovery from AIS data: A framework for anomaly detection and route prediction," Entropy, vol.15, no.6, pp.2218-2245, 2013.
+
+[2] E. Tu, G. Zhang, L. Rachmawati, E. Rajabally, and G. B. Huang, "Exploiting AIS data for intelligent maritime navigation: A comprehensive survey from data to methodology," IEEE Transactions on Intelligent Transportation Systems, vol. 19, no.5, pp.1559-1582, 2018.
+
+[3] Z. Zhang, G. Ni and Y. Xu, "Ship Trajectory Prediction based on LSTM Neural Network," 2020 IEEE 5th Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 2020, pp. 1356-1364.
+
+[4] M. Liang, L. Weng, R. Gao, Y. Li, and L. Du, "Unsupervised maritime anomaly detection for intelligent situational awareness using AIS data," Knowledge-Based Systems, vol.284, 2024.
+
+[5] J. Jiang, Y. Zuo, Y. Xiao, W. Zhang, and T. Li, "STMGF-Net: A Spatiotemporal Multi-Graph Fusion Network for Vessel Trajectory Forecasting in Intelligent Maritime Navigation," IEEE Transactions on Intelligent Transportation Systems, Oct. 2024 (early access).
+
+[6] X. Luo, J. Wang, J. Li, H. Lu, Q. Lai, and X. Zhu, "Research on Ship Trajectory Prediction Using Extended Kalman Filter and Least-Squares Support Vector Regression Based on AIS Data," In: Zhang, Z. (eds) 2021 6th International Conference on Intelligent Transportation Engineering (ICITE 2021). ICITE 2021. Lecture Notes in Electrical Engineering, vol 901. Springer, Singapore, 2022.
+
+[7] X. Liu, W. He, J. Xie and X. Chu, "Predicting the Trajectories of Vessels Using Machine Learning," 2020 5th International Conference on Control, Robotics and Cybernetics (CRC), Wuhan, China, 2020, pp.66-70.
+
+[8] H. Tang, Y. Yin, and H. Shen, "A model for vessel trajectory prediction based on long short-term memory neural network," Journal of Marine Engineering & Technology, vol.21, no.3, pp.136-145, 2019.
+
+[9] C. L. P. Chen and Z. Liu, "Broad Learning System: An Effective and Efficient Incremental Learning System Without the Need for Deep Architecture," in IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 1, pp. 10-24, 2018.
+
+[10] D. C. Montgomery, and G. C. Runger, "Applied Statistics and Probability for Engineers," John Wiley & Sons, 2014.
+
+[11] C. N. Burger, W. Kleynhans, and T. L. Grobler, "Extended linear regression model for vessel trajectory prediction with a-priori AIS information," Geo-Spatial Information Science, vol.27, no.1, pp.202-220, 2022.
+
+[12] H. Ma, Y. Zuo, and T. Li, "Vessel Navigation Behavior Analysis and Multiple-Trajectory Prediction Model Based on AIS Data," Journal of Advanced Transportation, vol.2022, article number 6622862, 2022.
+
+[13] J. Liu, G. Shi and K. Zhu, "Online Multiple Outputs Least-Squares Support Vector Regression Model of Ship Trajectory Prediction Based on Automatic Information System Data and Selection Mechanism," IEEE Access, vol.8, pp.154727-154745, 2020.
+
+[14] J. Jiang and Y. Zuo, "Prediction of Ship Trajectory in Nearby Port Waters Based on Attention Mechanism Model," Sustainability vol. 15, article number 7435, 2023.
+
+[15] L. Zhao, Y. Zuo, T. Li, and C.L.P. Chen, "Application of an Encoder-Decoder Model with Attention Mechanism for Trajectory Prediction Based on AIS Data: Case Studies from the Yangtze River of China and the Eastern Coast of the U.S.," Journal of Marine Science and Engineering vol.11, no.8, article number 1530, 2023.
+
+[16] L. Zhao, Y. Zuo, W. Zhang, T. Li, and C.L.P. Chen, "End-to-end model-based trajectory prediction for ro-ro ship route using dual-attention mechanism," Frontiers in Computation Neuroscience, vol.21, no.18, article number 1358437, 2024.
+
+[17] C.L.P. Chen, Z. Liu, C. Feng, and S. Liu, "Universal Approximation Capability of Broad Learning System and Its Structural Variations," IEEE Transactions on Neural Networks and Learning Systems, vol.29, no. 10, pp.4336-4349, 2018.
+
+[18] Z. Liu, C. L. P. Chen, C. Feng, and S. Liu, "Broad Learning System: An Effective and Efficient Incremental Learning System Without the Need of Deep Architecture," IEEE Transactions on Neural Networks and Learning Systems, vol.29, no.1, pp.10-24, 2018.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/gNLTnJB0Cm/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/gNLTnJB0Cm/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..2d4e1d0589ddeb7c1fabea7699fac420750988da
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/gNLTnJB0Cm/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,345 @@
+§ SEGMENTATION RECONSTRUCTION AND PREDICTION OF AIS TRAJECTORY BASED ON BROAD LEARNING SYSTEM
+
+Baohua He
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+hebaohua@dlou.edu.cn
+
+Yi Zuo
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+zuo@dlmu.edu.cn
+
+Weihong Wang
+
+Collaborative Innovation Center
+
+for Transport Studies
+
+Dalian Maritime University
+
+Dalian, China
+
+Licheng Zhao
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+zhaolichengzx@dlmu.edu.cn
+
+Tieshan Li
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China
+
+Chengdu, China
+
+litieshan073@uestc.edu.cn
+
+C. L. Philip Chen
+
+Computer Science College
+
+South China University of
+
+Technology
+
+Guangzhou, China
+
+Abstract—With the widespread adoption of the automatic identification system (AIS), the volume of collected AIS data has become vast, making the analysis and processing of vessel trajectories highly time consuming. Linear models (LMs) are simple and fast, and are therefore widely applied in trajectory reconstruction and prediction. However, existing studies generally treat individual vessel trajectories independently, and the accuracy and stability of LMs remain unsuitable for generalized processing of a large number of trajectories. To address this limitation, we adopt the broad learning system (BLS) to establish a trajectory prediction model. This paper comprises three parts: trajectory segmentation, feature extraction, and model training. Firstly, K-means clustering is used to segment the trajectories, dividing them into small pieces based on navigation characteristics. Secondly, considering the time-series nature of trajectory data, the segmented trajectories are processed using first-order and second-order differencing to obtain training data. Finally, by training on the time-series data of different trajectories, generalized trajectory prediction can be achieved.
+
+Keywords—AIS data, Trajectory prediction, Trajectory component, Segmentation, K-means, Broad learning system
+
+§ I. INTRODUCTION
+
+The automatic identification system (AIS) is an important navigation service for information exchange between ships and between ships and shore stations, and has been widely adopted. The extensive collection of vessel navigation data via AIS enables effective prediction of future vessel trajectories, which is crucial for ensuring navigation safety and improving traffic efficiency [1-2]. However, the large volume of data collected by AIS, particularly dynamic information such as longitude, latitude, speed, and course that must be continuously recorded and stored, presents challenges for subsequent data processing and practical application. In particular, trajectory prediction based on AIS data requires a method that can handle the task quickly, efficiently, and accurately [3-4].
+
+Existing methods for trajectory prediction mostly comprise traditional regression analysis and classic machine learning. Among traditional trajectory prediction approaches, the primary ones are linear regression analysis (LRA) and the least squares method (LSM) [5-6]. Among machine learning-based models, the main techniques include the support vector machine (SVM) and artificial neural network (ANN) [7-8]. The study in [5] presents a framework for vessel pattern recognition based on AIS data, and includes LRA to identify vessel movement patterns for route prediction. The study in [6] proposes an improved LSM for vessel trajectory prediction, so as to better handle nonlinear and dynamic vessel movements and enhance the accuracy of trajectory predictions based on AIS data. The study in [7] explores vessel trajectory prediction in the northern South China Sea, using SVM to recognize navigation patterns in vessel trajectories, and outperforms traditional linear models in complex maritime environments. In [8], Tang et al. focus on vessel trajectory prediction based on a recurrent neural network (RNN), which efficiently handles sequential data of vessel movements and shows superior performance in trajectory prediction tasks compared with other models.
+
+This paper applies the broad learning system (BLS) [9] to the prediction of vessel trajectories, and the work is divided into three parts. Firstly, the K-means method is used to segment the AIS trajectories. The trajectories are divided into several segments based on navigation characteristics, with the number of segments being an adjustable parameter. This step reduces the complexity of the trajectory data so as to enhance the efficiency and accuracy of model training and prediction. Secondly, the first-order and second-order differences of the trajectory points within each segment are calculated along the temporal sequence of the AIS data. This step captures the dynamic relationships between trajectory points and improves the model's sensitivity to temporal data and its predictive capability. Finally, BLS is used to establish the prediction model for the segmented trajectories. By training on the temporal sequence data of different trajectories, the number of segments is optimized to achieve generalized trajectory prediction. In the numerical experiments, this paper takes the AIS data of Dalian Port as an example to model and predict the trajectories of ships. The experimental results show that BLS has higher accuracy and shorter computation time in trajectory prediction. This demonstrates the effectiveness and superiority of BLS in AIS trajectory prediction, providing new ideas and methods for future maritime traffic management and trajectory prediction.
+
+The structure of this paper is organized as follows. Section I introduces the research background and purpose. Section II provides an overview of current research related to trajectory prediction. Section III explains the theoretical foundation of the proposed algorithm. Section IV presents the experimental results and analyses. Section V concludes this study.
+
+§ II. RELATED STUDIES
+
+§ A. TRAJECTORY PREDICTION BASED ON LINEAR METHOD
+
+Trajectory prediction is a critical component in various applications, such as maritime navigation, air traffic control, and autonomous driving. Accurate trajectory prediction can enhance safety, optimize routes, and improve efficiency. Among the numerous methods developed for trajectory prediction, linear methods remain popular due to their simplicity, computational efficiency, and ease of implementation.
+
+Linear regression analysis (LRA) is a statistical method used to model the relationship between a dependent variable and one or more independent variables [10]. In the trajectory prediction of [5], LRA is employed to predict the future positions of a moving object based on its past positions; the basic form of LRA is given by Equation (1).
+
+$$
+\mathrm{y} = {a}_{1}{x}_{1} + {a}_{2}{x}_{2} + \cdots {a}_{n}{x}_{n} + \varepsilon \tag{1}
+$$
+
+where $y$ is the dependent variable (e.g., a future position), $\left( {{x}_{1},{x}_{2},\ldots ,{x}_{n}}\right)$ are the independent variables (e.g., past positions), $\left( {{a}_{1},{a}_{2},\ldots ,{a}_{n}}\right)$ are the coefficients, and $\varepsilon$ is the error term. In [11], Peterson and Hovem analyzed vessel traffic patterns using AIS data and established a trajectory prediction model based on LRA. That research explored the reliability of AIS data and the impact of human errors on data quality, and also discussed the application of LRA in data analysis, particularly for preliminary modeling in simple trajectory prediction. Together, these studies cover various applications of LRA in vessel trajectory prediction, from preliminary modeling to serving as benchmark models, and provide a comprehensive view of the limitations of linear regression in AIS trajectory prediction [5-6, 10-11].
+
+The least squares method (LSM) is also a statistical technique, which obtains a line fitting a given data set by minimizing the sum of squared differences between observed and predicted values.
+
+The LSM is widely used in trajectory prediction due to its simplicity, efficiency, and effectiveness in minimizing prediction errors. Given the dataset $\left( {{x}_{1},{y}_{1}}\right) ,\left( {{x}_{2},{y}_{2}}\right) ,\ldots ,\left( {{x}_{n},{y}_{n}}\right)$, the basic form of LSM is represented by Equation (2).
+
+$$
+{y}_{i} = {b}_{0} + {b}_{1}{x}_{i} \tag{2}
+$$
+
+where the coefficients ${b}_{0}$ and ${b}_{1}$ are calculated by minimizing Equation (3) [6].
+
+$$
+\text{ Minimize }\mathop{\sum }\limits_{{i = 1}}^{n}{\left( {y}_{i} - \left( {b}_{0} + {b}_{1}{x}_{i}\right) \right) }^{2} \tag{3}
+$$
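
As an illustration, the minimization in Eq. (3) has a closed-form solution via the normal equations. The following sketch (with hypothetical data, assuming NumPy is available) fits $b_0$ and $b_1$:

```python
import numpy as np

# Hypothetical 1-D observations (x_i = time step, y_i = position along a route)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.9, 4.2, 5.8, 8.1])

# Minimize Eq. (3): sum_i (y_i - (b0 + b1*x_i))^2, using the design matrix [1, x_i]
A = np.column_stack([np.ones_like(x), x])
(b0, b1), *_ = np.linalg.lstsq(A, y, rcond=None)

y_pred = b0 + b1 * x  # fitted line of Eq. (2)
```

The same least-squares machinery extends to multiple past positions by adding columns to the design matrix.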
+
+In many cases, LSM is combined with other methods, such as support vector regression (SVR), in a hybrid approach. In [6], LSM-SVR was used as an advanced methodology for predicting ship trajectories from AIS data. The primary purpose was to enhance the accuracy and reliability of ship trajectory predictions, which are crucial for ensuring maritime safety and improving navigational efficiency. LSM-SVR is employed for its robust capability to handle nonlinear features in AIS data and its efficiency in regression and predictive modeling; by integrating the two methods, the study achieved more precise trajectory predictions. In [13], Liu et al. also proposed LSM-SVR for ship trajectory prediction based on AIS data. Their model was augmented with a selection mechanism that dynamically selects relevant data inputs, and this hybrid approach was likewise tailored to improve prediction accuracy and computational efficiency in maritime applications. The purpose of that research was to develop a model providing accurate, real-time predictions of ship trajectories, which is essential for enhancing maritime safety and efficiency.
+
+§ B. TRAJECTORY PREDICTION BASED ON NEURAL NETWORK
+
+Neural networks (NN) have been extensively studied and applied in trajectory prediction tasks due to their flexibility and strong ability to capture complex patterns in data, and several types of neural network models have been utilized for ship trajectory prediction. In [8], a vessel trajectory prediction model using Long Short-Term Memory (LSTM) networks is proposed, which leverages the sequence prediction capabilities of LSTM to accurately forecast future vessel positions from historical AIS data. This approach captures temporal dependencies and dynamic changes in vessel trajectories, improving prediction accuracy over traditional methods. The LSTM model is trained on large AIS datasets, ensuring robustness and reliability in real-world maritime navigation applications and enhancing safety and efficiency.
+
+Recently, attention-based models have also been applied to trajectory prediction. In [14], a novel approach to predicting ship trajectories in nearby port waters using an attention mechanism was presented. Accurate prediction of ship movements is essential for ensuring maritime safety, optimizing port operations, and managing traffic efficiently; traditional methods, which often rely on physical models or statistical techniques, may not fully capture the complex interactions and dynamic environment of port waters. In [15], the application of an encoder-decoder model integrated with an attention mechanism for predicting ship trajectories from AIS data was explored. The encoder-decoder architecture, commonly used in natural language processing, is well suited to sequential data, making it an effective choice for modeling the movement patterns of ships over time. The attention mechanism plays a crucial role in enhancing prediction accuracy by allowing the system to focus selectively on the most relevant segments of the input sequence, which is particularly valuable in complex maritime environments, where factors such as traffic density, navigational patterns, and environmental conditions influence a ship's trajectory. In [16], the proposed model was based on an encoder-decoder architecture that processes sequential AIS records. The introduction of a dual-attention mechanism allows the model to focus on both temporal and spatial aspects of the ship's trajectory: the first attention mechanism is applied to the temporal sequence of past positions, enabling the model to prioritize time steps that are more indicative of future movements, while the second focuses on the spatial relationship between the ship and its surroundings, taking into account nearby vessels, navigational constraints, and environmental conditions. This dual-attention approach allows the model to dynamically adjust its focus, leading to more accurate trajectory predictions.
+
+§ III. METHODOLOGICAL DESIGN
+
+§ A. OVERVIEW
+
+This paper explores the application of BLS to the segmentation, reconstruction, and prediction of AIS trajectories, where AIS data provide critical information for monitoring and predicting the movements of vessels. The main processes of this study are shown in Figure 1 and listed as follows.
+
+
+Fig. 1. Overview of the proposed method
+
+(1) AIS Data Segmentation
+
+ * Data Collection: AIS data is collected, capturing ship positions, speeds, and timestamps.
+
+ * Trajectory Segmentation: The continuous AIS data is segmented into smaller trajectory segments based on specific criteria, such as time intervals or changes in ship direction.
+
+ * Feature Extraction: Features like speed, heading, and positional changes are extracted from each segment to characterize the trajectory.
+
+(2) Trajectory Reconstruction
+
+ * Reconstruction Algorithm: A reconstruction algorithm is applied to the segmented data to ensure that the trajectory segments are accurately aligned and represent the actual movement patterns of the vessel.
+
+ * Interpolation and Smoothing: Missing data points are interpolated, and the trajectory is smoothed to eliminate noise and improve the quality of the data.
+
+(3) Training Data Construction
+
+ * Input-Output Pairing: The segments are paired with their corresponding future positions to create training datasets.
+
+ * Normalization: Data normalization is performed to ensure consistency across all features, facilitating efficient learning by the BLS model.
+
+(4) Prediction Model Establishment
+
+ * Broad Learning System (BLS): The BLS model is established with a flat network structure, which efficiently maps the input features to the predicted ship trajectories.
+
+ * Training and Validation: The model is trained and validated using the constructed datasets, with the objective of minimizing prediction error.
+
+ * Real Prediction: Once trained, the BLS model is capable of making real-time predictions of ship trajectories based on incoming AIS data.
+
+§ B. TRAJECTORY SEGMENTATION USING K-MEANS
+
+§ (1) K-MEANS (KM)
+
+$\mathrm{{KM}}$ is a partition-based clustering algorithm that divides a set of data points into $\mathrm{K}$ clusters, where each point belongs to the cluster with the nearest mean value. The objective of KM algorithm is to minimize the sum of squared distances between the data points and their respective centroids. The classic KM algorithm follows these steps:
+
+ * Initialization: Choose $\mathrm{K}$ initial centroids, which can be selected randomly or based on some heuristic.
+
+ * Assignment: Assign each data point to the nearest centroid based on the Euclidean distance.
+
+ * Update: Recalculate the centroids as the mean of all data points assigned to each cluster.
+
+ * Iteration: Repeat the assignment and update steps until convergence, which occurs when the centroids no longer change significantly.
+
+The K-means clustering can be expressed as:
+
+$$
+{\operatorname{argmin}}_{\left\{ {C}_{k}\right\} }\mathop{\sum }\limits_{{k = 1}}^{K}\mathop{\sum }\limits_{{{x}_{i} \in {C}_{k}}}{\left| \left| {x}_{i} - {\mu }_{k}\right| \right| }^{2} \tag{3}
+$$
+
+where ${x}_{i}$ denotes a data point, ${\mu }_{k}$ denotes the centroid (mean) of cluster ${C}_{k}$, and $\parallel \cdot {\parallel }^{2}$ denotes the squared Euclidean distance.
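
The four steps above (initialization, assignment, update, iteration) can be sketched in a few lines of NumPy. The deterministic stride-based initialization is an illustrative simplification; in practice, random or K-means++ initialization is common.

```python
import numpy as np

def kmeans(X, K, iters=100):
    """Minimize the K-means objective: sum over clusters of squared
    distances between points and their centroids."""
    X = np.asarray(X, dtype=float)
    step = max(len(X) // K, 1)
    mu = X[::step][:K].copy()                 # simple deterministic initialization
    for _ in range(iters):
        # assignment step: label each point with its nearest centroid
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # update step: recompute each centroid as the mean of its points
        mu_new = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                           else mu[k] for k in range(K)])
        if np.allclose(mu_new, mu):           # convergence: centroids stable
            break
        mu = mu_new
    return labels, mu
```

For two well-separated groups of points, the returned labels split them cleanly into two clusters.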
+
+§ (2) TRAJECTORY SEGMENTATION USING K-MEANS
+
+In the context of maritime traffic, ship trajectories are often complex and vary significantly depending on factors such as speed, heading, and environmental conditions. To effectively analyze these trajectories, they can be segmented into distinct phases or patterns using KM clustering. The segmentation process typically involves the following steps (see Figure 2):
+
+ * Data Preprocessing: Ship trajectory data $T$ in Eq. (4), usually recorded as time-stamped latitude $\left( {\mathrm{{Lat}}}_{i}\right)$ and longitude $\left( {\mathrm{{Lon}}}_{i}\right)$ coordinates ${P}_{i}$ in Eq. (5), is first preprocessed. This involves filtering out noise, interpolating missing data points, and normalizing the data.
+
+$$
+T = \left\{ {{P}_{1},{P}_{2},\ldots {P}_{n}}\right\} \tag{4}
+$$
+
+$$
+{P}_{i} = \left( {{\text{ Lat }}_{i},{\text{ Lon }}_{i}}\right) \tag{5}
+$$
+
+ * Feature Extraction: Key features ${F}_{j}$ in Eq. (6) such as latitude, longitude, speed $\left( {\mathrm{{Spe}}}_{j}\right)$ and course $\left( {\mathrm{{Cou}}}_{j}\right)$ are extracted from the trajectory data. These features are used as input for the KM clustering algorithm given in Section B(1).
+
+$$
+{F}_{j} = \left( {{\text{ Lat }}_{j},{\text{ Lon }}_{j},{\text{ Spe }}_{j},{\text{ Cou }}_{j}}\right) \tag{6}
+$$
+
+ * Clustering: The KM algorithm is applied to the extracted features to segment the trajectory into $\mathrm{K}$ clusters $C$ in Eqs. (7) and (8). Each cluster represents a distinct phase of the ship's movement, such as cruising, turning, or slowing down.
+
+$$
+C = \left\{ {{C}_{1},{C}_{2},\ldots {C}_{K}}\right\} \tag{7}
+$$
+
+$$
+{C}_{k} = \left( {{F}_{k,1},{F}_{k,2},\ldots ,{F}_{k,m}}\right) \tag{8}
+$$
+
+ * Analysis: The resulting segments are analyzed to identify patterns, such as common routes, frequent turning points, or areas where vessels tend to slow down.
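
Before clustering, the feature vectors of Eq. (6) are typically normalized so that latitude, longitude, speed, and course contribute comparably to the distances in the K-means objective. A small sketch with hypothetical AIS records (assuming NumPy):

```python
import numpy as np

# Hypothetical AIS feature rows F_j = (Lat_j, Lon_j, Spe_j, Cou_j), Eq. (6)
records = np.array([
    [38.92, 121.62, 12.3, 45.0],
    [38.93, 121.64, 12.1, 47.0],
    [38.95, 121.65,  8.4, 90.0],
    [38.96, 121.67,  8.1, 92.0],
])

# z-score normalization per feature column: zero mean, unit variance
F = (records - records.mean(axis=0)) / records.std(axis=0)
```

The normalized matrix `F` is then passed to the K-means step of Section B(1) to obtain the clusters of Eqs. (7)-(8).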
+
+
+Fig. 2. Flowchart of trajectory segmentation based on KM clustering
+
+§ C. CONSTRUCTION OF TIME-DIFFERENTIAL DATA
+
+Time-differential data construction is widely used to extract temporal features from sequential data by calculating the differences between successive data points over time. This approach is particularly useful in time-series analysis, e.g., ship trajectory prediction, where capturing the dynamics of movement is crucial.
+
+Given a time series $X$ as shown in Eq. (9), the notation ${x}_{t}$ denotes the data point at time $t$.
+
+$$
+X = \left\{ {{x}_{1},{x}_{2},\ldots {x}_{T}}\right\} \tag{9}
+$$
+
+The time-differential data can be constructed by computing the difference between successive points as shown in Eq. (10),
+
+$$
+\Delta {x}_{t} = {x}_{t} - {x}_{t - 1} \tag{10}
+$$
+
+where $\Delta {x}_{t}$ denotes the time difference of the trajectory point at time $t$, and ${x}_{t}$ and ${x}_{t - 1}$ denote the points at times $t$ and $t - 1$, respectively. This process transforms the original time series into a new time-differential sequence as shown in Eq. (11),
+
+$$
+{\Delta X} = \left\{ {\Delta {x}_{2},\Delta {x}_{3},\ldots ,\Delta {x}_{T}}\right\} \tag{11}
+$$
+
+which represents the changes in the trajectory over time.
+
+The time-differential data is crucial in contexts where the rate of change is more informative than the original data points. In ship trajectory prediction, the differences in position over time provide insights into the velocity and acceleration of the vessel, which are key factors in predicting future positions.
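
Eqs. (10)-(11) correspond directly to first-order differencing of the series; applying the same operation twice yields the second-order differences used in the experiments. A NumPy sketch with hypothetical longitude values:

```python
import numpy as np

# Hypothetical longitude series X = {x_1, ..., x_T}, Eq. (9)
X = np.array([121.60, 121.62, 121.65, 121.69, 121.74])

dX  = np.diff(X)        # first-order differences, Eqs. (10)-(11)
d2X = np.diff(X, n=2)   # second-order differences (differences of dX)
```

Each differencing step shortens the series by one point, so the first-order sequence starts at $\Delta x_2$ as in Eq. (11).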
+
+§ D. TRAJECTORY PREDICTION USING BLS
+
+§ (1) BASIC CONCEPT OF BLS
+
+BLS is an emerging learning architecture designed to provide efficient and scalable learning by expanding the network width rather than its depth as shown in Figure 3. This approach is particularly effective for tasks where computational efficiency and rapid model updates are critical.
+
+
+Fig. 3. Flat structure between input and output layer of BLS
+
+BLS was first introduced by C. L. P. Chen and his colleagues as an alternative to deep learning models, which rely on increasing the number of layers to improve performance [17], [18]. Instead, BLS focuses on expanding the width of the network by increasing the number of feature nodes and enhancement nodes, as shown in Figure 4. This allows for fast incremental learning and makes the model adaptable to new data without retraining from scratch. The architecture of BLS includes two main types of nodes: feature nodes, which are directly connected to the input data, and enhancement nodes, which are used to improve the learning capacity of the model. In the feature node construction, given an input matrix $\mathbf{X} \in {\mathbf{R}}^{n \times m}$, where $n$ is the number of input samples and $m$ is the number of features, the feature nodes are given as
+
+$$
+{Z}_{i} = \sigma \left( {X{W}_{i} + {b}_{i}}\right) \tag{11}
+$$
+
+
+Fig. 4. Feature and enhancement layers of BLS
+
+where ${W}_{i}$ is the weight matrix of the feature layer, ${b}_{i}$ is the bias term, and $\sigma \left( \cdot \right)$ is the activation function. In the enhancement node construction, the enhancement nodes are generated from the feature nodes to extract more complex relationships, as in Eq. (12),
+
+$$
+{H}_{j} = \varepsilon \left( {Z{V}_{j} + {c}_{j}}\right) \tag{12}
+$$
+
+where ${V}_{j}$ is the weight matrix of the enhancement layer, ${c}_{j}$ is the bias term, and $\varepsilon \left( \cdot \right)$ is likewise an activation function. The final output of BLS is obtained by combining the feature and enhancement nodes. The learning process in BLS involves solving a linear system to determine the output weights, which can be done using a pseudoinverse operation as in Eq. (13),
+
+$$
+O = {P}^{ + }Y \tag{13}
+$$
+
+where $O$ denotes the weight of the output layer, $Y$ denotes the target output, and ${P}^{ + }$ denotes the ridge-regularized pseudoinverse of the matrix $A$ formed by concatenating the outputs of the feature and enhancement nodes, computed as in Eq. (14).
+
+$$
+{P}^{ + } = {\left( {A}^{T}A + \lambda I\right) }^{-1}{A}^{T} \tag{14}
+$$
+
+When given the trajectory data $\left( {{\text{ Lat }}_{i},{\text{ Lon }}_{i},{\text{ Spe }}_{i},{\text{ Cou }}_{i}}\right)$ as input, the BLS is trained on the sequence up to time $t$, and the future trajectory position at time $t + 1$ can be obtained as shown in Fig. 5.
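
The construction above can be sketched in a few lines of NumPy: random feature and enhancement mappings followed by a ridge-regularized pseudoinverse for the output weights. The node counts, tanh activation, and regularization value here are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def bls_fit(X, Y, n_feat=20, n_enh=40, lam=1e-3, seed=0):
    """Minimal BLS sketch: random maps + ridge pseudoinverse for output weights."""
    rng = np.random.default_rng(seed)
    Wf = rng.standard_normal((X.shape[1], n_feat)); bf = rng.standard_normal(n_feat)
    Z = np.tanh(X @ Wf + bf)                       # feature nodes
    We = rng.standard_normal((n_feat, n_enh)); be = rng.standard_normal(n_enh)
    H = np.tanh(Z @ We + be)                       # enhancement nodes
    A = np.hstack([Z, H])                          # concatenated node outputs
    # output weights O = (A^T A + lam*I)^{-1} A^T Y (ridge pseudoinverse)
    O = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)
    return Wf, bf, We, be, O

def bls_predict(model, X):
    Wf, bf, We, be, O = model
    Z = np.tanh(X @ Wf + bf)
    H = np.tanh(Z @ We + be)
    return np.hstack([Z, H]) @ O
```

In the paper's setting, each row of `X` would hold the recent (Lat, Lon, Spe, Cou) features and `Y` the next position; since only the linear output weights are solved for, the model can be refit quickly as new AIS records arrive.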
+
+
+Fig. 5. Trajectory prediction based on BLS
+
+§ IV. EXPERIMENTS AND ANALYSES
+
+§ A. DATA PREPARATION
+
+The raw AIS data covers global waters and amounts to approximately 72 GB, comprising about 1.3 billion AIS records. This vast dataset was imported into a database for storage and retrieval. To simplify the data processing, the Dalian Port waters were selected as the study area, resulting in the extraction of 4.08 million AIS records. From this subset, 389 AIS records of a specific ship from October 1, 2016, were further selected for detailed analysis.
+
+Given the characteristics of AIS data, clustering techniques were applied to the raw AIS data to enhance the fitting process. The main idea is to cluster the AIS data into different groups and then fit the data within each cluster separately, which helps in capturing the patterns more effectively. The analysis focuses on latitude and longitude data, and the comparison schemes are divided into three categories.
+
+A. Unprocessed raw data: The AIS data is used in its original, unmodified form. This raw data includes the direct measurements from the AIS system, such as timestamp, latitude, longitude, speed, and course. This serves as a baseline for comparison, allowing for the analysis of how processing techniques affect the data fitting and modeling performance.
+
+B. First-order difference data: The data is transformed using the first-order time difference described in Section III.C, which calculates the difference between consecutive data points to emphasize changes over time.
+
+C. Second-order difference data: The data is obtained by applying the differencing of Section III.C once more to the first-order difference data, yielding second-order differences.
+
+These three data categories (A, B, and C) allow for a comprehensive analysis of the AIS data, with each category providing different insights into the movement patterns of vessels.
+
+§ B. EVALUATION INDEX
+
+The root mean square error (RMSE) is a widely used measure of the differences between the values predicted by a model and the actual observed values. It corresponds to the standard deviation of the residuals (prediction errors), which helps in understanding how concentrated the data is around the best-fit line. The RMSE formula is expressed as Eq. (15)
+
+$$
+\text{ RMSE } = \sqrt{\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{\left( {y}_{i} - {y}_{i}^{\prime }\right) }^{2}} \tag{15}
+$$
+
+where $n$ is the number of observations, ${y}_{i}$ is the actual observed value, and ${y}_{i}^{\prime }$ is the predicted value by the model.
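
For reference, this metric is a one-liner (assuming NumPy):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between observed and predicted values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```

A perfect prediction yields 0, and larger deviations are penalized quadratically before the square root is taken.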
+
+§ C. PREDICTION RESULTS
+
+§ (1) DATA OF CATEGORY A
+
+As shown in Figure 6, both the linear model (LM) and the broad learning system (BLS) exhibit a decreasing trend in RMSE as the number of clusters increases. This indicates that both models improve in accuracy with more refined clustering.
+
+However, the RMSE of the BLS model is consistently lower than that of the LM model across all cluster counts, demonstrating that the BLS model achieves a higher fitting accuracy compared to the LM model.
+
+
+Fig. 6. Results comparison in category A
+
+§ (2) DATA OF CATEGORY B
+
+As shown in Figure 7, both the LM and BLS models show a consistent downward trend in RMSE as the number of clusters increases. This indicates that increasing the number of clusters improves the fitting accuracy for both models.
+
+However, for every $K$ value, the RMSE of the BLS model is consistently lower than that of the LM model, indicating that the BLS model provides higher fitting accuracy.
+
+
+Fig. 7. Results comparison in category B
+
+§ (3) DATA OF CATEGORY C
+
+As shown in Figure 8, both the LM and BLS models exhibit a consistent downward trend in RMSE as the number of clusters increases, indicating improved fitting accuracy with finer clustering.
+
+However, for each $K$ value, the RMSE of the BLS model is consistently lower than that of the LM model, demonstrating that the BLS model achieves higher fitting accuracy.
+
+
+Fig. 8. Results comparison in category $\mathrm{C}$
+
+§ V. CONCLUSION
+
+In this paper, we developed a BLS-based trajectory prediction model and applied the KM clustering algorithm to segment the trajectories, processing the data within each cluster separately. The segmented LM and BLS models were then applied to the case AIS data of Dalian Port. Using latitude and longitude data, three comparison scenarios were established: raw data, first-order difference data, and second-order difference data. The experimental results show that, compared with the segmented LM method, BLS significantly improves accuracy, exhibits better stability, and runs faster when processing AIS data. The comparative analysis of trajectory prediction methods using BLS provides a foundation for establishing predictive models for large-scale AIS trajectories in the future. It also offers a new predictive approach for maritime authorities to monitor maritime traffic, demonstrating significant exploratory value for practical applications.
+
+§ ACKNOWLEDGMENT
+
+This work was supported in part by the National Natural Science Foundation of China (grant nos. 52131101 and 51939001), the Liao Ning Revitalization Talents Program (grant no. XLYC1807046), and the Science and Technology Fund for Distinguished Young Scholars of Dalian (grant no. 2021RJ08).
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/kzbTcOzvFr/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/kzbTcOzvFr/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..dc57c31315aaa1cf176033252d5ac6c0c04ff295
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/kzbTcOzvFr/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,267 @@
+# Distributed Energy Management for Ship-Integrated Energy System Considering Economic and Environmental Benefits
+
+${1}^{\text{st }}$ Yuxin Zhang
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+liam_zhang@dlmu.edu.cn
+
+${2}^{\text{nd }}$ Qihe Shan
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+tengfei@dlmu.edu.cn
+
+${3}^{\text{rd }}$ Haoran Liu
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+lhr6@dlmu.edu.cn
+
+${4}^{\text{th }}$ Tieshan Li
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China Chengdu, China
+
+litieshan073@uestc.edu.cn
+
+Abstract-To decrease the dependency on fuel-based resources in the shipping industry, the energy management problem is analyzed in this paper. First, the development of shipboard energy systems is reviewed, from the radiation pattern, ring pattern, and two-end pattern to the ship-integrated energy system. Moreover, to ensure secure sailing, a multi-objective energy management model is established that considers both economic and environmental benefits, together with the requirements of supply-demand balance, velocity, voltage security, and so on. Then, a distributed energy management strategy based on the ADMM algorithm is proposed. Finally, simulation results on a 5-node test system prove the effectiveness of the constructed energy management model and the distributed algorithm.
+
+Index Terms-ship-integrated energy system, distributed optimization, energy management
+
+## I. INTRODUCTION
+
+With the deepening of economic and manufacturing cooperation among countries, we have gradually entered the era of globalization. As a major channel for the exchange of capital, goods, technology, and services among countries and territories, international trade has seen its export value rise steadily since 1950. Because of its low cost per unit of cargo delivered, wide route coverage, and other characteristics, the shipping industry has undertaken most of the global bulk cargo transport and has gradually become the most important mode of transport for trade between countries and for supply chain operations between enterprises [1], [2]. As shown in Figure 1, between 1980 and 2022 the overall capacity of the shipping industry continued to rise, with tanker, container ship, and cruise ship capacity increasing significantly, while general cargo capacity slightly decreased, affected by the epidemic and other factors [3]. By 2022, there were 10 ports in the world with a throughput of more than 15 million TEU (twenty-foot equivalent units); in order of throughput from largest to smallest, they are Shanghai (CNSHA), Singapore (SGSGP), Ningbo-Zhoushan (CNZOS), Shenzhen (CNSZN), Qingdao (CNQIN), Guangzhou (CNGUA), Busan (CNBUS), Tianjin (CNTJN), Los Angeles/Long Beach (USLSA), and Hong Kong (HKHKG).
+
+However, the booming international trade and the revival of the shipping industry depend on huge fossil energy consumption, accompanied by the emission of various greenhouse gases (GHG) containing nitrogen and sulphur, which aggravate global warming and the melting of glaciers and run counter to the concept of ecologically sustainable development [4]. In 2022, the transport sector accounted for approximately ${20}\%$ of global greenhouse gas emissions, second only to the electricity sector. As a necessary guarantee for international trade, international shipping, international aviation, and international rail account for ${58.8}\%$, ${35.3}\%$, and ${5.9}\%$ of total international trade transport emissions, respectively. Carbon dioxide, the main component of greenhouse gas emissions from shipping, accounts for more than 90 per cent of the total, and its emissions have been on a sharp upward trend overall from 1970 until 2021. In 2021 alone, the shipping industry emitted about 700 million tonnes of carbon dioxide into the atmosphere, an increase of about $5\%$ over the previous year. According to the International Maritime Organization (IMO), ${\mathrm{{CO}}}_{2}$ emissions from the shipping industry have doubled since 1990 and reached a staggering 701.9 million tonnes in 2017, as shown in Figure 2. If timely and effective improvement measures are not taken, the total amount of carbon dioxide generated by the shipping industry from fossil energy consumption will rapidly grow to 2.50-3.65 billion tonnes by 2050, accounting for about ${18}\%$ of total global carbon emissions. To address the contradiction between the development of the maritime economy and the high level of carbon pollution from shipping, the IMO and its subsidiary body, the Marine Environment Protection Committee (MEPC), have established a number of regulations and strategic targets since 1997 to reduce the total carbon emissions of the global shipping industry, as shown in Figure 3. In 2023, IMO member states unanimously agreed to adopt strategic carbon reduction targets that are expected to reduce emissions from international shipping by at least ${20}\%$ and ${70}\%$ relative to 2008 levels by 2030 and 2040, respectively, and to achieve zero GHG emissions by 2050. To this end, national classification societies and shipbuilding enterprises have paid extensive attention to the application of new energy in ship energy systems and invested in the research and development of onboard new-energy equipment, thus promoting the innovation of ship energy systems.
+
+---
+
+This paper is supported by the National Natural Science Foundation of China (under Grants 52371360, 52201407, 51939001, 61976033) and the China Scholarship Council Program (under Grant 202406570011). Corresponding author: Fei Teng
+
+---
+
+
+
+Fig. 1. Trends of Export Trade and International Maritime Trade
+
+According to the type of ship and its operating characteristics, the connections of energy devices in current ship energy systems, combining new energy devices and traditional fossil fuel devices, can be broadly classified into three types: the radiation pattern for small and medium-sized ships, the ring pattern for large ships, and the two-end pattern, as shown in Fig. 4. Among them, the radiation pattern is the most common, relies on relatively mature technology, and is widely used. Compared with the radiation pattern, the ring pattern and the two-end pattern are equipped with a large number of non-dedicated energy supply devices and have relatively complex structures; they are mostly used in the energy networks of large ships [5].
+
+However, with the continuous progress of renewable technology, ship technology, and information technology, more and more non-dedicated energy devices and intelligent devices are integrated into the ship, and the energy system presents a fully distributed flat structure coupling a variety of heterogeneous energy sources. The cruise ship Ecoship, integrating diesel, natural gas, and photovoltaic panels, reduces carbon emissions by ${40}\%$ compared with the average 60,000-tonne-class large cruise ship. The ship-integrated energy system (S-IES) is a typical power-heating coupled energy system, which takes the energy management system as its core and the heterogeneous energy conversion centre as its hub, and uses both traditional and renewable energy devices. It can reduce the dependence on traditional fossil fuels for sailing and improve the efficiency of energy utilization [6], providing continuous and high-quality energy for normal operation [7]. Therefore, the S-IES and its energy management problem are gradually gaining wide attention in the related fields.
+
+## II. SHIP-INTEGRATED ENERGY SYSTEM AND ITS ENERGY MANAGEMENT
+
+## A. Structure and Framework for S-IES
+
+As the core unit ensuring the secure and reliable navigation of ships, the S-IES provides continuous and high-quality energy for the energy system, the communication and navigation system, and the mechanical towing system. The ship energy management system monitors and collects data from shipboard load-side equipment in real time, gathers information from the communication and navigation systems as well as wind, wave, and other climate perturbations to predict the ship's load demand, sends the load demand value to the energy supply side, and solves for the optimal energy management scheme with an intelligent algorithm to realize secure sailing over a long-distance voyage. The structure of the system is shown in Fig. 5.
+
+Energy devices in the S-IES can be roughly divided into five categories: power-only devices (PO), heating-only devices (HO), combined heating and power devices (CHP), energy storage devices (ESD), and load devices (LD).
+
+
+
+Fig. 3. Regulations development for GHG Emission Reduction
+
+For improving the energy supply performance of the S-IES and ensuring that the energy supply side provides continuous and high-quality energy for the safe navigation of the ship, it is necessary to carry out an in-depth analysis of the energy management problems of the S-IES. The energy management system of the S-IES is based on multi-agent system theory, which realizes the bidirectional interaction of information and energy among the shipboard distributed energy devices and then formulates the optimal strategy for the distributed energy management of the S-IES. Ship energy efficiency monitoring technology, which is key to ensuring accurate analysis of S-IES energy management issues, provides the necessary guarantee for maximizing the economic and environmental benefits of the S-IES and has received extensive attention worldwide, as shown in Table I. Specifically, the EFMS designed by Ascenz Marorka includes fuel pilferage prevention measures and reporting capabilities for tracking fuel usage and analyzing data over time, which plays a vital role in saving operational costs through analysis of the collected information. For improving the operational fuel efficiency of ships, Germanischer Lloyd proposes the ECO-Assistant, which allows mariners to sail at the optimal trim during a voyage by computing the optimal trim angle for different sailing conditions; moreover, the ECO-Assistant relies on existing shipborne devices without requiring extra modifications to the ship. Using routing algorithms, VVOS optimizes each route for on-time arrival while minimizing fuel consumption and avoiding weather damage; additionally, the Electronic Chart Display and Information System (ECDIS) and Integrated Navigation System (INS) can receive the voyage scheme planned by VVOS to realize secure checking and execution. NAPA-VO, a software product designed by NAPA, improves operational efficiency; the routes it creates account for different fuel types and the corresponding features under emission control areas. With ABB Dynafin, a novel propulsion concept for ships is proposed that reduces greenhouse emissions by at least half; moreover, compared with traditional shaftline ships, the new technology is set to decrease propulsion energy consumption by up to ${22}\%$. ISSE can automatically calculate the carbon intensity index and the related data sampling, realizing the CII rating automatically and providing guarantees for sailing voyages.
+
+TABLE I
+
+ENERGY CONSUMPTION DETECTION TECHNOLOGY AROUND THE WORLD
+
+| Institution | Product | Functions and Characteristics |
+| --- | --- | --- |
+| Ascenz Marorka | Electronic Fuel Monitoring System (EFMS) | Monitors real-time data on fuel usage. Develops energy management plans to reduce fuel consumption. |
+| Germanischer Lloyd | ECO Assistant | Calculates the optimum trim angle without extra devices. Improves the operational fuel efficiency of ships. |
+| Jeppesen Marine | Vessel and Voyage Optimization Services (VVOS) | Optimizes routes for on-time arrival. Voyage plans can be received by ECDIS and INS. |
+| NAPA | NAPA Voyage Optimization (NAPA-VO) | Decreases operating costs through an improved schedule adjustment procedure. Increases efficiency in altering voyage plans. |
+| ABB | ABB Dynafin | Cuts annual greenhouse emissions by at least ${50}\%$ . Decreases the energy consumed by propulsion systems. |
+| Hangzhou Yagena Technology Co., LTD. | Intelligent Ship System and Equipment (ISSE) | Calculates the carbon intensity index. Ranks the carbon emission level. |
+
+
+
+Fig. 4. (a) Structure of Radiation Pattern for S-IES, (b) Structure of Ring Pattern for S-IES, (c) Structure of Two-End Pattern for S-IES.
+
+
+
+Fig. 5. Structure for S-IES
+
+At present, with the rising awareness of sustainable development, the economic benefit ${F}_{\mathrm{{EC}}}$ is no longer the only factor to be considered in the energy management of S-IES. For this reason, the environmental ${F}_{\mathrm{{CA}}}$ and social ${F}_{\mathrm{{SC}}}$ benefits are considered in this paper as well. Moreover, to ensure high-security sailing and provide high-quality energy to the ship, the utilization of energy is another major factor. Therefore, the objective function of the energy management problem for S-IES is constructed as,
+
+$$
+\min \left\{ {{F}_{\mathrm{{EC}}},{F}_{\mathrm{{CA}}},{F}_{\mathrm{{SC}}}}\right\} \text{.} \tag{1}
+$$
+
+Note that the above objective function represents a compromise between economic benefits and carbon emissions, rather than taking the smallest of ${F}_{\mathrm{{EC}}},{F}_{\mathrm{{CA}}}$ and ${F}_{\mathrm{{SC}}}$ as the objective function for S-IES energy management. In addition, physical constraints need to be imposed on the ship and its energy equipment to achieve secure performance during the sailing voyage. Then, the energy management model of S-IES can be established as below.
+
+$$
+\min \left\{ {{F}_{\mathrm{{EC}}},{F}_{\mathrm{{CA}}},{F}_{\mathrm{{SC}}}}\right\} \text{,} \tag{2}
+$$
+
+$$
+\text{s.t.}A\left( X\right) = 0, B\left( X\right) \leq 0\text{,}
+$$
+
+where $X$ is the decision variable of the energy management problem, representing the energy outputs, and $A\left( X\right)$ and $B\left( X\right)$ are the equality and inequality constraints, respectively. Specifically, $A\left( X\right)$ contains the supply-demand balance constraints and the voltage security constraints, while $B\left( X\right)$ includes the energy output constraints and the velocity constraints, which are shown below.
+
+1) Supply-demand balance constraints:
+
+$$
+\sum \left( {{p}_{i}^{\mathrm{{fu}}} + {p}_{i}^{\mathrm{{re}}} + {p}_{i}^{\mathrm{{chp}}} - {p}_{i}^{\mathrm{{ld}}}}\right) = 0 \tag{3}
+$$
+
+$$
+\sum \left( {{h}_{i}^{\mathrm{{fu}}} + {h}_{i}^{\mathrm{{re}}} + {h}_{i}^{\mathrm{{chp}}} - {h}_{i}^{\mathrm{{ld}}}}\right) = 0 \tag{4}
+$$
+
+where $p$ and $h$ are the power and heating outputs, respectively, and $i, l$ denote node indices of S-IES. The superscripts fu, re, chp, and ld indicate fuel-based generators, renewable-based generators, combined heating and power devices, and loads, respectively.
+
+2) Voltage security constraints:
+
+$$
+\sum {\alpha }_{i, l}^{\max }{p}_{i} + \Delta {V}_{l}^{\max } = {\pi }_{l}^{\max } \tag{5}
+$$
+
+$$
+\sum {\alpha }_{i, l}^{\min }{p}_{i} + \Delta {V}_{l}^{\min } = {\pi }_{l}^{\min } \tag{6}
+$$
+
+where $\alpha$ denotes the voltage sensitivity coefficients, which can be calculated from a given transmission topology of S-IES. ${\Delta V}$ is the voltage excursion index, which indicates the voltage security margin. $\pi$ is a constant influenced by the reactive power. The superscripts min and max denote the minimal and maximal values of the corresponding variables, respectively. Specifically, (5) and (6) present the upper and lower bounds of the voltage security boundaries, respectively.
+
+3) Energy output constraints:
+
+$$
+{p}_{i}^{\min } \leq {p}_{i} \leq {p}_{i}^{\max },{h}_{i}^{\min } \leq {h}_{i} \leq {h}_{i}^{\max } \tag{7}
+$$
+
+4) Velocity constraints:
+
+$$
+{v}^{\min } \leq v \leq {v}^{\max } \tag{8}
+$$
+
+where $v$ describes the velocity of the ship.
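As a concrete illustration of constraints (3)-(4), (7), and (8), the sketch below checks a candidate operating point for feasibility. All device outputs, load values, and velocity bounds are illustrative assumptions, not data from the paper.

```python
# Hedged sketch: feasibility checks for the S-IES constraints.
# Numbers are invented for illustration only.

def balanced(outputs, loads, tol=1e-6):
    """Supply-demand balance, as in (3)-(4): generation minus load ~ 0."""
    return abs(sum(outputs) - sum(loads)) <= tol

def within_bounds(x, x_min, x_max):
    """Box constraint, as in (7) for p, h and (8) for velocity v."""
    return x_min <= x <= x_max

# Illustrative power outputs (MW) of fu, re, chp devices and the load demand.
p_fu, p_re, p_chp = [2.0, 1.0], [1.5], [0.0]
p_ld = [4.5]
assert balanced(p_fu + p_re + p_chp, p_ld)   # constraint (3) holds

# Illustrative ship velocity check against (8), in knots.
v, v_min, v_max = 14.0, 10.0, 22.0
assert within_bounds(v, v_min, v_max)
```

An infeasible point simply fails one of these predicates, which is how a dispatch candidate would be rejected before evaluating the objective.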
+
+## B. Communication Topology for S-IES
+
+1) Centralized Energy Management Strategy: It relies on a centralized controller, which has significant advantages in terms of calculation rate and accuracy. The controller collects real-time information on the operation of all energy devices in the S-IES and calculates the optimal energy management schemes based on the relevant data to complete the centralized control and management of the S-IES. However, the centralized strategy is prone to single points of failure and suffers from poor stability, limited scalability, and high investment costs.
+
+2) Decentralized Energy Management Strategy: According to the operating mechanism and working status of the energy equipment, the ship can be divided into multiple compartments, and the decentralized energy management strategy places the controllers in the compartments. Thus the control and management of the local energy system can be realized. Moreover, the decentralized strategy does not require real-time communication between the controllers in each cabin, and the energy equipment distributes the energy by itself, so the system response speed is fast and the scalability is strong. However, due to the lack of collaborative control between the controllers, the global optimization at the system level cannot be achieved. Therefore it is highly susceptible to external interference, which has a negative impact on secure navigation.
+
+However, the flat, distributed structure of the S-IES energy network means that the traditional centralized and decentralized energy management strategies are no longer applicable; a fully distributed energy management strategy is therefore required to search for the optimal solution and achieve the reliable operation of S-IES.
+
+3) Distributed Energy Management Strategy: It relies on the communication network to achieve real-time information exchange between energy devices within the S-IES, so as to achieve complementary interconnection of information between neighbouring energy devices. When a local energy device fails to operate normally, the controllers of each energy device can achieve local interconnections and send signals to the controllers of neighbouring energy devices to ensure safe and stable operation of the S-IES based on the distributed communication network.
+
+## C. Main Algorithm
+
+To simplify the multi-objective function of the energy management problem, the linear weighted-sum method is applied, and the energy management model can be reconstructed as below.
+
+$$
+\min \left\{ {{\beta }_{1} \cdot {F}_{\mathrm{{EC}}} + {\beta }_{2} \cdot {F}_{\mathrm{{CA}}} + {\beta }_{3} \cdot {F}_{\mathrm{{SC}}}}\right\} , \tag{9}
+$$
+
+$$
+\text{s.t.}A\left( X\right) = 0, B\left( X\right) \leq 0\text{,}
+$$
+
+where ${\beta }_{1},{\beta }_{2}$ and ${\beta }_{3}$ are constant weighting coefficients for ${F}_{\mathrm{{EC}}},{F}_{\mathrm{{CA}}}$ , and ${F}_{\mathrm{{SC}}}$ , respectively, and satisfy ${\beta }_{1} + {\beta }_{2} + {\beta }_{3} = 1$ .
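The weighted-sum scalarization in (9) can be sketched directly; the objective values and weights below are illustrative, not computed from the paper's model.

```python
# Hedged sketch of the linear weighted-sum scalarization in (9).

def weighted_objective(f_ec, f_ca, f_sc, betas):
    """Combine economic, carbon, and social objectives with weights
    beta_1 + beta_2 + beta_3 = 1, as required by the model."""
    b1, b2, b3 = betas
    assert abs(b1 + b2 + b3 - 1.0) < 1e-9, "weights must sum to one"
    return b1 * f_ec + b2 * f_ca + b3 * f_sc

# Illustrative objective values, weighting economics most heavily:
value = weighted_objective(100.0, 40.0, 20.0, (0.5, 0.3, 0.2))
print(value)  # 66.0
```

Varying the betas traces out different Pareto-optimal compromises between the three objectives, which is the standard behavior of the weighted-sum method.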
+
+Define the communication topology of the constructed S-IES as $G = \{ T, B, W\}$ , where $T, B$ , and $W$ are the node set, the edge set, and the connected weight set, respectively. Specifically, $T = \left\lbrack {{v}_{i} \mid i \in \Omega }\right\rbrack$ , where $\Omega$ denotes the energy device set, and $B \subseteq T \times T$ is the set of connected edges between nodes. $W = \left\lbrack {{w}_{i, l} \mid i, l \in \Omega }\right\rbrack$ , where ${w}_{i, l}$ is the connected weight between the $i$ th node and the $l$ th node. Considering the different relationships between the $i$ th node and the $l$ th node, ${w}_{i, l}$ can be calculated as below.
+
+$$
+{w}_{i, l} = \left\{ \begin{array}{l} 1/\left( {\left| {N}_{i}\right| + \left| {N}_{l}\right| + \varepsilon }\right) , l \in {N}_{i} \\ 1 - \mathop{\sum }\limits_{{l \in {N}_{i}}}1/\left( {\left| {N}_{i}\right| + \left| {N}_{l}\right| + \varepsilon }\right) , i = l \\ 0,\text{ otherwise } \end{array}\right. \tag{10}
+$$
+
+where $\varepsilon$ is a small positive constant. ${N}_{i}$ and $\left| {N}_{i}\right|$ are the neighbor set of the $i$ th node and its cardinality. Similarly, ${N}_{l}$ and $\left| {N}_{l}\right|$ are the neighbor set of the $l$ th node and its cardinality. When the connected weight between the $i$ th node and the $l$ th node equals 0, no information exchange can occur between the two nodes.
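The weight rule (10) can be implemented in a few lines. The sketch below builds the weight matrix for an assumed 5-node ring topology (the paper's test topology in Fig. 6 may differ) and verifies the key structural property: every row sums to one, and the matrix is symmetric, hence doubly stochastic, which is what consensus-based iterations require.

```python
# Hedged sketch of the connected-weight rule (10).
# `neighbors` maps each node to its neighbor set; `epsilon` is the small
# positive constant from the text.

def weight_matrix(neighbors, epsilon=0.01):
    nodes = sorted(neighbors)
    n = len(nodes)
    W = [[0.0] * n for _ in range(n)]
    for a, i in enumerate(nodes):
        for b, l in enumerate(nodes):
            if l in neighbors[i]:  # off-diagonal case: l is a neighbor of i
                W[a][b] = 1.0 / (len(neighbors[i]) + len(neighbors[l]) + epsilon)
        W[a][a] = 1.0 - sum(W[a])  # diagonal case closes each row to 1
    return W

# Illustrative 5-node ring as a small S-IES communication topology:
nbrs = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
W = weight_matrix(nbrs)
assert all(abs(sum(row) - 1.0) < 1e-9 for row in W)          # row-stochastic
assert all(abs(W[a][b] - W[b][a]) < 1e-9
           for a in range(5) for b in range(5))              # symmetric
```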
+
+Owing to the high calculation speed, accuracy, and reliability of the alternating direction method of multipliers (ADMM), a fully distributed energy management strategy for S-IES is designed in this paper. It realizes the bidirectional transmission of energy and information, which reduces the consumption of communication resources and suits the flat, distributed structure of S-IES. The main algorithm is designed as below.
+
+1) Iteration of energy output:
+
+$$
+{X}_{i, k + 1} \in \arg \min \left\{ {f\left( X\right) + \psi + {\lambda }_{i, k}^{\mathrm{T}}{W}^{\mathrm{T}}A{X}_{i, k}}\right\} , \tag{11}
+$$
+
+where $\lambda$ is the incremental cost, $A$ describes the physical relationship among the nodes of S-IES, which is defined by the physical constraints, and $k$ is the iteration index.
+
+2) Iteration of output error:
+
+$$
+{d}_{i, k + 1} = W{d}_{i, k} + A\left( {{X}_{i, k + 1} - {X}_{i, k}}\right) , \tag{12}
+$$
+
+where $d$ is the output error of S-IES.
+
+3) Iteration of incremental cost:
+
+$$
+{\lambda }_{i, k + 1} = W{\lambda }_{i, k} + \tau {d}_{i, k + 1}. \tag{13}
+$$
+
+Repeat the iterations of energy output, output error, and incremental cost, where $\tau$ is a positive step size, until each variable converges to within a preset threshold.
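The three iterations above can be illustrated on a toy problem. This is not the paper's exact algorithm: the minimization in (11) is replaced by its closed-form solution for quadratic costs, and the two-node topology, cost coefficients, demands, and step size are all illustrative assumptions. The sketch shows the characteristic behavior: the incremental costs reach consensus while the mismatch-tracking term drives total output to total demand.

```python
# Hedged toy version of iterations (11)-(13): incremental-cost consensus
# with supply-demand mismatch tracking. All numbers are invented.

a = [0.05, 0.08]               # quadratic costs C_i(p) = a_i*p^2 + b_i*p
b = [1.0, 1.2]
D = [3.0, 2.0]                 # local demand at each node (total 5.0 MW)
W = [[0.5, 0.5], [0.5, 0.5]]   # connected weights (2 fully connected nodes)
tau = 0.05                     # step size for the incremental-cost update

lam = [0.0, 0.0]
p = [0.0, 0.0]
d = [D[0] - p[0], D[1] - p[1]]  # initial local supply-demand mismatch

for _ in range(2000):
    # (11): each node picks the output minimizing its cost at price lambda_i
    p_new = [(lam[i] - b[i]) / (2 * a[i]) for i in range(2)]
    # (12): mismatch update mixes neighbors and tracks the output change
    d = [sum(W[i][l] * d[l] for l in range(2)) - (p_new[i] - p[i])
         for i in range(2)]
    # (13): incremental-cost consensus plus mismatch feedback
    lam = [sum(W[i][l] * lam[l] for l in range(2)) + tau * d[i]
           for i in range(2)]
    p = p_new

assert abs(p[0] + p[1] - sum(D)) < 1e-3   # supply meets demand
assert abs(lam[0] - lam[1]) < 1e-6        # incremental costs agree
```

At the fixed point both nodes operate at the same incremental cost, which for these coefficients is about 1.3846, with the cheaper unit carrying most of the load.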
+
+## III. Simulation Results
+
+To verify the effectiveness of the designed algorithm, a 5-node S-IES is utilized as a test system, and the detailed physical/communication topology, including the connected weight parameters, is shown in Fig. 6. Moreover, the test system contains 2 fu, 1 re, and 2 ld nodes, and the operational coefficients and carbon emission parameters are presented in (14). Assume the power and heating load demands are 4.5 MW and 2.4 MW, respectively.
+
+$$
+{C}_{1}^{\mathrm{{fu}}} = {0.040} * {h}^{2} + {25} * h + {99}
+$$
+
+$$
+{C}_{1}^{\mathrm{{re}}} = {0.043} * {p}^{2} + {22} * p + {80}
+$$
+
+$$
+{C}_{2}^{\mathrm{{fu}}} = {0.035} * {p}^{2} + {18} * p + {120}
+$$
+
+$$
+{C}_{1}^{\mathrm{{ld}}} = - {0.013} * {p}^{2} + {46} * p + {30} \tag{14}
+$$
+
+$$
+{C}_{2}^{\mathrm{{ld}}} = - {0.015} * {h}^{2} + {70} * h + {40}
+$$
+
+$$
+{E}_{1}^{\mathrm{{fu}}} = {0.0648} * {h}^{2} - {2.7} * h + {41}
+$$
+
+$$
+{E}_{2}^{\mathrm{{fu}}} = {0.0520} * {p}^{2} - {2.3} * p + {50}
+$$
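All curves in (14) are quadratics of the form $f(x) = ax^2 + bx + c$, so each unit's incremental cost is simply the derivative $2ax + b$. The sketch below evaluates the first fuel unit's cost and incremental cost at the stated heating demand; the coefficients are copied from (14), while the helper functions are only an illustration.

```python
# Hedged sketch: evaluating the quadratic cost and emission curves of (14).

def quad(a, b, c):
    """Build a quadratic f(x) = a*x^2 + b*x + c."""
    return lambda x: a * x**2 + b * x + c

def incremental(a, b):
    """Derivative of the quadratic: the unit's incremental cost 2*a*x + b."""
    return lambda x: 2 * a * x + b

C1_fu = quad(0.040, 25, 99)      # operating cost of fuel unit 1 (output h)
E1_fu = quad(0.0648, -2.7, 41)   # its carbon-emission curve

h = 2.4                          # heating demand of the test case, in MW
print(C1_fu(h))                  # 159.2304
print(incremental(0.040, 25)(h)) # 25.192
```

Equalizing these incremental costs across units, subject to the bounds in (7), is exactly what the converged incremental cost in Fig. 7 represents.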
+
+
+
+Fig. 6. Structure for the test S-IES
+
+The trajectories of the incremental costs of the 5 nodes in the test S-IES are depicted in Fig. 7. It can be seen that the incremental cost converges within 20 iteration steps, and the final incremental cost is 0.3066 (p.u.). Moreover, the optimized energy management solution is the same as the solution obtained by the centralized strategy, which verifies the accuracy of the designed algorithm.
+
+
+
+Fig. 7. Trajectories of incremental costs
+
+## IV. CONCLUSION
+
+In this paper, the development of S-IES has been analyzed. A multi-objective energy management model for S-IES has been constructed considering economic benefits and carbon emissions, and the sailing requirements of the ship have been incorporated into the model. Additionally, to search for the optimal solution in a distributed manner, an energy management algorithm has been proposed based on ADMM theory. Simulation results prove the accuracy and effectiveness of the designed energy management model and the distributed algorithm.
+
+## REFERENCES
+
+[1] F. Teng, Y. Zhang, T. Yang, T. Li, Y. Xiao and Y. Li, "Distributed Optimal Energy Management for We-Energy Considering Operation Security," IEEE Transactions on Network Science and Engineering, vol. 11, no. 1, pp. 225-235, Jan.-Feb. 2024, doi: 10.1109/TNSE.2023.3295079.
+
+[2] Y. Zhang, Y. Xiao, F. Teng and T. Li, "Distributed Energy Management Method With EEOI Limitation for the Ship-Integrated Energy System," IEEE Systems Journal, vol. 18, no. 2, pp. 1332-1343, June 2024, doi: 10.1109/JSYST.2024.3361709.
+
+[3] F. Teng, Z. Ban, T. Li, Q. Sun and Y. Li, "A Privacy-Preserving Distributed Economic Dispatch Method for Integrated Port Micro-grid and Computing Power Network," IEEE Transactions on Industrial Informatics, vol. 20, no. 8, pp. 10103-10112, Aug. 2024, doi: 10.1109/TII.2024.3393569.
+
+[4] M. Rafiei, J. Boudjadar and M. -H. Khooban, "Energy Management of a Zero-Emission Ferry Boat With a Fuel-Cell-Based Hybrid Energy System: Feasibility Assessment," IEEE Transactions on Industrial Electronics, vol. 68, no. 2, pp. 1739-1748, Feb. 2021, doi: 10.1109/TIE.2020.2992005.
+
+[5] F. D. Kanellos, G. J. Tsekouras and N. D. Hatziargyriou, "Optimal Demand-Side Management and Power Generation Scheduling in an All-Electric Ship," IEEE Transactions on Sustainable Energy, vol. 5, no. 4, pp. 1166-1175, Oct. 2014, doi: 10.1109/TSTE.2014.2336973.
+
+[6] Y. Li, T. Li, H. Zhang, X. Xie and Q. Sun, "Distributed Resilient Double-Gradient-Descent Based Energy Management Strategy for Multi-Energy System Under DoS Attacks," IEEE Transactions on Network Science and Engineering, vol. 9, no. 4, pp. 2301-2316, July-Aug. 2022, doi: 10.1109/TNSE.2022.3162669.
+
+[7] F. D. Kanellos, "Optimal Power Management With GHG Emissions Limitation in All-Electric Ship Power Systems Comprising Energy Storage Systems," IEEE Transactions on Power Systems, vol. 29, no. 1, pp. 330-339, Jan. 2014, doi: 10.1109/TPWRS.2013.2280064.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/kzbTcOzvFr/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/kzbTcOzvFr/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..453795676fe04f5602d491519972f40b0faa26a4
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/kzbTcOzvFr/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,269 @@
+§ DISTRIBUTED ENERGY MANAGEMENT FOR SHIP-INTEGRATED ENERGY SYSTEM CONSIDERING ECONOMIC AND ENVIRONMENTAL BENEFITS
+
+${1}^{\text{ st }}$ Yuxin Zhang
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+liam_zhang@dlmu.edu.cn
+
+${2}^{\text{ nd }}$ Qihe Shan
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+tengfei@dlmu.edu.cn
+
+${3}^{\text{ rd }}$ Haoran Liu
+
+Navigation College
+
+Dalian Maritime University
+
+Dalian, China
+
+lhr6@dlmu.edu.cn
+
+${4}^{\text{ th }}$ Tieshan Li
+
+School of Automation Engineering
+
+University of Electronic Science and Technology of China Chengdu, China
+
+litieshan073@uestc.edu.cn
+
+Abstract-To decrease the dependency on fuel-based resources in the shipping industry, the energy management problem is analyzed in this paper. Firstly, the development of the shipboard energy system is reviewed, from the radiation pattern, the ring pattern, and the two-end pattern to the ship-integrated energy system. Moreover, to ensure secure sailing, a multi-objective energy management model is established with consideration of economic and environmental benefits. Meanwhile, requirements such as supply-demand balance, velocity, and voltage security are also considered in the energy management. Then, a distributed energy management strategy based on the ADMM algorithm is proposed. Finally, simulation results on a 5-node test system prove the effectiveness of the constructed energy management model and the distributed algorithm.
+
+Index Terms-ship-integrated energy system, distributed optimization, energy management
+
+§ I. INTRODUCTION
+
+With the deepening of economic and manufacturing cooperation among countries, we have gradually entered the era of globalization. As a major channel for the exchange of capital, goods, technology, and services among different countries and territories, the export value of international trade has been on the rise since 1950. Because of its low cost per unit of cargo delivered, wide route coverage, and other characteristics, the shipping industry has undertaken most of the global bulk cargo transport and has gradually become the most important mode of transport for trade exchanges between countries and supply chain operations between enterprises [1], [2]. As shown in Figure 1, between 1980 and 2022 the overall capacity of the shipping industry continued to rise, with the capacities of tankers, container ships, and cruise ships increasing significantly, while general cargo capacity slightly decreased, affected by the epidemic and other factors [3]. By 2022, there were 10 ports in the world with a throughput of more than 15 million TEU (twenty-foot equivalent units); in descending order of throughput, they are Shanghai (CNSHA), Singapore (SGSGP), Ningbo-Zhoushan (CNZOS), Shenzhen (CNSZN), Qingdao (CNQIN), Guangzhou (CNGUA), Busan (CNBUS), Tianjin (CNTJN), Los Angeles/Long Beach (USLSA), and Hong Kong (HKHKG).
+
+However, the booming international trade and the revival of the shipping industry depend on huge fossil energy consumption, accompanied by emissions of greenhouse gases (GHG) and pollutants such as nitrogen and sulphur oxides, which aggravate global warming and the melting of glaciers and run counter to the concept of ecologically sustainable development [4]. In 2022, the transport sector accounted for approximately ${20}\%$ of global greenhouse gas emissions, second only to the electricity sector. As a necessary guarantee for international trade transactions, international shipping, international aviation, and international rail account for ${58.8}\% ,{35.3}\%$ and ${5.9}\%$ of total international trade transport emissions, respectively. Carbon dioxide, the main component of greenhouse gas emissions from shipping, accounts for more than 90 per cent of the total, and its total emissions followed a sharp overall upward trend from 1970 to 2021. In 2021 alone, the shipping industry emitted about 700 million tonnes of carbon dioxide into the atmosphere, an increase of about $5\%$ over the previous year. According to the International Maritime Organization (IMO), ${\mathrm{{CO}}}_{2}$ emissions from the shipping industry have doubled since 1990, reaching a staggering 701.9 million tonnes in 2017, as shown in Figure 2. If timely and effective improvement measures are not taken, the total amount of carbon dioxide generated by the shipping industry from fossil energy consumption will rapidly grow to 2.50-3.65 billion tonnes by 2050, accounting for about ${18}\%$ of total global carbon emissions.
+To address the contradiction between the development of the maritime economy and the high level of carbon pollution from shipping, IMO and its subsidiary body, the Maritime Environment Protection Committee (MEPC), have established a number of regulations and strategic targets since 1997 to reduce the total carbon emissions of the global shipping industry, as shown in Figure 3. In 2023, IMO member states unanimously agreed to adopt strategic carbon reduction targets that are expected to reduce emissions from international shipping by at least ${20}\%$ and ${70}\%$ relative to 2008 levels by 2030 and 2040, respectively, and to achieve zero GHG emissions by 2050. To this end, national classification societies and shipbuilding enterprises have paid extensive attention to the application of new energy in ship energy systems and invested in the research and development of onboard new energy equipment, thus promoting the innovation of ship energy systems.
+
+This paper is supported by the National Natural Science Foundation of China (under Grants 52371360, 52201407, 51939001, 61976033) and the China Scholarship Council Program (under Grant 202406570011). Corresponding author: Fei Teng
+
+
+Fig. 1. Trends of Export Trade and International Maritime Trade
+
+According to the type of ship and its operating characteristics, the connections of energy devices in current ship energy systems, which combine new energy devices and traditional fossil-fuel devices, can be broadly classified into three types: the radiation pattern for small and medium-sized ships, the ring pattern for large ships, and the two-end pattern, as shown in Fig. 4. Among them, the radiation pattern is the most common and technologically mature, and is widely used. Compared with the radiation pattern, the ring pattern and the two-end pattern are equipped with a large amount of non-professional energy supply equipment, have relatively complex structures, and are mostly used in the energy networks of large ships [5].
+
+However, with the continuous progress of renewable technology, ship technology, and information technology, more and more non-professional energy devices and intelligent devices are integrated into ships, and the energy system presents a fully distributed flat structure coupled with a variety of heterogeneous energy sources. The cruise ship Ecoship, integrating diesel, natural gas, and photovoltaic panels, reduces carbon emissions by ${40}\%$ compared with the average 60,000-tonne-class large cruise ship. The ship-integrated energy system (S-IES) is a typical power-heating coupled energy system, which takes the energy management system as its core and the heterogeneous energy conversion centre as its hub, and uses both traditional and renewable energy devices. It can reduce the dependence on traditional fossil fuels for sailing and improve the efficiency of energy utilization [6], providing continuous and high-quality energy for normal operation [7]. Therefore, S-IES and its energy management problem are gradually gaining wide attention in the related fields.
+
+§ II. SHIP-INTEGRATED ENERGY SYSTEM AND ITS ENERGY MANAGEMENT
+
+§ A. STRUCTURE AND FRAMEWORK FOR S-IES
+
+As a core unit ensuring the secure and reliable navigation of ships, S-IES provides continuous and high-quality energy for the energy system, the communication and navigation system, and the mechanical towing system. The ship energy management system monitors and collects data from the shipboard load-side equipment in real time, gathers information from the communication and navigation systems as well as wind, wave, and other climate disturbances to predict the shipload demand, sends the load demand value to the energy supply side, and solves for the optimal energy management scheme based on an intelligent algorithm to realize secure sailing on long-distance voyages. The structure of the system is shown in Fig. 5.
+
+Energy devices in S-IES can be roughly divided into five categories: power-only devices (PO), heating-only devices (HO), combined heating and power devices (CHP), energy storage devices (ESD), and load devices (LD).
+
+
+Fig. 3. Regulations development for GHG Emission Reduction
+
+To improve the energy supply performance of S-IES and ensure that the supply-side equipment provides continuous, high-quality energy for the safe navigation of the ship, an in-depth analysis of the energy management problems of S-IES is necessary. The energy management system of S-IES is based on multi-agent system theory, which realizes the bidirectional interaction of information and energy among the shipboard distributed energy devices and then formulates the optimal strategy for the distributed energy management of S-IES. Ship energy efficiency monitoring technology plays a key role in ensuring the accuracy of S-IES energy management analysis, provides the necessary guarantee for maximizing the economic and environmental benefits of S-IES, and has received extensive attention worldwide, as shown in Table I. Specifically, the EFMS designed by Ascenz Marorka includes fuel pilferage prevention measures and reporting capabilities for tracking fuel usage and analyzing data over time, which plays a vital role in saving operational costs. To improve the operational fuel efficiency of ships, Germanischer Lloyd proposes the ECO-Assistant, which allows mariners to sail at optimal trim during a voyage by acquiring the optimal trim angle under different sailing conditions; moreover, the ECO-Assistant relies on existing shipborne devices without requiring extra modifications to the ship. Using routing algorithms, VVOS optimizes each route for on-time arrival while minimizing fuel consumption and avoiding weather damage. Additionally, the Electronic Chart Display and Information System (ECDIS) and Integrated Navigation System (INS) can receive the voyage plan produced by VVOS to realize security checks and execution. NAPA-VO, software designed by NAPA, improves operational efficiency; the route created with NAPA-VO accounts for different fuel types and their corresponding features within emission control areas. ABB proposes a novel propulsion concept, Dynafin, which reduces greenhouse emissions by at least half; moreover, compared with traditional shaftline ships, the new technology is expected to decrease propulsion energy consumption by up to ${22}\%$ . ISSE can automatically calculate the carbon intensity index and perform the related data sampling, which realizes the CII rating automatically and provides guarantees for sailing voyages.
+
+TABLE I
+
+ENERGY CONSUMPTION DETECTION TECHNOLOGY AROUND THE WORLD
+
+Institution | Product | Functions and Characteristics
+Ascenz Marorka | Electronic Fuel Monitoring System (EFMS) | Monitors real-time data on fuel usage. Develops energy management plans to reduce fuel consumption.
+Germanischer Lloyd | ECO Assistant | Calculates the optimum trim angle without extra devices. Improves the operational fuel efficiency of ships.
+Jeppesen Marine | Vessel and Voyage Optimization Services (VVOS) | Optimizes routes for on-time arrival. Voyage plans can be received by ECDIS and INS.
+NAPA | NAPA Voyage Optimization (NAPA-VO) | Decreases operating costs through an improved schedule adjustment procedure. Increases efficiency in altering voyage plans.
+ABB | ABB Dynafin | Cuts annual greenhouse emissions by at least ${50}\%$ . Decreases the energy consumed by propulsion systems.
+Hangzhou Yagena Technology Co., LTD. | Intelligent Ship System and Equipment (ISSE) | Calculates the carbon intensity index. Ranks the carbon emission level.
+
+
+Fig. 4. (a) Structure of Radiation Pattern for S-IES, (b) Structure of Ring Pattern for S-IES, (c) Structure of Two-End Pattern for S-IES.
+
+
+Fig. 5. Structure for S-IES
+
+At present, with the rising awareness of sustainable development, the economic benefit ${F}_{\mathrm{{EC}}}$ is no longer the only factor to be considered in the energy management of S-IES. For this reason, the environmental ${F}_{\mathrm{{CA}}}$ and social ${F}_{\mathrm{{SC}}}$ benefits are considered in this paper as well. Moreover, to ensure high-security sailing and provide high-quality energy to the ship, the utilization of energy is another major factor. Therefore, the objective function of the energy management problem for S-IES is constructed as,
+
+$$
+\min \left\{ {{F}_{\mathrm{{EC}}},{F}_{\mathrm{{CA}}},{F}_{\mathrm{{SC}}}}\right\} \text{ . } \tag{1}
+$$
+
+Note that the above objective function represents a compromise between economic benefits and carbon emissions, rather than taking the smallest of ${F}_{\mathrm{{EC}}},{F}_{\mathrm{{CA}}}$ and ${F}_{\mathrm{{SC}}}$ as the objective function for S-IES energy management. In addition, physical constraints need to be imposed on the ship and its energy equipment to achieve secure performance during the sailing voyage. Then, the energy management model of S-IES can be established as below.
+
+$$
+\min \left\{ {{F}_{\mathrm{{EC}}},{F}_{\mathrm{{CA}}},{F}_{\mathrm{{SC}}}}\right\} \text{ , } \tag{2}
+$$
+
+$$
+\text{ s.t. }A\left( X\right) = 0,B\left( X\right) \leq 0\text{ , }
+$$
+
+where $X$ is the decision variable of the energy management problem, representing the energy outputs, and $A\left( X\right)$ and $B\left( X\right)$ are the equality and inequality constraints, respectively. Specifically, $A\left( X\right)$ contains the supply-demand balance constraints and the voltage security constraints, while $B\left( X\right)$ includes the energy output constraints and the velocity constraints, which are shown below.
+
+1) Supply-demand balance constraints:
+
+$$
+\sum \left( {{p}_{i}^{\mathrm{{fu}}} + {p}_{i}^{\mathrm{{re}}} + {p}_{i}^{\mathrm{{chp}}} - {p}_{i}^{\mathrm{{ld}}}}\right) = 0 \tag{3}
+$$
+
+$$
+\sum \left( {{h}_{i}^{\mathrm{{fu}}} + {h}_{i}^{\mathrm{{re}}} + {h}_{i}^{\mathrm{{chp}}} - {h}_{i}^{\mathrm{{ld}}}}\right) = 0 \tag{4}
+$$
+
+where $p$ and $h$ are the power and heating outputs, respectively, and $i, l$ denote node indices of S-IES. The superscripts fu, re, chp, and ld indicate fuel-based generators, renewable-based generators, combined heating and power devices, and loads, respectively.
+
+2) Voltage security constraints:
+
+$$
+\sum {\alpha }_{i,l}^{\max }{p}_{i} + \Delta {V}_{l}^{\max } = {\pi }_{l}^{\max } \tag{5}
+$$
+
+$$
+\sum {\alpha }_{i,l}^{\min }{p}_{i} + \Delta {V}_{l}^{\min } = {\pi }_{l}^{\min } \tag{6}
+$$
+
+where $\alpha$ denotes the voltage sensitivity coefficients, which can be calculated from a given transmission topology of S-IES. ${\Delta V}$ is the voltage excursion index, which indicates the voltage security margin. $\pi$ is a constant influenced by the reactive power. The superscripts min and max denote the minimal and maximal values of the corresponding variables, respectively. Specifically, (5) and (6) present the upper and lower bounds of the voltage security boundaries, respectively.
+
+3) Energy output constraints:
+
+$$
+{p}_{i}^{\min } \leq {p}_{i} \leq {p}_{i}^{\max },{h}_{i}^{\min } \leq {h}_{i} \leq {h}_{i}^{\max } \tag{7}
+$$
+
+4) Velocity constraints:
+
+$$
+{v}^{\min } \leq v \leq {v}^{\max } \tag{8}
+$$
+
+where $v$ describes the velocity of the ship.
+
+§ B. COMMUNICATION TOPOLOGY FOR S-IES
+
+1) Centralized Energy Management Strategy: It relies on a central controller, which offers significant advantages in calculation speed and accuracy. The controller collects real-time operating information from all energy devices in the S-IES and computes the optimal energy management scheme from these data, completing the centralized control and management of the S-IES. However, the centralized strategy is prone to single points of failure and suffers from poor stability, limited scalability, and high investment costs.
+
+2) Decentralized Energy Management Strategy: According to the operating mechanisms and working status of the energy equipment, the ship can be divided into multiple compartments, and the decentralized energy management strategy places a controller in each compartment, realizing control and management of the local energy system. Moreover, the decentralized strategy does not require real-time communication between the controllers in each cabin, and the energy equipment distributes energy by itself, so the system responds quickly and scales well. However, because the controllers lack collaborative control, global optimization at the system level cannot be achieved. The strategy is therefore highly susceptible to external interference, which negatively affects secure navigation.
+
+However, the flat, distributed structure of the S-IES energy network means that the traditional centralized and decentralized energy management strategies are no longer applicable; a fully distributed energy management strategy is required to search for the optimal solution and achieve reliable operation of the S-IES.
+
+3) Distributed Energy Management Strategy: It relies on the communication network to achieve real-time information exchange between energy devices within the S-IES, enabling complementary interconnection of information between neighbouring energy devices. When a local energy device fails to operate normally, its controller can send signals to the controllers of neighbouring energy devices over the distributed communication network to ensure the safe and stable operation of the S-IES.
+
+§ C. MAIN ALGORITHM
+
+To simplify the multi-objective functions of the energy management problem, the linear weighted sum method is applied, and the energy management model can be reconstructed as below.
+
+$$
+\min \left\{ {{\beta }_{1} \cdot {F}_{\mathrm{{EC}}} + {\beta }_{2} \cdot {F}_{\mathrm{{CA}}} + {\beta }_{3} \cdot {F}_{\mathrm{{SC}}}}\right\} , \tag{9}
+$$
+
+$$
+\text{ s.t. }A\left( X\right) = 0,B\left( X\right) \leq 0\text{ , }
+$$
+
+where ${\beta }_{1},{\beta }_{2}$ and ${\beta }_{3}$ are constant weighting parameters of ${F}_{\mathrm{{EC}}},{F}_{\mathrm{{CA}}}$ , and ${F}_{\mathrm{{SC}}}$ , respectively, and satisfy ${\beta }_{1} + {\beta }_{2} + {\beta }_{3} = 1$ .
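+
+As a concrete numerical illustration of the weighted-sum scalarization in (9), the sketch below combines three objective values into one scalar cost. The objective values and weights are illustrative placeholders, not values from the paper.
+
+```python
+# Hypothetical sketch of the linear weighted-sum scalarization in (9).
+def scalarize(objectives, weights):
+    """Combine multiple objectives into one scalar cost (linear weighted sum)."""
+    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
+    return sum(b * f for b, f in zip(weights, objectives))
+
+# Example: economic, carbon emission, and security objective values at some X.
+f_ec, f_ca, f_sc = 120.0, 35.0, 8.0
+betas = (0.5, 0.3, 0.2)
+total = scalarize((f_ec, f_ca, f_sc), betas)
+print(round(total, 2))  # 72.1
+```
+
+Any constrained solver can then minimize this single scalar subject to $A\left( X\right) = 0$ and $B\left( X\right) \leq 0$ .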
+
+Define the communication topology of the constructed S-IES as $G = \{ T,B,W\}$ , where $T, B$ , and $W$ are the node set, the edge set, and the edge weight set, respectively. Specifically, $T = \left\lbrack {{v}_{i} \mid i \in \Omega }\right\rbrack$ , where $\Omega$ denotes the set of energy devices. Moreover, $B \subseteq T \times T$ is the set of connected node pairs. $W = \left\lbrack {{w}_{i,l} \mid i,l \in \Omega }\right\rbrack$ , where ${w}_{i,l}$ is the connection weight between the $i$ th node and the $l$ th node. Considering the different relationships between the $i$ th node and the $l$ th node, ${w}_{i,l}$ can be calculated as below.
+
+$$
+{w}_{i,l} = \left\{ \begin{array}{ll} 1/\left( {\left| {N}_{i}\right| + \left| {N}_{l}\right| + \varepsilon }\right) , & l \in {N}_{i} \\ 1 - \mathop{\sum }\limits_{{l \in {N}_{i}}}1/\left( {\left| {N}_{i}\right| + \left| {N}_{l}\right| + \varepsilon }\right) , & i = l \\ 0, & \text{ otherwise } \end{array}\right. \tag{10}
+$$
+
+where $\varepsilon$ is a small positive constant. ${N}_{i}$ and $\left| {N}_{i}\right|$ are the neighbor set of the $i$ th node and its cardinality; similarly, ${N}_{l}$ and $\left| {N}_{l}\right|$ are the neighbor set of the $l$ th node and its cardinality. When the connection weight between the $i$ th node and the $l$ th node equals 0, no information exchange can occur between the two nodes.
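+
+The weight rule (10) can be checked on a toy topology. The 5-node ring below and the value of $\varepsilon$ are illustrative assumptions; the key property is that the resulting weight matrix is row-stochastic, which consensus-based updates such as (12) and (13) rely on.
+
+```python
+# Sketch of the connection-weight rule (10) on an illustrative 5-node ring.
+def metropolis_weights(neighbors, eps=0.01):
+    n = len(neighbors)
+    W = [[0.0] * n for _ in range(n)]
+    for i in range(n):
+        for l in neighbors[i]:
+            # off-diagonal entries, cf. the first case of (10)
+            W[i][l] = 1.0 / (len(neighbors[i]) + len(neighbors[l]) + eps)
+        # diagonal entry, cf. the second case of (10); non-neighbors stay 0
+        W[i][i] = 1.0 - sum(W[i][l] for l in neighbors[i])
+    return W
+
+neighbors = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
+W = metropolis_weights(neighbors)
+print(all(abs(sum(row) - 1.0) < 1e-9 for row in W))  # True: row-stochastic
+```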
+
+Owing to the high calculation speed, accuracy, and reliability of the alternating direction method of multipliers (ADMM), a fully distributed energy management strategy for the S-IES is designed in this paper. It realizes bi-directional transmission of energy and information, reduces the consumption of communication resources, and suits the flat, distributed structure of the S-IES. The main algorithm is designed as below.
+
+1) Iteration of energy output:
+
+$$
+{X}_{i,k + 1} \in \arg \min \left\{ {f\left( X\right) + \psi + {\lambda }_{i,k}^{\mathrm{T}}{W}^{\mathrm{T}}A{X}_{i,k}}\right\} , \tag{11}
+$$
+
+where $\lambda$ is the incremental cost. $A$ describes the physical relationship among the nodes of the S-IES, which is defined by the physical constraints. $k$ is the iteration index.
+
+2) Iteration of output error:
+
+$$
+{d}_{i,k + 1} = W{d}_{i,k} + A\left( {{X}_{i,k + 1} - {X}_{i,k}}\right) , \tag{12}
+$$
+
+where $d$ is the output error of S-IES.
+
+3) Iteration of incremental cost:
+
+$$
+{\left\lbrack {\lambda }_{{\Omega }_{\mathrm{W}}^{z},k + 1},{\lambda }_{z,k + 1}\right\rbrack }^{\mathrm{T}} = W{\left\lbrack {\lambda }_{{\Omega }_{\mathrm{W}}^{z},k},{\lambda }_{z,k}\right\rbrack }^{\mathrm{T}} + \tau {d}_{z,k + 1}. \tag{13}
+$$
+
+Repeat the iterations of energy output, output error, and incremental cost until each variable converges to within a preset threshold.
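+
+The three iterations (11)-(13) can be illustrated on a toy two-node dispatch problem with quadratic costs and a single power-balance constraint. The closed-form argmin step, cost coefficients, consensus weights, and step size below are simplifying assumptions for clarity, not the paper's exact algorithm.
+
+```python
+import numpy as np
+
+# Toy dispatch: min sum_i (a_i x_i^2 + b_i x_i)  s.t.  x_1 + x_2 = D.
+a = np.array([0.5, 0.4])                 # illustrative quadratic cost coefficients
+b = np.array([25.0, 22.0])               # illustrative linear cost coefficients
+D = 4.5                                  # total power demand (MW)
+W = np.array([[0.5, 0.5], [0.5, 0.5]])   # doubly stochastic consensus weights
+tau = 0.1                                # step size of the incremental-cost update
+
+lam = b.copy()                           # local incremental-cost estimates
+x = np.zeros(2)                          # local energy outputs
+d = np.array([D / 2, D / 2])             # local supply-demand mismatch estimates
+
+for k in range(200):
+    x_new = (lam - b) / (2 * a)          # energy output update, cf. (11)
+    d = W @ d - (x_new - x)              # output-error (mismatch) update, cf. (12)
+    lam = W @ lam + tau * d              # incremental-cost update, cf. (13)
+    x = x_new
+
+# At convergence the local incremental costs agree and the demand is met,
+# mirroring the consensus behavior of the incremental costs in Fig. 7.
+print(round(float(lam[0]), 3), round(float(x.sum()), 3))
+```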
+
+§ III. SIMULATION RESULTS
+
+To verify the effectiveness of the designed algorithm, a 5-node S-IES is utilized as a test system, and the detailed physical/communication topology, including the connection weights, is shown in Fig. 6. The test system contains two fuel-based generators, one renewable-based generator, and two loads; the operational coefficients and carbon emission parameters are presented in (14). Assume the load demands of power and heating are $\left\lbrack {{4.5},{2.4}}\right\rbrack \left( \mathrm{{MW}}\right)$ , respectively.
+
+$$
+{C}_{1}^{\mathrm{{fu}}} = {0.040} * {h}^{2} + {25} * h + {99}
+$$
+
+$$
+{C}_{1}^{\mathrm{{re}}} = {0.043} * {p}^{2} + {22} * p + {80}
+$$
+
+$$
+{C}_{2}^{\mathrm{{fu}}} = {0.035} * {p}^{2} + {18} * p + {120}
+$$
+
+$$
+{C}_{1}^{\mathrm{{ld}}} = - {0.013} * {p}^{2} + {46} * p + {30} \tag{14}
+$$
+
+$$
+{C}_{2}^{\mathrm{{ld}}} = - {0.015} * {h}^{2} + {70} * h + {40}
+$$
+
+$$
+{E}_{1}^{\mathrm{{fu}}} = {0.0648} * {h}^{2} - {2.7} * h + {41}
+$$
+
+$$
+{E}_{2}^{\mathrm{{fu}}} = {0.0520} * {p}^{2} - {2.3} * p + {50}
+$$
+
+Fig. 6. Structure for the test S-IES
+
+The trajectories of the incremental costs of the 5 nodes in the test S-IES are depicted in Fig. 7. The incremental cost converges within 20 iteration steps, and its final value is 0.3066 (p.u.). Moreover, the optimized energy management solution coincides with the solution obtained by the centralized strategy, which verifies the accuracy of the designed algorithm.
+
+Fig. 7. Trajectories of incremental costs
+
+§ IV. CONCLUSION
+
+In this paper, the development of the S-IES has been analyzed. A multi-objective energy management model for the S-IES has been constructed considering economic benefits and carbon emissions, and the sailing requirements of the ship are incorporated into the model. Additionally, to search for the optimal solution in a distributed manner, an energy management algorithm has been proposed based on ADMM theory. Simulation results prove the accuracy and effectiveness of the designed energy management model and the distributed algorithm.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/lAtPkUMK1M/Initial_manuscript_md/Initial_manuscript.md b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/lAtPkUMK1M/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..a354a951ca4f317ee43eb7071ddb7a44a50c5538
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/lAtPkUMK1M/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,521 @@
+# Adaptive dynamic programming-based optimal heading control for state constrained unmanned sailboat
+
+${1}^{\text{st }}$ Shitong Zhang
+
+School of Mechanical Engineering
+
+Yanshan University
+
+Qinhuangdao, China
+
+bben@stumail.ysu.edu.cn
+
+${2}^{\text{nd }}$ Yifei Xu
+
+School of Mechanical Engineering
+
+Yanshan University
+
+Qinhuangdao, China
+
+xyf@ysu.edu.cn
+
+${3}^{\text{rd }}$ Yingjie Deng*
+
+School of Mechanical Engineering
+
+Yanshan University
+
+Qinhuangdao, China
+
+dyj@ysu.edu.cn
+
+${4}^{\text{th }}$ Sheng Xu
+
+Shenzhen Institute of Advanced Technology
+
+Chinese Academy of Sciences
+
+Shenzhen, China
+
+Abstract-This paper proposes a new state-constrained adaptive optimal control strategy for unmanned sailboat heading angle tracking that accounts for motion constraints. To improve the tracking accuracy, a combination of backstepping and adaptive dynamic programming (ADP) is employed; thus, the issue of virtual control law derivatives in conventional backstepping control is resolved with satisfactory precision. Firstly, the motion constraints are handled by a barrier Lyapunov function (BLF), and neural networks (NNs) are employed to approximate the model uncertainties and disturbances. Secondly, an adaptive backstepping feedforward controller is proposed, transforming the sailboat's affine nonlinear tracking problem into a regulation problem. Thirdly, according to ADP theory, critic NNs are constructed to approximate the analytical solution of the Hamilton-Jacobi-Bellman (HJB) equation, and the optimal feedback control is obtained by online learning. Finally, simulation results demonstrate the effectiveness and optimality of the proposed controller.
+
+Index Terms-Optimal control, Adaptive dynamic programming (ADP), barrier Lyapunov function (BLF), Neural networks (NNs), Unmanned sailboat
+
+## I. INTRODUCTION
+
+In the past few decades, surface and underwater intelligent vehicles have played important roles in sea patrols, resource exploration, and rescue. However, owing to the consumption of fuel, electricity, and other energy sources, unmanned ships and submarines require a large amount of power to complete long-distance tasks, which leads to huge cost problems [1]-[6]. Since unmanned sailboats can use sails for propulsion, have low costs, and can transmit data in real time through their sensors, research on unmanned sailboats has gradually attracted scholarly interest. However, in the complex marine environment, the disturbances of wind and waves can easily generate obstructive forces on the sails and keel, making sail control difficult. Therefore, the authors of [7] proposed a path tracking method for unmanned sailboats that combines a logic virtual ship (LVS) guidance law and dynamic event-triggered control. In [8], a path tracking control scheme was proposed for unmanned sailboats that combines backstepping and dynamic surface control technology. To reduce the sideslip angle error during sailing, the authors of [9] proposed double finite-time observers-based line-of-sight guidance (DFLOS) and adaptive finite-time control (DFLOS-AFC) strategies. However, they did not consider state constraints. When subjected to significant interference, returning the sailboat to the reference heading requires a large rudder angle, which can damage the actuator, as it cannot accept significant deflection in a short period. Therefore, while ensuring the accuracy of sailboat heading control, it is also necessary to ensure that the turning speed of the boat satisfies the prescribed constraints.
+
+In order to improve the heading angle tracking accuracy of unmanned sailboats, many scholars have conducted research on the topic. In [10], a control strategy combining adaptive echo state networks and backstepping was proposed to control the steering angle of the rudder. The authors of [11] designed a nonlinear heading controller based on the velocity vector direction to track the reference heading angle. In [12], ${L}_{1}$ adaptive control theory was applied to complete heading control and ensure the stability of the required heading angle. However, most papers have focused on heading control of sailboats using backstepping or sign functions, which results in low tracking accuracy and cannot achieve optimal control performance.
+
+The key difficulty of nonlinear optimal control is that the analytical solution of the Hamilton-Jacobi-Bellman (HJB) equation is hard to obtain. To approximate this solution online, the authors of [13] proposed ADP theory. However, its slow approximation speed and many iterations led to an explosion in computational complexity. Recently, with the rapid development of NNs, their approximation performance and speed for unknown nonlinear functions have become increasingly excellent, and research combining ADP and NNs has solved the above problems. In [14], a new event-triggered optimal trajectory tracking control method based on goal representation heuristic dynamic programming (GrHDP) was proposed for underactuated ships. In [15], a model-free dual heuristic dynamic programming (DHP) method was proposed for unmanned aerial vehicle attitude control. The authors of [16] applied ADP theory to the path planning problem of mobile robots. However, to the best of our knowledge, there is limited research applying ADP theory to the heading control of unmanned sailboats.
+
+---
+
+This work is partially supported by the Natural Science Foundation of China (No.52101375), the Hebei Province Natural Science Fund (No.E2022203088, E2024203179), the Innovation Capacity Enhancement Program of Hebei Province (No.24461901D), the Joint Funds of the National Natural Science Foundation of China (No.U20A20332), and the Key Research and Development Project of Hebei Province (No.21351802D).
+
+---
+
+Taking inspiration from the above analysis, we introduce the BLF to solve the state constraint problem. In the control design for heading tracking, the backstepping method is combined with ADP theory to ensure optimal tracking accuracy. We construct a critic NN to approximate the analytical solution of the HJB equation and obtain the optimal feedback control through online learning. The main contributions are summarized below:
+
+- Compared with previous papers [7]-[9], we propose a BLF-based method to overcome the motion constraints caused by the turning rate limitation of unmanned sailboats.
+
+- By using the ADP optimal feedback control strategy, our designed control method achieves optimal tracking accuracy in heading control compared to [9]-[12].
+
+- According to the construction of the critic network, a training speed acceleration method is developed using online learning methods.
+
+The remainder of this paper is organized as follows. Section II elaborates on the heading angle control model of unmanned sailboats and a supporting lemma. Section III designs the backstepping feedforward controller and the optimal feedback controller. Section IV provides simulation verification of the superiority of the proposed strategy. Section V gives some conclusions.
+
+## II. MATHEMATICAL MODEL AND PRELIMINARIES
+
+The sailboat is divided into four parts: the sail, rudder, keel, and hull. Based on aerodynamic and hydrodynamic theory, a force analysis is conducted on each part, and the heave and pitch motions of the sailboat are ignored to establish a 3-DOF mathematical model of sailboat motion. The sailboat model considering the external disturbance and the control input rudder angle is
+
+$$
+\left\{ \begin{array}{l} \dot{\psi } = r \\ \dot{r} = {f}_{r}\left( {\psi , r,{\delta }_{s}, u, v,{\tau }_{wr}}\right) + {g}_{r}{\tau }_{r} \end{array}\right. \tag{1}
+$$
+
+where $\psi \in \mathbb{R}$ and $r \in \mathbb{R}$ are the system state variables (heading angle and yaw rate); $u \in \mathbb{R}$ is the forward velocity; $v \in \mathbb{R}$ is the lateral velocity; ${\delta }_{s} \in \mathbb{R}$ is the sail angle; ${\tau }_{wr}$ is the external disturbance; ${f}_{r}\left( \cdot \right)$ is the unknown nonlinear model function of the sailboat; ${g}_{r}$ is the unknown control gain; ${\tau }_{r}$ is the control input of the system.
+
+Assumption 1 [17]: Since unmanned sailboats navigate in a limited space, there exist positive constants ${k}_{\psi }$ and ${k}_{r}$ that bound the heading angle and the yaw rate, that is, $\left| \psi \right| \leq {k}_{\psi }$ and $\left| r\right| \leq {k}_{r}$ .
+
+Lemma 1 [10]: Owing to their outstanding function approximation ability, NNs can be used to approximate unknown nonlinear functions as follows:
+
+$$
+F\left( x\right) = {W}^{\mathrm{T}}\sigma \left( x\right) + \varepsilon \left( x\right) \tag{2}
+$$
+
+where $W = {\left( {w}_{1},{w}_{2},\ldots ,{w}_{n}\right) }^{\mathrm{T}}$ denotes the ideal weight vector of the NNs, $\sigma \left( \cdot \right)$ is the activation function, and $\varepsilon \left( x\right)$ with $\left| \varepsilon \right| \leq \bar{\varepsilon }$ denotes the approximation error.
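+
+Lemma 1 can be illustrated with a single layer of Gaussian radial basis functions. The target function, centers, and width below are illustrative assumptions; the fitted residual plays the role of the bounded error $\varepsilon \left( x\right)$ in (2).
+
+```python
+import numpy as np
+
+centers = np.linspace(-3, 3, 15)          # RBF centers (illustrative)
+width = 0.6                               # common Gaussian width (illustrative)
+
+def sigma(x):
+    """Gaussian basis vector sigma(x) for a scalar input x."""
+    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))
+
+f = lambda x: np.sin(x) + 0.3 * x         # stand-in "unknown" nonlinear function
+
+# Fit the ideal weight vector W of (2) by least squares on sampled data.
+xs = np.linspace(-3, 3, 200)
+Phi = np.stack([sigma(x) for x in xs])    # 200 x 15 regressor matrix
+Wt, *_ = np.linalg.lstsq(Phi, f(xs), rcond=None)
+
+eps = float(np.max(np.abs(Phi @ Wt - f(xs))))  # bounded residual, cf. (2)
+print(eps < 0.05)  # True: small approximation error on the sampled interval
+```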
+
+## III. CONTROL DESIGN
+
+## A. Feedforward controller design
+
+In this section, the backstepping method based on an adaptive NNs framework is adopted to transform system (1) into an affine nonlinear system.
+
+According to the system (1), define the error system function as
+
+$$
+\left\{ \begin{matrix} {\psi }_{e} = \psi - {\psi }_{d} \\ {r}_{e} = r - {r}_{d} \end{matrix}\right. \tag{3}
+$$
+
+where ${\psi }_{d}$ and ${r}_{d}$ are the reference heading angle and yaw rate, respectively. Differentiating ${\psi }_{e}$ along (1) yields
+
+$$
+{\dot{\psi }}_{e} = \dot{\psi } - {\dot{\psi }}_{d} = r - {\dot{\psi }}_{d} = \left( {{r}_{e} + {r}_{d}}\right) - {\dot{\psi }}_{d} \tag{4}
+$$
+
+where ${r}_{d} = {r}_{d}^{\alpha } + {r}_{d}^{ * }$ is the virtual control input of yaw speed. ${r}_{d}^{\alpha }$ denotes the feedforward virtual yaw speed input and ${r}_{d}^{ * }$ denotes the feedback virtual yaw speed input. Therefore, we can get
+
+$$
+{\dot{\psi }}_{e} = {r}_{e} + {r}_{d}^{\alpha } + {r}_{d}^{ * } - {\dot{\psi }}_{d} \tag{5}
+$$
+
+To construct the desired feedforward yaw speed virtual control input, consider the BLF as
+
+$$
+{V}_{1} = \frac{1}{2}\log \left( \frac{{k}_{\psi }^{2}}{{k}_{\psi }^{2} - {\psi }_{e}^{2}}\right) \tag{6}
+$$
+
+where ${k}_{\psi }$ is a positive constant of the state constraint. Calculating the derivative of ${V}_{1}$ , we have
+
+$$
+{\dot{V}}_{1} = \frac{{\psi }_{e}}{{k}_{\psi }^{2} - {\psi }_{e}^{2}}\left( {{r}_{e} + {r}_{d}^{\alpha } + {r}_{d}^{ * } - {\dot{\psi }}_{d}}\right) \tag{7}
+$$
+
+Therefore, the feedforward virtual yaw speed input ${r}_{d}^{\alpha }$ can be designed as
+
+$$
+{r}_{d}^{\alpha } = - \left( {{k}_{\psi }^{2} - {\psi }_{e}^{2}}\right) {k}_{1}{\psi }_{e} + {\dot{\psi }}_{d} \tag{8}
+$$
+
+where ${k}_{1} > 0$ is a tuning parameter. Substituting (8) into (7), we have
+
+$$
+{\dot{V}}_{1} = - {k}_{1}{\psi }_{e}^{2} + \frac{{\psi }_{e}}{{k}_{\psi }^{2} - {\psi }_{e}^{2}}\left( {{r}_{e} + {r}_{d}^{ * }}\right) \tag{9}
+$$
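+
+Under the simplifying assumption of perfect yaw-rate tracking (${r}_{e} = 0$ and ${r}_{d}^{ * } = 0$), the barrier behavior induced by the virtual control (8) can be checked numerically. The gains, time step, and reference signal below are illustrative assumptions.
+
+```python
+import numpy as np
+
+k_psi, k1, dt = 0.5, 2.0, 0.01                   # constraint bound and gain (illustrative)
+psi_d = lambda t: 0.3 * np.sin(0.5 * t)          # reference heading (rad)
+psi_d_dot = lambda t: 0.15 * np.cos(0.5 * t)
+
+psi, history = 0.45, []                          # start near the constraint bound
+for step in range(3000):
+    t = step * dt
+    psi_e = psi - psi_d(t)
+    # Feedforward virtual yaw-rate input, cf. (8).
+    r_d_alpha = -(k_psi**2 - psi_e**2) * k1 * psi_e + psi_d_dot(t)
+    psi += r_d_alpha * dt                        # psi_dot = r, Euler step
+    history.append(abs(psi_e))
+
+print(max(history) < k_psi)  # True: |psi_e| never reaches the barrier k_psi
+```
+
+Under this assumption, (9) reduces to ${\dot{V}}_{1} = - {k}_{1}{\psi }_{e}^{2} \leq 0$, so the barrier function (6) remains bounded and $\left| {\psi }_{e}\right| < {k}_{\psi }$ is preserved.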
+
+Taking the derivative of the second equation of (3) yields
+
+$$
+{\dot{r}}_{e} = {f}_{r}\left( \cdot \right) + {f}_{r}\left( {e}_{d}\right) - {f}_{r}\left( {e}_{d}\right) + {g}_{r}{\tau }_{r} - {\dot{r}}_{d} \tag{10}
+$$
+
+where ${e}_{d} = {\left\lbrack {\psi }_{d},{r}_{d}\right\rbrack }^{\mathrm{T}}$ . The unknown model uncertainty function ${f}_{r}\left( \cdot \right)$ and ${\dot{r}}_{d}$ can be transferred by the following function
+
+$$
+{F}_{2}\left( {z}_{2d}\right) = {f}_{r}\left( {e}_{d}\right) - {\dot{r}}_{d} \tag{11}
+$$
+
+where ${z}_{2d} = {\left\lbrack {e}_{d}^{\mathrm{T}},{\psi }_{e},{r}_{e}\right\rbrack }^{\mathrm{T}}$ . According to the Lemma 1, the above (11) can be approximated by NNs as follows:
+
+$$
+{F}_{2}\left( {z}_{2d}\right) = \left( {{\widehat{W}}_{2}^{\mathrm{T}} + {\widetilde{W}}_{2}^{\mathrm{T}}}\right) {\sigma }_{2}\left( {z}_{2d}\right) + {\varepsilon }_{2}\left( {z}_{2d}\right) \tag{12}
+$$
+
+where ${\widetilde{W}}_{2}^{\mathrm{T}} = {W}_{2} - {\widehat{W}}_{2}$ is the NNs approximate error and ${\widehat{W}}_{2}$ is the estimation of optimal weight vector ${W}_{2}$ . Through (11) and (12), ${f}_{r}\left( \cdot \right) - {f}_{r}\left( {e}_{d}\right)$ can be approximated as follows:
+
+$$
+\begin{aligned} {f}_{r}\left( \cdot \right) - {f}_{r}\left( {e}_{d}\right) &= {F}_{2}\left( {z}_{2}\right) - {F}_{2}\left( {z}_{2d}\right) \\ &= p\left( e\right) + {\widetilde{W}}_{2}^{\mathrm{T}}\left\lbrack {{\sigma }_{2}\left( {z}_{2}\right) - {\sigma }_{2}\left( {z}_{2d}\right) }\right\rbrack + {\varepsilon }_{2}\left( {z}_{2}\right) - {\varepsilon }_{2}\left( {z}_{2d}\right) \end{aligned} \tag{13}
+$$
+
+where $p\left( e\right) = {\widehat{W}}_{2}^{\mathrm{T}}{\sigma }_{2}\left( {z}_{2}\right) - {\widehat{W}}_{2}^{\mathrm{T}}{\sigma }_{2}\left( {z}_{2d}\right) , e = {\left\lbrack {\psi }_{e},{r}_{e}\right\rbrack }^{\mathrm{T}}$ and ${F}_{2}\left( {z}_{2}\right) = {f}_{r}\left( \cdot \right) - {\dot{r}}_{d}$ is a function of ${r}_{d}$ and ${\dot{r}}_{d}$ . The input ${z}_{2}$ is chosen as ${\left( {e}_{d}^{\mathrm{T}},\psi , r,{\delta }_{s}, u, v\right) }^{\mathrm{T}}$ . According to (12) and (13), (10) can be written as
+
+$$
+{\dot{r}}_{e} = p\left( e\right) + {\widehat{W}}_{2}^{\mathrm{T}}{\sigma }_{2}\left( {z}_{2d}\right) + {\widetilde{W}}_{2}^{\mathrm{T}}{\sigma }_{2}\left( {z}_{2}\right) + {\varepsilon }_{2}\left( {z}_{2}\right) + {g}_{r}{\tau }_{r}^{\alpha } + {g}_{r}{\tau }_{r}^{ * } \tag{14}
+$$
+
+To construct the feedforward virtual control input ${\tau }_{r}^{\alpha }$ , consider the BLF as
+
+$$
+{V}_{2} = {V}_{1} + \frac{1}{2}\log \left( \frac{{k}_{r}^{2}}{{k}_{r}^{2} - {r}_{e}^{2}}\right) + \frac{1}{2}{\widetilde{W}}_{2}^{\mathrm{T}}{\widetilde{W}}_{2} \tag{15}
+$$
+
+where ${k}_{r}$ is a positive constant of the motion constraint. Calculating the derivative of ${V}_{2}$ , we have
+
+$$
+\begin{aligned} {\dot{V}}_{2} = & - {k}_{1}{\psi }_{e}^{2} + \frac{{\psi }_{e}}{{k}_{\psi }^{2} - {\psi }_{e}^{2}}\left( {{r}_{e} + {r}_{d}^{ * }}\right) + \frac{{r}_{e}}{{k}_{r}^{2} - {r}_{e}^{2}}\left( p\left( e\right) \right. \\ & + {\widehat{W}}_{2}^{\mathrm{T}}{\sigma }_{2}\left( {z}_{2d}\right) + {\widetilde{W}}_{2}^{\mathrm{T}}{\sigma }_{2}\left( {z}_{2}\right) + {\varepsilon }_{2}\left( {z}_{2}\right) + {g}_{r}{\tau }_{r}^{\alpha } \\ & \left. + {g}_{r}{\tau }_{r}^{ * }\right) - {\widetilde{W}}_{2}^{\mathrm{T}}{\dot{\widehat{W}}}_{2} \end{aligned} \tag{16}
+$$
+
+According to the Young's inequality, we can get that
+
+$$
+\frac{{r}_{e}}{{k}_{r}^{2} - {r}_{e}^{2}}{\varepsilon }_{2}\left( {z}_{2}\right) \leq \frac{{r}_{e}}{{k}_{r}^{2} - {r}_{e}^{2}}{\bar{\varepsilon }}_{2} \leq \frac{1}{2}\frac{{r}_{e}^{2}}{{\left( {k}_{r}^{2} - {r}_{e}^{2}\right) }^{2}} + \frac{1}{2}{\bar{\varepsilon }}_{2}^{2} \tag{17}
+$$
+
+Substituting (17) into (16), we have
+
+$$
+\begin{aligned} {\dot{V}}_{2} = & - {k}_{1}{\psi }_{e}^{2} + \frac{1}{2}{\bar{\varepsilon }}_{2}^{2} - {\widetilde{W}}_{2}^{\mathrm{T}}{\dot{\widehat{W}}}_{2} + \frac{{\psi }_{e}{r}_{e}}{{k}_{\psi }^{2} - {\psi }_{e}^{2}} + \frac{{\psi }_{e}{r}_{d}^{ * }}{{k}_{\psi }^{2} - {\psi }_{e}^{2}} \\ & + \frac{{r}_{e}}{{k}_{r}^{2} - {r}_{e}^{2}}\left( {p\left( e\right) + {\widehat{W}}_{2}^{\mathrm{T}}{\sigma }_{2}\left( {z}_{2d}\right) + {\widetilde{W}}_{2}^{\mathrm{T}}{\sigma }_{2}\left( {z}_{2}\right) }\right. \\ & \left. {+{\varepsilon }_{2}\left( {z}_{2}\right) + {g}_{r}{\tau }_{r}^{\alpha } + {g}_{r}{\tau }_{r}^{ * }}\right) \end{aligned} \tag{18}
+$$
+
+Therefore, the feedforward control input ${\tau }_{r}^{\alpha }$ can be designed as
+
+$$
+{\tau }_{r}^{\alpha } = - \frac{1}{{g}_{r}}\left\lbrack {\left( {{k}_{r}^{2} - {r}_{e}^{2}}\right) {k}_{2}{r}_{e} + \frac{\left( {{k}_{r}^{2} - {r}_{e}^{2}}\right) {\psi }_{e}}{{k}_{\psi }^{2} - {\psi }_{e}^{2}} + {\widehat{W}}_{2}^{\mathrm{T}}{\sigma }_{2}\left( {z}_{2d}\right) }\right\rbrack \tag{19}
+$$
+
+where ${k}_{2} > 0$ is a tuning parameter. The NNs weight adaptation law for ${\widehat{W}}_{2}$ can be designed as
+
+$$
+{\dot{\widehat{W}}}_{2} = \frac{{r}_{e}}{{k}_{r}^{2} - {r}_{e}^{2}}{\sigma }_{2}\left( {z}_{2}\right) - {\beta }_{2}{\widehat{W}}_{2} \tag{20}
+$$
+
+where ${\beta }_{2} > 0$ is also a tuning parameter. Substituting (19) and (20) into (18), we have
+
+$$
+\begin{aligned} {\dot{V}}_{2} \leq & - {k}_{1}{\psi }_{e}^{2} - {k}_{2}{r}_{e}^{2} + \frac{1}{2}{\bar{\varepsilon }}_{2}^{2} + {\beta }_{2}{\widetilde{W}}_{2}^{\mathrm{T}}{\widehat{W}}_{2} + \frac{p\left( e\right) {r}_{e}}{{k}_{r}^{2} - {r}_{e}^{2}} \\ & + \frac{{r}_{e}{g}_{r}}{{k}_{r}^{2} - {r}_{e}^{2}}{\tau }_{r}^{ * } + \frac{{\psi }_{e}{r}_{d}^{ * }}{{k}_{\psi }^{2} - {\psi }_{e}^{2}} \end{aligned} \tag{21}
+$$
+
+According to the Young's inequality, we can get that
+
+$$
+\begin{aligned} {\widetilde{W}}_{2}^{\mathrm{T}}{\widehat{W}}_{2} &= {\widetilde{W}}_{2}^{\mathrm{T}}\left( {{W}_{2} - {\widetilde{W}}_{2}}\right) = {\widetilde{W}}_{2}^{\mathrm{T}}{W}_{2} - {\widetilde{W}}_{2}^{\mathrm{T}}{\widetilde{W}}_{2} \\ &\leq \frac{1}{2}{\widetilde{W}}_{2}^{\mathrm{T}}{\widetilde{W}}_{2} + \frac{1}{2}{W}_{2}^{\mathrm{T}}{W}_{2} - {\widetilde{W}}_{2}^{\mathrm{T}}{\widetilde{W}}_{2} = \frac{1}{2}{W}_{2}^{\mathrm{T}}{W}_{2} - \frac{1}{2}{\widetilde{W}}_{2}^{\mathrm{T}}{\widetilde{W}}_{2} \end{aligned} \tag{22}
+$$
+
+Substituting (22) into (21), we obtain
+
+$$
+\begin{aligned} {\dot{V}}_{2} \leq & - \underline{k}\parallel E{\parallel }^{2} + \frac{1}{2}{\bar{\varepsilon }}_{2}^{2} + \frac{1}{2}{\beta }_{2}{W}_{2}^{\mathrm{T}}{W}_{2} - \frac{1}{2}{\beta }_{2}{\widetilde{W}}_{2}^{\mathrm{T}}{\widetilde{W}}_{2} \\ & + \frac{p\left( e\right) {r}_{e}}{{k}_{r}^{2} - {r}_{e}^{2}} + \frac{{r}_{e}{g}_{r}}{{k}_{r}^{2} - {r}_{e}^{2}}{\tau }_{r}^{ * } + \frac{{\psi }_{e}{r}_{d}^{ * }}{{k}_{\psi }^{2} - {\psi }_{e}^{2}} \end{aligned} \tag{23}
+$$
+
+where $E = {\left\lbrack {\psi }_{e},{r}_{e}\right\rbrack }^{\mathrm{T}}$ and $\underline{k} = \min \left( {{k}_{1},{k}_{2}}\right)$ .
+
+In previous research, the feedforward controller ${\tau }_{r}^{\alpha }$ was calculated based on the derivative of the virtual controller ${\dot{r}}_{d}^{\alpha }$ . However, in practical applications, it is not easy to get analytical solutions for ${\dot{r}}_{d}^{\alpha }$ . Our proposed control method denotes ${r}_{d} = {r}_{d}^{\alpha } + {r}_{d}^{ * }$ , which is determined by both ${r}_{d}^{\alpha }$ and ${r}_{d}^{ * }$ and they are obtained through NNs weights and the system's state. Therefore, we approximate the derivative of the virtual controller using NNs in (11). With this approach, the feedforward controller ${r}_{d}^{\alpha }$ and ${\tau }_{r}^{\alpha }$ can be obtained directly from the current system without the need for the derivative of the virtual controller used in previous studies. Consequently, compared to previous work, the method we propose is more feasible to implement in practical applications.
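+
+As a minimal numerical illustration of the adaptation law (20), the Euler-discretized sketch below updates the weight estimate ${\widehat{W}}_{2}$ with a placeholder activation and a synthetic decaying error signal; all numeric values are illustrative assumptions.
+
+```python
+import numpy as np
+
+k_r, beta2, dt = 1.0, 0.1, 0.01           # illustrative bound, leakage gain, step
+sigma2 = lambda z: np.tanh(z)             # placeholder activation (3 neurons)
+
+W_hat = np.zeros(3)
+for step in range(1000):
+    r_e = 0.3 * np.exp(-0.01 * step)      # synthetic decaying yaw-rate error
+    z2 = np.array([0.1, r_e, -0.2])       # placeholder NN input vector
+    # cf. (20): W_hat_dot = r_e/(k_r^2 - r_e^2) * sigma2(z2) - beta2 * W_hat
+    W_hat_dot = r_e / (k_r**2 - r_e**2) * sigma2(z2) - beta2 * W_hat
+    W_hat += dt * W_hat_dot
+
+# The leakage term -beta2 * W_hat keeps the weight estimate bounded.
+print(np.all(np.isfinite(W_hat)))  # True
+```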
+
+Rewriting (23) as follows:
+
+$$
+\begin{aligned} {\dot{V}}_{2} \leq & - \underline{k}\parallel E{\parallel }^{2} + \frac{1}{2}{\bar{\varepsilon }}_{2}^{2} + \frac{1}{2}{\beta }_{2}{W}_{2}^{\mathrm{T}}{W}_{2} - \frac{1}{2}{\beta }_{2}{\widetilde{W}}_{2}^{\mathrm{T}}{\widetilde{W}}_{2} \\ & + {\left( \frac{E}{\widetilde{E}}\right) }^{\mathrm{T}}\left( {\left\lbrack \begin{matrix} 0 \\ p\left( e\right) \end{matrix}\right\rbrack + \left\lbrack \begin{matrix} 1 & 0 \\ 0 & {g}_{r} \end{matrix}\right\rbrack \left\lbrack \begin{matrix} {r}_{d}^{ * } \\ {\tau }_{r}^{ * } \end{matrix}\right\rbrack }\right) \end{aligned} \tag{24}
+$$
+
+where $\widetilde{E} = {\left\lbrack {k}_{\psi }^{2} - {\psi }_{e}^{2},{k}_{r}^{2} - {r}_{e}^{2}\right\rbrack }^{\mathrm{T}}$ and the division $E/\widetilde{E}$ is taken elementwise. The feedforward controller is expressed as ${U}^{\alpha } = \left\lbrack {{r}_{d}^{\alpha },{\tau }_{r}^{\alpha }}\right\rbrack$ . The feedback optimal controller ${U}^{ * } = \left\lbrack {{r}_{d}^{ * },{\tau }_{r}^{ * }}\right\rbrack$ will be designed in the next subsection. Together they constitute the controller of the entire system.
+
+## B. Feedback optimal controller design
+
+According to (24), the design of an individual feedforward controller for ${U}^{\alpha }$ cannot guarantee the stability of the entire closed-loop system. Therefore, to ensure the stability of the last term in (24), a feedback optimal controller is designed based on ADP theory. With this design, not only the tracking ability of the system can be optimized, but also the system's stability can be ensured.
+
+The error dynamics associated with the last term in (24) can be written as
+
+$$
+\dot{E} = \left\lbrack \begin{matrix} 0 \\ p\left( e\right) \end{matrix}\right\rbrack + \left\lbrack \begin{matrix} 1 & 0 \\ 0 & {g}_{r} \end{matrix}\right\rbrack {U}^{ * } \tag{25}
+$$
+
+Further, it can be obtained that
+
+$$
+\dot{E} = P\left( E\right) + G\widehat{U} \tag{26}
+$$
+
+where $E = {\left\lbrack {\psi }_{e},{r}_{e}\right\rbrack }^{\mathrm{T}}$ collects the heading angle error and yaw rate error, $P\left( E\right) = {\left\lbrack 0, p\left( e\right) \right\rbrack }^{\mathrm{T}}$ , and $G = \operatorname{diag}\left( {1,{g}_{r}}\right)$ .
+
+According to ADP theory, the performance index function can be defined as
+
+$$
+J\left( E\right) = {\int }_{t}^{\infty }\left( {{E}^{\mathrm{T}}{QE} + {\widehat{U}}^{\mathrm{T}}R\widehat{U}}\right) \mathrm{d}\tau \tag{27}
+$$
+
+where $Q \in {\mathbb{R}}^{2 \times 2}$ and $R \in {\mathbb{R}}^{2 \times 2}$ are positive definite matrices. The Hamiltonian function of the performance index function can be defined as
+
+$$
+\begin{aligned} H\left( {E,\widehat{U},\nabla J\left( E\right) }\right) = & {E}^{\mathrm{T}}{QE} + {\widehat{U}}^{\mathrm{T}}R\widehat{U} \\ & + \nabla J{\left( E\right) }^{\mathrm{T}}\left( {P\left( E\right) + G\widehat{U}}\right) \end{aligned} \tag{28}
+$$
+
+where $\nabla J\left( E\right) = \frac{\partial J\left( E\right) }{\partial E}$ denotes the derivative of $J\left( E\right)$ with regard to $E$ . In order to solve the HJB equation, the feedback optimal control ${U}^{ * }$ can be designed as
+
+$$
+{U}^{ * }\left( E\right) = - \frac{1}{2}{R}^{-1}{G}^{\mathrm{T}}\nabla {J}^{ * }\left( E\right) \tag{29}
+$$
+
+From (28), the optimal performance index function ${J}^{ * }\left( E\right)$ can be obtained by
+
+$$
+\mathop{\min }\limits_{{\widehat{U}\left( E\right) }}H\left( {E,\widehat{U},\nabla {J}^{ * }\left( E\right) }\right) = 0 \tag{30}
+$$
+
+Substituting (29) into (30), the HJB equation can be rewritten as follows:
+
+$$
+{E}^{\mathrm{T}}{QE} + {\left( \nabla {J}^{ * }\left( E\right) \right) }^{\mathrm{T}}P\left( E\right) - \frac{1}{4}{\left( \nabla {J}^{ * }\left( E\right) \right) }^{\mathrm{T}}G{R}^{-1}{G}^{\mathrm{T}}\nabla {J}^{ * }\left( E\right) = 0 \tag{31}
+$$
+
+It is obvious that the above equation is a nonlinear partial differential equation, so it is difficult to obtain its analytical solution. To address this issue, ADP theory is adopted: a single-layer NN is constructed to approximate the optimal performance index function as
+
+$$
+{J}^{ * }\left( E\right) = {W}_{c}^{\mathrm{T}}\sigma \left( E\right) + {\varepsilon }_{c}\left( E\right) \tag{32}
+$$
+
+where ${W}_{c}$ denotes the optimal weight vector of critic NNs, $\sigma \left( \cdot \right)$ is the activation function, ${\varepsilon }_{c}\left( E\right)$ is the critic NNs approximation error.
+
+The gradient of the optimal performance index function ${J}^{ * }\left( E\right)$ with regard to $E$ can be defined as
+
+$$
+\nabla {J}^{ * }\left( E\right) = {\left( \nabla \sigma \left( E\right) \right) }^{\mathrm{T}}{W}_{c} + \nabla {\varepsilon }_{c}\left( E\right) \tag{33}
+$$
+
+Substituting the gradient (33) into (29), the optimal control can be rewritten as
+
+$$
+{U}^{ * }\left( E\right) = - \frac{1}{2}{R}^{-1}{G}^{\mathrm{T}}{\left( \nabla \sigma \left( E\right) \right) }^{\mathrm{T}}{W}_{c} - \frac{1}{2}{R}^{-1}{G}^{\mathrm{T}}\nabla {\varepsilon }_{c}\left( E\right) \tag{34}
+$$
+
+Therefore, the HJB equation can be further expressed as
+
+$$
+H\left( {E,{U}^{ * },{W}_{c}}\right) = {E}^{\mathrm{T}}{QE} + {W}_{c}^{\mathrm{T}}\nabla \sigma \left( E\right) P\left( E\right) - \frac{1}{4}{W}_{c}^{\mathrm{T}}\nabla \sigma \left( E\right) G{R}^{-1}{G}^{\mathrm{T}}{\left( \nabla \sigma \left( E\right) \right) }^{\mathrm{T}}{W}_{c} + {\varepsilon }_{HJB} = 0 \tag{35}
+$$
+
+where ${\varepsilon }_{HJB}$ is the residual error caused by the NN approximation.
+
+Using NNs, the performance index function is estimated as follows:
+
+$$
+\widehat{J}\left( E\right) = {\widehat{W}}_{c}^{\mathrm{T}}\sigma \left( E\right) \tag{36}
+$$
+
+where ${\widehat{W}}_{c}$ and $\widehat{J}\left( E\right)$ are the estimates of ${W}_{c}$ and $J\left( E\right)$ , respectively. Defining the weight estimation error as ${\widetilde{W}}_{c} = {W}_{c} - {\widehat{W}}_{c}$ , the estimate of the optimal control ${U}^{ * }$ can be designed as
+
+$$
+\bar{U}\left( E\right) = - \frac{1}{2}{R}^{-1}{G}^{\mathrm{T}}{\left( \nabla \sigma \left( E\right) \right) }^{\mathrm{T}}{\widehat{W}}_{c} \tag{37}
+$$
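The approximate optimal control (37) is straightforward to evaluate once ${\widehat{W}}_{c}$ is available. The sketch below uses the polynomial basis $\sigma \left( E\right) = \left\lbrack {{e}_{1},{e}_{1}^{2},{e}_{2},{e}_{2}^{2},{e}_{1}{e}_{2}}\right\rbrack$ chosen in Section IV and $R = {0.25}\mathrm{I}$ from the simulation settings; the weight vector and the gain ${g}_{r}$ in $G = \operatorname{diag}\left( {1,{g}_{r}}\right)$ are placeholder values, since ${g}_{r}$ is unknown in the paper:

```python
import numpy as np

# Evaluating the approximate optimal control (37) for a given critic
# weight estimate.  Basis: sigma(E) = [e1, e1^2, e2, e2^2, e1*e2]
# (Section IV); R = 0.25*I matches the simulation settings; the weights
# and g_r below are placeholders for illustration.
def grad_sigma(E):
    e1, e2 = E
    # 5x2 Jacobian of sigma(E) with respect to E = [e1, e2].
    return np.array([[1.0, 0.0],
                     [2 * e1, 0.0],
                     [0.0, 1.0],
                     [0.0, 2 * e2],
                     [e2, e1]])

def u_bar(E, W_c_hat, G, R):
    # (37): U_bar(E) = -1/2 * R^-1 * G^T * (grad sigma(E))^T * W_c_hat
    return -0.5 * np.linalg.inv(R) @ G.T @ grad_sigma(E).T @ W_c_hat

E = np.array([0.1, -0.05])
W_c_hat = np.array([0.4, 0.1, 0.3, 0.2, 0.05])   # placeholder weights
G = np.diag([1.0, 2.0])                          # placeholder g_r = 2
R = 0.25 * np.eye(2)
print("U_bar =", np.round(u_bar(E, W_c_hat, G, R), 4))
```

The first component of `U_bar` is the feedback virtual yaw speed ${r}_{d}^{ * }$ and the second is the feedback rudder torque ${\tau }_{r}^{ * }$.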
+
+Then, the HJB equation can be approximated as
+
+$$
+H\left( {E,\bar{U},{W}_{c}}\right) = {E}^{\mathrm{T}}{QE} + {W}_{c}^{\mathrm{T}}\nabla \sigma \left( E\right) P\left( E\right) - \frac{1}{4}{\widehat{W}}_{c}^{\mathrm{T}}\nabla \sigma \left( E\right) G{R}^{-1}{G}^{\mathrm{T}}{\left( \nabla \sigma \left( E\right) \right) }^{\mathrm{T}}{\widehat{W}}_{c} = {e}_{c} \tag{38}
+$$
+
+The objective error function of critic NNs is defined as
+
+$$
+{E}_{c} = \frac{1}{2}{e}_{c}^{2} \tag{39}
+$$
+
+Following [18], an appropriate critic NN updating law can be designed, which guarantees that ${\widehat{W}}_{c}$ converges to ${W}_{c}$ and also minimizes the objective error function (39):
+
+$$
+{\dot{\widehat{W}}}_{c} = - {k}_{c}\frac{\Gamma }{{\left( 1 + {\Gamma }^{\mathrm{T}}\Gamma \right) }^{2}}{e}_{c} + \frac{{k}_{c}}{2}\Delta \nabla \sigma \left( E\right) G\nabla V\left( E\right) + \frac{{k}_{c}}{4}\frac{\Gamma }{{\left( 1 + {\Gamma }^{\mathrm{T}}\Gamma \right) }^{2}}{\widehat{W}}_{c}^{\mathrm{T}}\nabla \sigma \left( E\right) G{\left( \nabla \sigma \left( E\right) \right) }^{\mathrm{T}}{\widehat{W}}_{c} + {k}_{c}\left( {{K}_{1}{\zeta }^{\mathrm{T}}{\widehat{W}}_{c} - {K}_{2}{\widehat{W}}_{c}}\right) \tag{40}
+$$
+
+where ${k}_{c} > 0$ is a tuning parameter, $\Gamma = \nabla \sigma \left( E\right) \left( {P\left( E\right) + G\bar{U}}\right)$ , $\zeta = \frac{\Gamma }{1 + {\Gamma }^{\mathrm{T}}\Gamma }$ , and ${K}_{1}$ and ${K}_{2}$ are tuning parameter matrices. $\Delta$ is designed as
+
+$$
+\Delta = \left\{ \begin{array}{l} 0,{\left( \nabla V\left( E\right) \right) }^{\mathrm{T}}\left( {P\left( E\right) + G\bar{U}}\right) < 0 \\ 1,\text{ else } \end{array}\right. \tag{41}
+$$
+
+where $V\left( E\right)$ is a Lyapunov function. From this, we can obtain
+
+$$
+\dot{V}\left( E\right) = {\left( \nabla V\left( E\right) \right) }^{\mathrm{T}}\dot{E} = {\left( \nabla V\left( E\right) \right) }^{\mathrm{T}}\left( {P\left( E\right) + G{U}^{ * }}\right) = - {\left( \nabla V\left( E\right) \right) }^{\mathrm{T}}S\nabla V\left( E\right) \leq 0 \tag{42}
+$$
+
+where $S$ is a positive definite matrix. Specifically, $V\left( E\right)$ is a function of the state variable $E$ and can be chosen appropriately, for example, $V\left( E\right) = {E}^{\mathrm{T}}E$ .
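A one-step numerical sketch of the critic update (40)-(41) is given below. It keeps the normalized-gradient term, the $\Delta$-gated stabilizing term, and the damping term $-{K}_{2}{\widehat{W}}_{c}$ of (40) (the remaining term is omitted for brevity), uses the Section IV basis $\sigma \left( E\right) = \left\lbrack {{e}_{1},{e}_{1}^{2},{e}_{2},{e}_{2}^{2},{e}_{1}{e}_{2}}\right\rbrack$ and $V\left( E\right) = {E}^{\mathrm{T}}E$, and substitutes placeholder values for $P\left( E\right)$ and ${g}_{r}$:

```python
import numpy as np

# Simplified Euler discretization of the critic update (40)-(41).
# Assumptions (illustration only): basis from Section IV, V(E) = E^T E
# (so grad V = 2E), placeholder P(E) and g_r, and only the normalized
# gradient, Delta-gated, and damping terms of (40) retained.
def grad_sigma(E):
    e1, e2 = E
    # 5x2 Jacobian of sigma(E) = [e1, e1^2, e2, e2^2, e1*e2].
    return np.array([[1.0, 0.0],
                     [2 * e1, 0.0],
                     [0.0, 1.0],
                     [0.0, 2 * e2],
                     [e2, e1]])

def critic_step(W_hat, E, P_E, G, Q, R, k_c=3.8, K2=1e-5, dt=0.05):
    dsig = grad_sigma(E)
    U_bar = -0.5 * np.linalg.inv(R) @ G.T @ dsig.T @ W_hat      # (37)
    Gamma = dsig @ (P_E + G @ U_bar)
    D = G @ np.linalg.inv(R) @ G.T
    # Approximate HJB residual, cf. (38) with W_hat in place of W_c.
    e_c = (E @ Q @ E + W_hat @ dsig @ P_E
           - 0.25 * W_hat @ dsig @ D @ dsig.T @ W_hat)
    grad_V = 2.0 * E                                            # V = E^T E
    Delta = 0.0 if grad_V @ (P_E + G @ U_bar) < 0 else 1.0      # (41)
    W_dot = (-k_c * Gamma / (1.0 + Gamma @ Gamma) ** 2 * e_c
             + 0.5 * k_c * Delta * dsig @ G @ grad_V
             - k_c * K2 * W_hat)
    return W_hat + dt * W_dot

E = np.array([0.05, 0.0])
G = np.diag([1.0, 2.0])                 # placeholder g_r = 2
W = np.zeros(5)
for _ in range(200):
    W = critic_step(W, E, P_E=np.array([0.0, 0.1]), G=G,
                    Q=np.eye(2), R=0.25 * np.eye(2))
print("critic weights after 200 steps:", np.round(W, 4))
```

The weights stay bounded because the $\Delta$ gate switches the stabilizing term off as soon as the Lyapunov derivative becomes negative, exactly as (41) prescribes.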
+
+Remark 1: The weight ${\widehat{W}}_{c}$ update process consists of the following four components: The first component employs gradient descent for design. The second component ensures the boundedness of the weights. The third and fourth components guarantee the stability of the weights. Through this design, the proposed control strategy achieves a higher tracking accuracy while ensuring the rapid and stable update of the neural network weights.
+
+## IV. SIMULATION
+
+The model parameters of the unmanned sailboat are selected from [10]. To facilitate simulation analysis without loss of generality, the reference heading is set as ${\psi }_{d} = \sin \left( t\right)$ . The control parameters are selected as ${k}_{\psi } = {1.2},{k}_{r} = {1.5},{k}_{1} = 3,{k}_{2} = 6,{k}_{c} = {3.8},{\beta }_{2} = 4,{K}_{1} = {0.0001}\mathrm{I},{K}_{2} = {0.00001}\mathrm{I}, Q = \mathrm{I}$ , $R = {0.25}\mathrm{I}$ . The time step is set as 0.05 s. The initial states are $\psi \left( 0\right) = {0.05}$ and $r\left( 0\right) = 0$ . The activation function of the critic network is chosen as $\sigma \left( E\right) = \left\lbrack {{e}_{1},{e}_{1}^{2},{e}_{2},{e}_{2}^{2},{e}_{1}{e}_{2}}\right\rbrack$ , and the network weights are initialized randomly in $\left\lbrack {0,1}\right\rbrack$ . Realistic ocean wind and wave disturbances are simulated using first-order Markov perturbations.
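The setup above can be scaffolded as follows. The reference ${\psi }_{d} = \sin \left( t\right)$, the 0.05 s step, and the first-order Markov disturbance follow the text; the sailboat dynamics ${f}_{r}$ and the feedback law are simple placeholders, since the model parameters of [10] and the full BLF/ADP controller are not reproduced here:

```python
import numpy as np

# Minimal scaffold of the Section IV setup: psi_d = sin(t), dt = 0.05 s,
# and a first-order Markov (Gauss-Markov) disturbance standing in for
# wind and wave forces.  f_r and the feedback law are placeholders.
rng = np.random.default_rng(0)
dt, T = 0.05, 20.0
steps = int(T / dt)

psi, r = 0.05, 0.0          # initial states psi(0) = 0.05, r(0) = 0
tau_w = 0.0                 # first-order Markov disturbance state
errors = []
for k in range(steps):
    t = k * dt
    psi_d = np.sin(t)
    dpsi_d = np.cos(t)
    # First-order Markov perturbation: tau_w_dot = -tau_w/T_c + noise.
    tau_w += dt * (-tau_w / 2.0) + np.sqrt(dt) * 0.05 * rng.standard_normal()
    # Placeholder dynamics and a plain stabilizing feedback (illustrative).
    f_r = -0.5 * r + tau_w
    tau_r = -3.0 * (psi - psi_d) - 6.0 * (r - dpsi_d)
    psi += dt * r
    r += dt * (f_r + tau_r)
    errors.append(abs(psi - psi_d))
print("mean |psi - psi_d| over last 5 s:",
      round(float(np.mean(errors[-100:])), 4))
```

Replacing the placeholder feedback with the feedforward law (19) plus the critic-based feedback (37) reproduces the structure of the compared controllers.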
+
+To verify the superiority of the proposed strategy, we compare it with the "LSBG" strategy of [10]. Fig. 1 shows the real-time heading angle tracking curve; the results show that the designed optimal control method tracks the reference signal with smaller errors while respecting the state constraints. The heading angular velocity tracking and its state constraints are shown in Fig. 2. Fig. 3 and Fig. 4 illustrate the error curves of the heading angle and heading angular velocity, indicating that the proposed strategy achieves better tracking performance than the "LSBG" strategy. Fig. 5 displays the control input ${\tau }_{r}$ under the "LSBG" strategy, the backstepping feedforward control input, the optimal feedback control input, and the overall control input of the proposed strategy, respectively. Fig. 6 shows the update curves of the critic network weights; the online learning stabilizes in a very short time. From the above analysis, the proposed strategy not only ensures better tracking accuracy but also prevents the system states from violating the constraints.
+
+## V. CONCLUSION
+
+In this paper, an optimal control method based on ADP is proposed for the heading tracking control of unmanned sailboats subject to turning constraints. The proposed BLF-based method solves the problem of state constraints. The feedforward backstepping controller and the feedback optimal controller were designed using the backstepping method and ADP theory, respectively. The learning ability of the critic NNs has been accelerated through online learning strategies. Finally, the simulation verified the optimality of the proposed strategy. In the future, we will apply this method to the path-tracking task of unmanned sailboats in practice.
+
+
+
+Fig. 1. Comparison of heading angle tracking under different strategies.
+
+
+
+Fig. 2. Comparison of heading angle speed tracking under different strategies.
+
+
+
+Fig. 3. Comparison of heading angle error under different strategies.
+
+
+
+Fig. 4. Comparison of heading angle speed error under different strategies.
+
+
+
+Fig. 5. Control input ${\tau }_{r}$ under "LSBG" strategy, feedforward control input ${\tau }_{r}^{\alpha }$ , feedback control input ${\tau }_{r}^{ * }$ and optimal control input ${\tau }_{r}$ .
+
+## REFERENCES
+
+[1] Y. Deng, S. Zhang, J. Yan, N. Im and W. Zhou, "Adaptive asymptotic tracking control of autonomous underwater vehicles based on Bernstein polynomial approximation," Ocean Engineering, vol. 288, pp. 116220, 2023.
+
+[2] J. Zhang, T. Yang and T. Chai, "Neural Network Control of Underactuated Surface Vehicles With Prescribed Trajectory Tracking Performance," IEEE Transactions on Neural Networks and Learning Systems, vol. 35, no. 6, pp. 8026-8039, June 2024.
+
+[3] G. Zhu, Y. Ma, Z. Li, R. Malekian and M. Sotelo, "Event-Triggered Adaptive Neural Fault-Tolerant Control of Underactuated MSVs With Input Saturation," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 7, pp. 7045-7057, July 2022.
+
+[4] J. Chen, X. Hu, C. Lv, Z. Zhang and R. Ma, "Adaptive event-triggered fuzzy tracking control for underactuated surface vehicles under external disturbances," Ocean Engineering, vol. 283, pp. 115026, 2023.
+
+[5] G. Wu, Y. Ding, T. Tahsin and I. Atilla, "Adaptive neural network and extended state observer-based non-singular terminal sliding modetracking control for an underactuated USV with unknown uncertainties," Applied Ocean Research, vol. 135, pp. 103560, 2023.
+
+
+
+Fig. 6. Critic ADP weight update curve.
+
+[6] R. Chu, Z. Liu and Z. Chu, "Improved super-twisting sliding mode control for ship heading with sideslip angle compensation," Ocean Engineering, vol. 260, pp. 111996, 2022.
+
+[7] G. Zhang, L. Wang, J. Li and W. Zhang, "Improved LVS guidance and path-following control for unmanned sailboat robot with the minimum triggered setting," Ocean Engineering, vol. 272, pp. 113860, 2023.
+
+[8] Z. Shen, Y. Liu, Y. Nie and H. Yu, "Prescribed performance LOS guidance-based dynamic surface path following control of unmanned sailboats," Ocean Engineering, vol. 284, pp. 115182, 2023.
+
+[9] K. Shao, N. Wang and H. Qin, "Sideslip angle observation-based LOS and adaptive finite-time path following control for sailboat," Ocean Engineering, vol. 281, pp. 114636, 2023.
+
+[10] Y. Deng, X. Zhang and G. Zhang, "Line-of-Sight-Based Guidance and Adaptive Neural Path-Following Control for Sailboats," IEEE Journal of Oceanic Engineering, vol. 45, no. 4, pp. 1177-1189, Oct. 2020
+
+[11] H. Saoud, M. -D. Hua, F. Plumet and F. Ben Amar, "Routing and course control of an autonomous sailboat," 2015 European Conference on Mobile Robots (ECMR), Lincoln, UK, 2015, pp. 1-6.
+
+[12] X. Xiao, T. I. Fossen and J. Jouffroy, "Nonlinear Robust Heading Control for Sailing Yachts," IFAC Proceedings Volumes, vol. 45, no. 27, pp. 404-409, 2012.
+
+[13] P. Werbos, "Advanced forecasting methods for global crisis warning and models of intelligence," General System Yearbook, pp. 25-38, 1977.
+
+[14] Y. Deng, S. Zhang, Y. Xu, X. Zhang and W. Zhou, "Event-triggered optimal trajectory tracking control of underactuated ships based on goal representation heuristic dynamic programming," Ocean Engineering, vol. 308, pp. 118251, 2024.
+
+[15] X. Huang, J. Liu, C. Jia, Z. Wang and W. Li, "Online self-learning attitude tracking control of morphing unmanned aerial vehicle based on dual heuristic dynamic programming," Aerospace Science and Technology, vol. 143, pp. 108727, 2023.
+
+[16] X. Li, L. Wang, Y. An, Q. Huang, Y. Cui and H. Hu, "Dynamic path planning of mobile robots using adaptive dynamic programming," Expert Systems with Applications, vol. 235, pp. 121112, 2024.
+
+[17] J. Wang, P. Zhang, Y. Wang and Z. Ji, "Adaptive dynamic programming-based optimal control for nonlinear state constrained systems with input delay," Nonlinear Dynamics, vol. 111, pp. 19133-19149, 2023.
+
+[18] X. Yang, D. Liu and Q. Wei, "Online approximate optimal control for affine non-linear systems with unknown internal dynamics using adaptive dynamic programming," IET Control Theory & Applications, vol. 8, pp. 1676-1688, 2014.
\ No newline at end of file
diff --git a/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/lAtPkUMK1M/Initial_manuscript_tex/Initial_manuscript.tex b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/lAtPkUMK1M/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..912d05f8d65b1c296290342b79088d380a158805
--- /dev/null
+++ b/papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/lAtPkUMK1M/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,475 @@
+§ ADAPTIVE DYNAMIC PROGRAMMING-BASED OPTIMAL HEADING CONTROL FOR STATE CONSTRAINED UNMANNED SAILBOAT
+
+${1}^{\text{ st }}$ Shitong Zhang
+
+School of Mechanical Engineering
+
+Yanshan University
+
+Qinhuangdao, China
+
+bben@stumail.ysu.edu.cn
+
+${2}^{\text{ nd }}$ Yifei Xu
+
+School of Mechanical Engineering
+
+Yanshan University
+
+Qinhuangdao, China
+
+xyf@ysu.edu.cn
+
+${3}^{\text{ rd }}$ Yingjie Deng*
+
+School of Mechanical Engineering
+
+Yanshan University
+
+Qinhuangdao, China
+
+dyj@ysu.edu.cn
+
+${4}^{\text{ th }}$ Sheng Xu
+
+Shenzhen Institute of Advanced Technology
+
+Chinese Academy of Sciences
+
+Shenzhen, China
+
+Abstract - This paper proposes a new state-constrained adaptive optimal control strategy for unmanned sailboat heading angle tracking that accounts for motion constraints. To improve the tracking accuracy, a combination of backstepping and adaptive dynamic programming (ADP) is employed, which resolves the issue of computing virtual control rate derivatives in conventional backstepping control with satisfactory precision. Firstly, the motion constraints are handled by using the barrier Lyapunov function (BLF), and neural networks (NNs) are employed to approximate the model uncertainties and disturbances. Secondly, an adaptive backstepping feedforward controller is proposed, transforming the sailboat's affine nonlinear system tracking problem into a regulation problem. Thirdly, according to the ADP theory, critic NNs are constructed to approximate the analytical solution of the Hamilton-Jacobi-Bellman (HJB) equation, and the optimal feedback control is obtained by online learning. Finally, simulation results demonstrate the effectiveness and optimality of the proposed controller.
+
+Index Terms - Optimal control, adaptive dynamic programming (ADP), barrier Lyapunov function (BLF), neural networks (NNs), unmanned sailboat
+
+§ I. INTRODUCTION
+
+In the past few decades, surface and underwater intelligent vehicles have played important roles in sea patrols, resource exploration, and rescue. However, because they consume fuel, electricity, and other energy sources, unmanned ships and submarines require a large amount of power support to complete long-distance tasks, which leads to high operating costs [1]-[6]. Since unmanned sailboats can use sails for propulsion, have low costs, and can transmit data in real time through their sensors, research on unmanned sailboats has gradually attracted the interest of scholars. However, in the complex marine environment, wind and wave disturbances can easily generate obstructive forces on the sails and keel, making sail control difficult. Therefore, the authors of [7] proposed a path tracking method for unmanned sailboats that combines the logic virtual ship (LVS) guidance law and dynamic event-triggered control. In [8], a path tracking control scheme combining backstepping and dynamic surface control technology is proposed for unmanned sailboats. In order to reduce the sideslip angle error during sailing, the authors of [9] proposed double finite-time observers-based line-of-sight guidance (DFLOS) and adaptive finite-time control (DFLOS-AFC) strategies. However, these works did not consider the issue of state constraints. When subjected to significant interference, returning the sailboat to the reference heading requires a large rudder angle, which can damage the actuator, as it cannot sustain large deflections in a short period. Therefore, while ensuring the accuracy of sailboat heading control, it is also necessary to ensure that the turning speed of the boat satisfies the prescribed constraints.
+
+In order to improve the heading angle tracking accuracy of unmanned sailboats, many scholars have conducted related research. In [10], a control strategy combining adaptive echo state networks and backstepping is proposed to control the steering angle of the rudder. The authors of [11] designed a nonlinear heading controller based on the velocity vector direction to track the reference heading angle. In [12], the ${L}_{1}$ adaptive control theory is applied to complete heading control and ensure the stability of the required heading angle. However, most existing papers rely on backstepping or sign functions for heading control, which results in low tracking accuracy and cannot achieve optimal control performance.
+
+The key difficulty of nonlinear optimal control is that the analytical solution of the Hamilton-Jacobi-Bellman (HJB) equation is hard to obtain. To approximate this solution, the authors of [13] proposed the ADP theory, which approximates the solution of the equation online. However, slow approximation speed and the need for many iterations led to an explosion in computational complexity. Recently, with the rapid development of NNs, their approximation performance and speed for unknown nonlinear functions have become increasingly excellent, and research combining ADP and NNs has addressed these problems. In [14], a new event-triggered optimal trajectory tracking control method based on goal representation heuristic dynamic programming (GrHDP) is proposed for underactuated ships. In [15], a model-free dual heuristic dynamic programming (DHP) method is proposed for unmanned aerial vehicle attitude control. The authors of [16] applied ADP theory to the path planning problem of mobile robots. However, to the best of our knowledge, there is limited research on applying ADP theory to the heading control of unmanned sailboats.
+
+This work is partially supported by the Natural Science Foundation of China (No.52101375), the Hebei Province Natural Science Fund (No.E2022203088, E2024203179), the Innovation Capacity Enhancement Program of Hebei Province (No.24461901D), the Joint Funds of the National Natural Science Foundation of China (No.U20A20332), and the Key Research and Development Project of Hebei Province (No.21351802D).
+
+Inspired by the above analysis, we introduce the BLF to solve the state constraint problem. In the heading tracking control design, the backstepping method is combined with ADP theory to ensure optimal tracking accuracy. We construct a critic NN to approximate the analytical solution of the HJB equation and obtain the optimal feedback control through online learning. The main contributions are summarized below:
+
+ * Compared with the previous papers [7]-[9], we propose a BLF-based method to overcome the motion constraints caused by the turning rate limitation of unmanned sailboats.
+
+ * By using the ADP optimal feedback control strategy, the designed control method achieves optimal tracking accuracy in heading control compared to [9]-[12].
+
+ * Based on the construction of the critic network, a training speed acceleration method is developed using online learning.
+
+The remainder of this paper is organized as follows. Section II elaborates on the heading angle control model of unmanned sailboats and the necessary lemma. Section III designs the backstepping feedforward controller and the optimal feedback controller. Section IV provides simulation verification of the superiority of the proposed strategy. Section V gives some conclusions.
+
+§ II. MATHEMATICAL MODEL AND PRELIMINARIES
+
+The sailboat is divided into four parts: the sail, rudder, keel, and hull. Combined with aerodynamic and hydrodynamic theory, a force analysis is conducted on each part, and the heave and pitch motions of the sailboat are ignored to establish a 3-DOF mathematical model of sailboat motion. The sailboat model considering external disturbance and the control input rudder angle in this paper is
+
+$$
+\left\{ \begin{array}{l} \dot{\psi } = r \\ \dot{r} = {f}_{r}\left( {\psi ,r,{\delta }_{s},u,v,{\tau }_{wr}}\right) + {g}_{r}{\tau }_{r} \end{array}\right. \tag{1}
+$$
+
+where $\psi \in \mathbb{R}$ and $r \in \mathbb{R}$ are the system state variables; $u \in \mathbb{R}$ is the forward velocity; $v \in \mathbb{R}$ is the lateral velocity; ${\delta }_{s} \in \mathbb{R}$ is the sail angle; ${\tau }_{wr}$ is the external disturbance; ${f}_{r}\left( \cdot \right)$ is the unknown nonlinear model function of the sailboat; ${g}_{r}$ is the unknown control gain; and ${\tau }_{r}$ is the control input of the system.
+
+Assumption 1 [17]: Since unmanned sailboats navigate in a limited space, there exist positive constants ${k}_{\psi }$ and ${k}_{r}$ such that the heading angle and the heading angular speed satisfy $\left| \psi \right| \leq {k}_{\psi }$ and $\left| r\right| \leq {k}_{r}$ .
+
+Lemma 1 [10]: Owing to their outstanding ability in function approximation, NNs are often used to approximate nonlinear functions. Therefore, an unknown function can be approximated as follows:
+
+$$
+F\left( x\right) = {W}^{\mathrm{T}}\sigma \left( x\right) + \varepsilon \left( x\right) \tag{2}
+$$
+
+where $W = {\left( {w}_{1},{w}_{2},\ldots ,{w}_{n}\right) }^{\mathrm{T}}$ denotes the ideal weight vector of the NNs and $\varepsilon \left( x\right)$ denotes the approximation error, which is bounded as $\left| \varepsilon \right| \leq \bar{\varepsilon }$ .
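A quick numerical illustration of Lemma 1: a single-layer NN with Gaussian activations, fitted by least squares, approximates a smooth nonlinear function with a small bounded error. The target function, the number of centers, and the widths below are arbitrary choices for the demonstration:

```python
import numpy as np

# Lemma 1 in numbers: a single-layer NN W^T sigma(x) approximates a
# smooth unknown function with a small bounded error.  Target function,
# centers, and widths are arbitrary demonstration choices.
centers = np.linspace(-1.0, 1.0, 9)
width = 0.4

def sigma(x):
    # Gaussian activation vector sigma(x) of the single-layer NN.
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))

x_train = np.linspace(-1.0, 1.0, 200)
f = np.sin(2.0 * x_train)                        # "unknown" function F(x)
W, *_ = np.linalg.lstsq(sigma(x_train), f, rcond=None)

x_test = np.linspace(-1.0, 1.0, 1000)
err = np.max(np.abs(sigma(x_test) @ W - np.sin(2.0 * x_test)))
print("max approximation error:", float(err))
```

In the controller the weights are of course not fitted offline like this but adapted online, e.g. by the law (20) below.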
+
+§ III. CONTROL DESIGN
+
+§ A. FEEDFORWARD CONTROLLER DESIGN
+
+In this section, the backstepping method based on an adaptive NN framework is adopted to transform the tracking problem of system (1) into a regulation problem for an affine nonlinear error system.
+
+According to the system (1), define the error system function as
+
+$$
+\left\{ \begin{matrix} {\psi }_{e} = \psi - {\psi }_{d} \\ {r}_{e} = r - {r}_{d} \end{matrix}\right. \tag{3}
+$$
+
+where ${\psi }_{d}$ and ${r}_{d}$ are the reference heading angle and yaw speed, respectively. Differentiating ${\psi }_{e}$ along (1) yields:
+
+$$
+{\dot{\psi }}_{e} = \dot{\psi } - {\dot{\psi }}_{d} = r - {\dot{\psi }}_{d} = \left( {{r}_{e} + {r}_{d}}\right) - {\dot{\psi }}_{d} \tag{4}
+$$
+
+where ${r}_{d} = {r}_{d}^{\alpha } + {r}_{d}^{ * }$ is the virtual control input of yaw speed. ${r}_{d}^{\alpha }$ denotes the feedforward virtual yaw speed input and ${r}_{d}^{ * }$ denotes the feedback virtual yaw speed input. Therefore, we can get
+
+$$
+{\dot{\psi }}_{e} = {r}_{e} + {r}_{d}^{\alpha } + {r}_{d}^{ * } - {\dot{\psi }}_{d} \tag{5}
+$$
+
+To construct the desired feedforward yaw speed virtual control input, consider the BLF as
+
+$$
+{V}_{1} = \frac{1}{2}\log \left( \frac{{k}_{\psi }^{2}}{{k}_{\psi }^{2} - {\psi }_{e}^{2}}\right) \tag{6}
+$$
+
+where ${k}_{\psi }$ is the positive state constraint bound. Calculating the derivative of ${V}_{1}$ , we have
+
+$$
+{\dot{V}}_{1} = \frac{{\psi }_{e}}{{k}_{\psi }^{2} - {\psi }_{e}^{2}}\left( {{r}_{e} + {r}_{d}^{\alpha } + {r}_{d}^{ * } - {\dot{\psi }}_{d}}\right) \tag{7}
+$$
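The barrier behavior of (6) and the gradient used in (7) can be checked numerically. The sketch below assumes an illustrative bound ${k}_{\psi } = 1.2$: the BLF is zero at ${\psi }_{e} = 0$, grows without bound as $\left| {\psi }_{e}\right|$ approaches ${k}_{\psi }$, and its derivative matches ${\psi }_{e}/\left( {{k}_{\psi }^{2} - {\psi }_{e}^{2}}\right)$:

```python
import math

# Numerical check of the BLF (6) and the gradient factor in (7).
# k_psi = 1.2 is an illustrative bound for this check.
k_psi = 1.2

def V1(psi_e):
    # BLF (6): V1 = 1/2 * log(k_psi^2 / (k_psi^2 - psi_e^2)).
    return 0.5 * math.log(k_psi**2 / (k_psi**2 - psi_e**2))

# Barrier behavior: V1 blows up near the constraint boundary.
assert V1(1.199) > V1(0.5) > V1(0.0) == 0.0

# Gradient check against a central finite difference.
for psi_e in (-0.9, -0.3, 0.4, 1.0):
    h = 1e-6
    fd = (V1(psi_e + h) - V1(psi_e - h)) / (2 * h)
    analytic = psi_e / (k_psi**2 - psi_e**2)
    assert abs(fd - analytic) < 1e-5
print("dV1/dpsi_e matches psi_e / (k_psi^2 - psi_e^2)")
```

The unbounded growth near $\left| {\psi }_{e}\right| = {k}_{\psi }$ is exactly what keeps the constrained state away from the boundary under the control design.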
+
+Therefore, the feedforward virtual yaw speed input ${r}_{d}^{\alpha }$ can be designed as
+
+$$
+{r}_{d}^{\alpha } = - \left( {{k}_{\psi }^{2} - {\psi }_{e}^{2}}\right) {k}_{1}{\psi }_{e} + {\dot{\psi }}_{d} \tag{8}
+$$
+
+where ${k}_{1} > 0$ is a tuning parameter. Substituting (8) into (7), we have
+
+$$
+{\dot{V}}_{1} = - {k}_{1}{\psi }_{e}^{2} + \frac{{\psi }_{e}}{{k}_{\psi }^{2} - {\psi }_{e}^{2}}\left( {{r}_{e} + {r}_{d}^{ * }}\right) \tag{9}
+$$
+
+Taking the derivative of the second equation of (3) yields
+
+$$
+{\dot{r}}_{e} = {f}_{r}\left( \cdot \right) + {f}_{r}\left( {e}_{d}\right) - {f}_{r}\left( {e}_{d}\right) + {g}_{r}{\tau }_{r} - {\dot{r}}_{d} \tag{10}
+$$
+
+where ${e}_{d} = {\left\lbrack {\psi }_{d},{r}_{d}\right\rbrack }^{\mathrm{T}}$ . The unknown model uncertainty ${f}_{r}\left( \cdot \right)$ and ${\dot{r}}_{d}$ can be lumped into the following function
+
+$$
+{F}_{2}\left( {z}_{2d}\right) = {f}_{r}\left( {e}_{d}\right) - {\dot{r}}_{d} \tag{11}
+$$
+
+where ${z}_{2d} = {\left\lbrack {e}_{d}^{\mathrm{T}},{\psi }_{e},{r}_{e}\right\rbrack }^{\mathrm{T}}$ . According to Lemma 1, (11) can be approximated by NNs as follows:
+
+$$
+{F}_{2}\left( {z}_{2d}\right) = \left( {{\widehat{W}}_{2}^{\mathrm{T}} + {\widetilde{W}}_{2}^{\mathrm{T}}}\right) {\sigma }_{2}\left( {z}_{2d}\right) + {\varepsilon }_{2}\left( {z}_{2d}\right) \tag{12}
+$$
+
+where ${\widetilde{W}}_{2} = {W}_{2} - {\widehat{W}}_{2}$ is the weight estimation error and ${\widehat{W}}_{2}$ is the estimate of the ideal weight vector ${W}_{2}$ . Through (11) and (12), ${f}_{r}\left( \cdot \right) - {f}_{r}\left( {e}_{d}\right)$ can be approximated as follows:
+
+$$
+{f}_{r}\left( \cdot \right) - {f}_{r}\left( {e}_{d}\right) = {F}_{2}\left( {z}_{2}\right) - {F}_{2}\left( {z}_{2d}\right) = p\left( e\right) + {\widetilde{W}}_{2}^{\mathrm{T}}\left\lbrack {{\sigma }_{2}\left( {z}_{2}\right) - {\sigma }_{2}\left( {z}_{2d}\right) }\right\rbrack + {\varepsilon }_{2}\left( {z}_{2}\right) - {\varepsilon }_{2}\left( {z}_{2d}\right) \tag{13}
+$$
+
+where $p\left( e\right) = {\widehat{W}}_{2}^{\mathrm{T}}{\sigma }_{2}\left( {z}_{2}\right) - {\widehat{W}}_{2}^{\mathrm{T}}{\sigma }_{2}\left( {z}_{2d}\right) ,e = {\left\lbrack {\psi }_{e},{r}_{e}\right\rbrack }^{\mathrm{T}}$ and ${F}_{2}\left( {z}_{2}\right) = {f}_{r}\left( \cdot \right) - {\dot{r}}_{d}$ is a function of ${r}_{d}$ and ${\dot{r}}_{d}$ . The input ${z}_{2}$ is chosen as ${\left( {e}_{d}^{\mathrm{T}},\psi ,r,{\delta }_{s},u,v\right) }^{\mathrm{T}}$ . According to (12) and (13), (10) can be written as
+
+$$
+{\dot{r}}_{e} = p\left( e\right) + {\widehat{W}}_{2}^{\mathrm{T}}{\sigma }_{2}\left( {z}_{2d}\right) + {\widetilde{W}}_{2}^{\mathrm{T}}{\sigma }_{2}\left( {z}_{2}\right) + {\varepsilon }_{2}\left( {z}_{2}\right) + {g}_{r}{\tau }_{r}^{\alpha } + {g}_{r}{\tau }_{r}^{ * } \tag{14}
+$$
+
+To construct the feedforward control input ${\tau }_{r}^{\alpha }$ , consider the BLF as
+
+$$
+{V}_{2} = {V}_{1} + \frac{1}{2}\log \left( \frac{{k}_{r}^{2}}{{k}_{r}^{2} - {r}_{e}^{2}}\right) + \frac{1}{2}{\widetilde{W}}_{2}^{\mathrm{T}}{\widetilde{W}}_{2} \tag{15}
+$$
+
+where ${k}_{r}$ is the positive bound of the motion constraint. Calculating the derivative of ${V}_{2}$ , we have
+
+$$
+{\dot{V}}_{2} = - {k}_{1}{\psi }_{e}^{2} + \frac{{\psi }_{e}}{{k}_{\psi }^{2} - {\psi }_{e}^{2}}\left( {{r}_{e} + {r}_{d}^{ * }}\right) + \frac{{r}_{e}}{{k}_{r}^{2} - {r}_{e}^{2}}\left( {p\left( e\right) + {\widehat{W}}_{2}^{\mathrm{T}}{\sigma }_{2}\left( {z}_{2d}\right) + {\widetilde{W}}_{2}^{\mathrm{T}}{\sigma }_{2}\left( {z}_{2}\right) + {\varepsilon }_{2}\left( {z}_{2}\right) + {g}_{r}{\tau }_{r}^{\alpha } + {g}_{r}{\tau }_{r}^{ * }}\right) - {\widetilde{W}}_{2}^{\mathrm{T}}{\dot{\widehat{W}}}_{2} \tag{16}
+$$
+
+By Young's inequality, we can get that
+
+$$
+\frac{{r}_{e}}{{k}_{r}^{2} - {r}_{e}^{2}}{\varepsilon }_{2}\left( {z}_{2}\right) \leq \frac{{r}_{e}}{{k}_{r}^{2} - {r}_{e}^{2}}{\bar{\varepsilon }}_{2} \leq \frac{1}{2}\frac{{r}_{e}^{2}}{{\left( {k}_{r}^{2} - {r}_{e}^{2}\right) }^{2}} + \frac{1}{2}{\bar{\varepsilon }}_{2}^{2} \tag{17}
+$$
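The bound (17) is an instance of Young's inequality $ab \leq \frac{1}{2}{a}^{2} + \frac{1}{2}{b}^{2}$ applied with $a = {r}_{e}/\left( {{k}_{r}^{2} - {r}_{e}^{2}}\right)$ and $b = {\bar{\varepsilon }}_{2}$. A random spot-check, using an illustrative ${k}_{r} = 1.5$:

```python
import numpy as np

# Spot-check of the Young's inequality bound used in (17):
# x * eps_bar <= x**2 / 2 + eps_bar**2 / 2, where
# x = r_e / (k_r**2 - r_e**2).  k_r = 1.5 is illustrative.
rng = np.random.default_rng(1)
k_r = 1.5
for _ in range(1000):
    r_e = rng.uniform(-1.4, 1.4)      # |r_e| < k_r, so the BLF is defined
    eps_bar = rng.uniform(0.0, 2.0)
    x = r_e / (k_r**2 - r_e**2)
    assert x * eps_bar <= 0.5 * x**2 + 0.5 * eps_bar**2 + 1e-12
print("Young's inequality bound holds on 1000 random samples")
```

The same inequality, applied to the weight terms, produces the bound (22) further below.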
+
+Substituting (17) into (16), we have
+
+$$
+{\dot{V}}_{2} = - {k}_{1}{\psi }_{e}^{2} + \frac{1}{2}{\bar{\varepsilon }}_{2}^{2} - {\widetilde{W}}_{2}^{\mathrm{T}}{\dot{\widehat{W}}}_{2} + \frac{{\psi }_{e}{r}_{e}}{{k}_{\psi }^{2} - {\psi }_{e}^{2}} + \frac{{\psi }_{e}{r}_{d}^{ * }}{{k}_{\psi }^{2} - {\psi }_{e}^{2}} + \frac{{r}_{e}}{{k}_{r}^{2} - {r}_{e}^{2}}\left( {p\left( e\right) + {\widehat{W}}_{2}^{\mathrm{T}}{\sigma }_{2}\left( {z}_{2d}\right) + {\widetilde{W}}_{2}^{\mathrm{T}}{\sigma }_{2}\left( {z}_{2}\right) + {\varepsilon }_{2}\left( {z}_{2}\right) + {g}_{r}{\tau }_{r}^{\alpha } + {g}_{r}{\tau }_{r}^{ * }}\right) \tag{18}
+$$
+
+Therefore, the feedforward control input ${\tau }_{r}^{\alpha }$ can be designed as
+
+$$
+{\tau }_{r}^{\alpha } = - \frac{1}{{g}_{r}}\left\lbrack {\left( {{k}_{r}^{2} - {r}_{e}^{2}}\right) {k}_{2}{r}_{e} + \frac{\left( {{k}_{r}^{2} - {r}_{e}^{2}}\right) {\psi }_{e}}{{k}_{\psi }^{2} - {\psi }_{e}^{2}} + {\widehat{W}}_{2}^{\mathrm{T}}{\sigma }_{2}\left( {z}_{2d}\right) }\right\rbrack \tag{19}
+$$
+
+where ${k}_{2} > 0$ is a tuning parameter. The NN weight adaptation law for ${\widehat{W}}_{2}$ can be designed as
+
+$$
+{\dot{\widehat{W}}}_{2} = \frac{{r}_{e}}{{k}_{r}^{2} - {r}_{e}^{2}}{\sigma }_{2}\left( {z}_{2}\right) - {\beta }_{2}{\widehat{W}}_{2} \tag{20}
+$$
+
+where ${\beta }_{2} > 0$ is also a tuning parameter. Substituting (19) and (20) into (18), we have
+
+$$
+{\dot{V}}_{2} \leq - {k}_{1}{\psi }_{e}^{2} - {k}_{2}{r}_{e}^{2} + \frac{1}{2}{\bar{\varepsilon }}_{2}^{2} + {\beta }_{2}{\widetilde{W}}_{2}^{\mathrm{T}}{\widehat{W}}_{2} + \frac{p\left( e\right) {r}_{e}}{{k}_{r}^{2} - {r}_{e}^{2}} + \frac{{r}_{e}{g}_{r}}{{k}_{r}^{2} - {r}_{e}^{2}}{\tau }_{r}^{ * } + \frac{{\psi }_{e}{r}_{d}^{ * }}{{k}_{\psi }^{2} - {\psi }_{e}^{2}} \tag{21}
+$$
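The feedforward law (19) and the adaptation law (20) can be sketched as a single control/adaptation step, discretized with an Euler step. The basis outputs, the gains, and ${g}_{r}$ are placeholder values (in the paper ${g}_{r}$ is unknown; it appears in (19) for analysis purposes):

```python
import numpy as np

# One control/adaptation step: the feedforward law (19) and an Euler
# step of the adaptation law (20).  Basis outputs, gains, and g_r are
# placeholders for illustration.
def tau_r_alpha(psi_e, r_e, W2_hat, sig_z2d, k_psi=1.2, k_r=1.5,
                k_2=6.0, g_r=2.0):
    # Feedforward control input (19).
    return -(1.0 / g_r) * ((k_r**2 - r_e**2) * k_2 * r_e
                           + (k_r**2 - r_e**2) * psi_e / (k_psi**2 - psi_e**2)
                           + W2_hat @ sig_z2d)

def adapt_W2(W2_hat, r_e, sig_z2, beta_2=4.0, k_r=1.5, dt=0.05):
    # Euler step of the adaptation law (20), with sigma-modification
    # damping -beta_2 * W2_hat.
    W2_dot = r_e / (k_r**2 - r_e**2) * sig_z2 - beta_2 * W2_hat
    return W2_hat + dt * W2_dot

W2 = np.zeros(4)
sig = np.array([0.3, -0.1, 0.2, 0.05])       # placeholder basis outputs
tau = tau_r_alpha(psi_e=0.1, r_e=0.05, W2_hat=W2, sig_z2d=sig)
W2 = adapt_W2(W2, r_e=0.05, sig_z2=sig)
print("tau_r_alpha:", round(float(tau), 4), " W2:", np.round(W2, 4))
```

Note how both factors $\left( {{k}_{r}^{2} - {r}_{e}^{2}}\right)$ in (19) shrink the feedback effort as the state nears its constraint, the signature of the BLF design.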
+
+By Young's inequality, we can get that
+
+$$
+{\widetilde{W}}_{2}^{\mathrm{T}}{\widehat{W}}_{2} = {\widetilde{W}}_{2}^{\mathrm{T}}\left( {{W}_{2} - {\widetilde{W}}_{2}}\right) = {\widetilde{W}}_{2}^{\mathrm{T}}{W}_{2} - {\widetilde{W}}_{2}^{\mathrm{T}}{\widetilde{W}}_{2} \leq \frac{1}{2}{W}_{2}^{\mathrm{T}}{W}_{2} - \frac{1}{2}{\widetilde{W}}_{2}^{\mathrm{T}}{\widetilde{W}}_{2} \tag{22}
+$$
+
+Therefore, combining (21) with (22), we obtain
+
+$$
+{\dot{V}}_{2} \leq - \underline{k}\parallel E{\parallel }^{2} + \frac{1}{2}{\bar{\varepsilon }}_{2}^{2} + \frac{1}{2}{\beta }_{2}{W}_{2}^{\mathrm{T}}{W}_{2} - \frac{1}{2}{\beta }_{2}{\widetilde{W}}_{2}^{\mathrm{T}}{\widetilde{W}}_{2} + \frac{p\left( e\right) {r}_{e}}{{k}_{r}^{2} - {r}_{e}^{2}} + \frac{{r}_{e}{g}_{r}}{{k}_{r}^{2} - {r}_{e}^{2}}{\tau }_{r}^{ * } + \frac{{\psi }_{e}{r}_{d}^{ * }}{{k}_{\psi }^{2} - {\psi }_{e}^{2}} \tag{23}
+$$
+
+where $E = {\left\lbrack {\psi }_{e},{r}_{e}\right\rbrack }^{\mathrm{T}}$ and $\underline{k} = \min \left( {{k}_{1},{k}_{2}}\right)$ .
+
+In previous research, the feedforward controller ${\tau }_{r}^{\alpha }$ was calculated based on the derivative of the virtual controller, ${\dot{r}}_{d}^{\alpha }$ . However, in practical applications, it is not easy to obtain an analytical expression for ${\dot{r}}_{d}^{\alpha }$ . Our proposed method sets ${r}_{d} = {r}_{d}^{\alpha } + {r}_{d}^{ * }$ , where both components are obtained from the NN weights and the system state, and the derivative of the virtual controller is approximated by the NNs in (11). With this approach, the feedforward controllers ${r}_{d}^{\alpha }$ and ${\tau }_{r}^{\alpha }$ can be computed directly from the current system state, without the derivative of the virtual controller required in previous studies. Consequently, the proposed method is more feasible to implement in practical applications.
+
+Rewriting (23) as follows:
+
+$$
+{\dot{V}}_{2} \leq - \underline{k}\parallel E{\parallel }^{2} + \frac{1}{2}{\bar{\varepsilon }}_{2}^{2} + \frac{1}{2}{\beta }_{2}{W}_{2}^{\mathrm{T}}{W}_{2} - \frac{1}{2}{\beta }_{2}{\widetilde{W}}_{2}^{\mathrm{T}}{\widetilde{W}}_{2} + \frac{{E}^{\mathrm{T}}}{{\widetilde{E}}^{\mathrm{T}}}\left( {\left\lbrack \begin{matrix} 0 \\ p\left( e\right) \end{matrix}\right\rbrack + \left\lbrack \begin{matrix} 1 & 0 \\ 0 & {g}_{r} \end{matrix}\right\rbrack \left\lbrack \begin{matrix} {r}_{d}^{ * } \\ {\tau }_{r}^{ * } \end{matrix}\right\rbrack }\right) \tag{24}
+$$
+
+where $\widetilde{E} = {\left\lbrack {k}_{\psi }^{2} - {\psi }_{e}^{2},{k}_{r}^{2} - {r}_{e}^{2}\right\rbrack }^{\mathrm{T}}$ and the division in (24) is elementwise. The feedforward controller is expressed as ${U}^{\alpha } = \left\lbrack {{r}_{d}^{\alpha },{\tau }_{r}^{\alpha }}\right\rbrack$ . The feedback optimal controller ${U}^{ * } = \left\lbrack {{r}_{d}^{ * },{\tau }_{r}^{ * }}\right\rbrack$ will be designed in the next subsection. Together, they constitute the controller of the entire system.
+
+§ B. FEEDBACK OPTIMAL CONTROLLER DESIGN
+
+According to (24), a feedforward controller ${U}^{\alpha }$ alone cannot guarantee the stability of the entire closed-loop system. Therefore, to stabilize the last term in (24), a feedback optimal controller is designed based on ADP theory. This design not only optimizes the tracking performance of the system but also ensures its stability.
+
+The last term in (24) can be written as:
+
+$$
+\dot{E} = \left\lbrack \begin{matrix} 0 \\ p\left( e\right) \end{matrix}\right\rbrack + \left\lbrack \begin{matrix} 1 & 0 \\ 0 & {g}_{r} \end{matrix}\right\rbrack {U}^{ * } \tag{25}
+$$
+
+Further, it can be obtained that
+
+$$
+\dot{E} = P\left( E\right) + G\widehat{U} \tag{26}
+$$
+
+where $E = {\left\lbrack {\psi }_{e},{r}_{e}\right\rbrack }^{\mathrm{T}}$ collects the heading angle error and yaw rate error, $P\left( E\right) = {\left\lbrack 0,p\left( e\right) \right\rbrack }^{\mathrm{T}}$ , and $G = \operatorname{diag}\left( {1,{g}_{r}}\right)$ .
+
+According to ADP theory, the performance index function can be defined as
+
+$$
+J\left( E\right) = {\int }_{t}^{\infty }\left( {{E}^{\mathrm{T}}{QE} + {\widehat{U}}^{\mathrm{T}}R\widehat{U}}\right) \mathrm{d}\tau \tag{27}
+$$
+
+where $Q \in {\mathbb{R}}^{2 \times 2}$ and $R \in {\mathbb{R}}^{2 \times 2}$ are positive definite matrices. The Hamiltonian function of the performance index function can be defined as
+
+$$
+H\left( {E,\widehat{U},\nabla J\left( E\right) }\right) = {E}^{\mathrm{T}}{QE} + {\widehat{U}}^{\mathrm{T}}R\widehat{U} + \nabla J{\left( E\right) }^{\mathrm{T}}\left( {P\left( E\right) + G\widehat{U}}\right) \tag{28}
+$$
+
+where $\nabla J\left( E\right) = \frac{\partial J\left( E\right) }{\partial E}$ denotes the derivative of $J\left( E\right)$ with respect to $E$ . Minimizing the Hamiltonian with respect to $\widehat{U}$ yields the feedback optimal control
+
+$$
+{U}^{ * }\left( E\right) = - \frac{1}{2}{R}^{-1}{G}^{\mathrm{T}}\nabla {J}^{ * }\left( E\right) \tag{29}
+$$
+
+From (28), the optimal performance index function ${J}^{ * }\left( E\right)$ satisfies
+
+$$
+\mathop{\min }\limits_{{\widehat{U}\left( E\right) }}H\left( {E,\widehat{U},\nabla {J}^{ * }\left( E\right) }\right) = 0 \tag{30}
+$$
+
+Substituting (29) into (28) and applying (30), the HJB equation can be rewritten as follows:
+
+$$
+{E}^{\mathrm{T}}{QE} + {\left( \nabla {J}^{ * }\left( E\right) \right) }^{\mathrm{T}}P\left( E\right) - \frac{1}{4}{\left( \nabla {J}^{ * }\left( E\right) \right) }^{\mathrm{T}}G{R}^{-1}{G}^{\mathrm{T}}\nabla {J}^{ * }\left( E\right) = 0 \tag{31}
+$$
+
+Equation (31) is a nonlinear partial differential equation, so an analytical solution is difficult to obtain. To address this issue, ADP theory is adopted: a single-layer NN is constructed to approximate the optimal performance index function as
+
+$$
+{J}^{ * }\left( E\right) = {W}_{c}^{\mathrm{T}}\sigma \left( E\right) + {\varepsilon }_{c}\left( E\right) \tag{32}
+$$
+
+where ${W}_{c}$ denotes the optimal weight vector of the critic NN, $\sigma \left( \cdot \right)$ is the activation function, and ${\varepsilon }_{c}\left( E\right)$ is the critic NN approximation error.
+
+The gradient of the optimal performance index function ${J}^{ * }\left( E\right)$ with respect to $E$ is
+
+$$
+\nabla {J}^{ * }\left( E\right) = {\left( \nabla \sigma \left( E\right) \right) }^{\mathrm{T}}{W}_{c} + \nabla {\varepsilon }_{c}\left( E\right) \tag{33}
+$$
+
+Substituting (32) and (33) into (29), we obtain
+
+$$
+{U}^{ * }\left( E\right) = - \frac{1}{2}{R}^{-1}{G}^{\mathrm{T}}{\left( \nabla \sigma \left( E\right) \right) }^{\mathrm{T}}{W}_{c} - \frac{1}{2}{R}^{-1}{G}^{\mathrm{T}}\nabla {\varepsilon }_{c}\left( E\right) \tag{34}
+$$
+
+Therefore, the HJB equation can be further written as
+
+$$
+H\left( {E,{U}^{ * },{W}_{c}}\right) = {E}^{\mathrm{T}}{QE} + {W}_{c}^{\mathrm{T}}\nabla \sigma \left( E\right) P\left( E\right) + {\varepsilon }_{HJB} - \frac{1}{4}{W}_{c}^{\mathrm{T}}\nabla \sigma \left( E\right) G{R}^{-1}{G}^{\mathrm{T}}{\left( \nabla \sigma \left( E\right) \right) }^{\mathrm{T}}{W}_{c} = 0 \tag{35}
+$$
+
+where ${\varepsilon }_{HJB}$ is the residual error introduced by the NN approximation.
+
+The desired weights of the performance index function are estimated with NNs as follows:
+
+$$
+\widehat{J}\left( E\right) = {\widehat{W}}_{c}^{\mathrm{T}}\sigma \left( E\right) \tag{36}
+$$
+
+where ${\widehat{W}}_{c}$ and $\widehat{J}\left( E\right)$ are estimates of ${W}_{c}$ and $J\left( E\right)$ , respectively. Defining the weight estimation error as ${\widetilde{W}}_{c} = {W}_{c} - {\widehat{W}}_{c}$ , the estimate of the optimal control ${U}^{ * }$ can be designed as
+
+$$
+\bar{U}\left( E\right) = - \frac{1}{2}{R}^{-1}{G}^{\mathrm{T}}{\left( \nabla \sigma \left( E\right) \right) }^{\mathrm{T}}{\widehat{W}}_{c} \tag{37}
+$$
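The approximate control (37) is directly computable once the critic weights are available. A minimal Python sketch, assuming the critic activation $\sigma \left( E\right) = \left\lbrack {{e}_{1},{e}_{1}^{2},{e}_{2},{e}_{2}^{2},{e}_{1}{e}_{2}}\right\rbrack$ used in Section IV (function names are illustrative):

```python
import numpy as np

def grad_sigma(E):
    """Jacobian of sigma(E) = [e1, e1^2, e2, e2^2, e1*e2], shape (5, 2)."""
    e1, e2 = E
    return np.array([[1, 0], [2 * e1, 0], [0, 1], [0, 2 * e2], [e2, e1]], float)

def approx_control(E, W_hat, G, R):
    """Eq. (37): U_bar = -1/2 R^{-1} G^T (grad sigma)^T W_hat."""
    return -0.5 * np.linalg.inv(R) @ G.T @ grad_sigma(E).T @ W_hat
```

For instance, with $E = {\left\lbrack 0.1, 0.2 \right\rbrack }^{\mathrm{T}}$ , unit critic weights, $G = \mathrm{I}$ , and $R = {0.25}\mathrm{I}$ , the sketch returns $\bar{U} = {\left\lbrack -2.8, -3.0 \right\rbrack }^{\mathrm{T}}$ .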
+
+Then, the HJB equation can be approximated as
+
+$$
+H\left( {E,\bar{U},{W}_{c}}\right) = {E}^{\mathrm{T}}{QE} + {W}_{c}^{\mathrm{T}}\nabla \sigma \left( E\right) P\left( E\right) - \frac{1}{4}{\widehat{W}}_{c}^{\mathrm{T}}\nabla \sigma \left( E\right) G{R}^{-1}{G}^{\mathrm{T}}{\left( \nabla \sigma \left( E\right) \right) }^{\mathrm{T}}{\widehat{W}}_{c} = {e}_{c} \tag{38}
+$$
+
+The objective error function of critic NNs is defined as
+
+$$
+{E}_{c} = \frac{1}{2}{e}_{c}^{2} \tag{39}
+$$
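Since ${W}_{c}$ is unknown, the residual in (38)-(39) is evaluated in practice with the estimate ${\widehat{W}}_{c}$ throughout. A sketch under that substitution (same assumed activation as in Section IV; `P_fn` is a caller-supplied drift function):

```python
import numpy as np

def bellman_residual(E, W_hat, P_fn, G, Q, R):
    """e_c = E^T Q E + U_bar^T R U_bar + W_hat^T grad_sigma(E) (P(E) + G U_bar),
    i.e. (38) with the unknown W_c replaced by its estimate W_hat."""
    e1, e2 = E
    dsig = np.array([[1, 0], [2 * e1, 0], [0, 1], [0, 2 * e2], [e2, e1]], float)
    U_bar = -0.5 * np.linalg.inv(R) @ G.T @ dsig.T @ W_hat   # eq. (37)
    return float(E @ Q @ E + U_bar @ R @ U_bar
                 + W_hat @ dsig @ (P_fn(E) + G @ U_bar))
```

With zero weights the control vanishes and the residual reduces to the state cost ${E}^{\mathrm{T}}{QE}$ , which is a useful sanity check.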
+
+Following [18], we design an appropriate critic NN updating law that guarantees the convergence of ${\widehat{W}}_{c}$ to ${W}_{c}$ while minimizing the objective error function (39):
+
+$$
+{\dot{\widehat{W}}}_{c} = - {k}_{c}\frac{\Gamma }{{\left( 1 + {\Gamma }^{\mathrm{T}}\Gamma \right) }^{2}}{e}_{c} + \frac{{k}_{c}}{2}\Delta \nabla \sigma \left( E\right) G\nabla V\left( E\right) + \frac{{k}_{c}}{4}\frac{\Gamma }{{\left( 1 + {\Gamma }^{\mathrm{T}}\Gamma \right) }^{2}}{\widehat{W}}_{c}^{\mathrm{T}}\nabla \sigma \left( E\right) G{\left( \nabla \sigma \left( E\right) \right) }^{\mathrm{T}}{\widehat{W}}_{c} + {k}_{c}\left( {{K}_{1}{\zeta }^{\mathrm{T}}{\widehat{W}}_{c} - {K}_{2}{\widehat{W}}_{c}}\right) \tag{40}
+$$
+
+where ${k}_{c} > 0$ is a tuning parameter, $\Gamma = \nabla \sigma \left( E\right) \left( {P\left( E\right) + G\bar{U}}\right)$ , $\zeta = \frac{\Gamma }{1 + {\Gamma }^{\mathrm{T}}\Gamma }$ , and ${K}_{1}$ and ${K}_{2}$ are tuning parameters. $\Delta$ is designed as
+
+$$
+\Delta = \left\{ \begin{array}{l} 0,{\left( \nabla V\left( E\right) \right) }^{\mathrm{T}}\left( {P\left( E\right) + G\bar{U}}\right) < 0 \\ 1,\text{ else } \end{array}\right. \tag{41}
+$$
+
+where $V\left( E\right)$ is a Lyapunov function. From this, we can obtain
+
+$$
+\dot{V}\left( E\right) = {\left( \nabla V\left( E\right) \right) }^{\mathrm{T}}\dot{E} = {\left( \nabla V\left( E\right) \right) }^{\mathrm{T}}\left( {P\left( E\right) + G{U}^{ * }}\right) = - {\left( \nabla V\left( E\right) \right) }^{\mathrm{T}}S\nabla V\left( E\right) \leq 0 \tag{42}
+$$
+
+where $S$ is a positive definite matrix. Specifically, $V\left( E\right)$ is a function of the state variable $E$ and can be chosen appropriately, for example, $V\left( E\right) = {E}^{\mathrm{T}}E$ .
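The update law (40)-(41) can be sketched as follows. This is an illustrative reading, not the definitive implementation: it assumes $V\left( E\right) = {E}^{\mathrm{T}}E$ , treats ${K}_{1}\mathrm{I}$ and ${K}_{2}\mathrm{I}$ as scalar gains, and interprets ${K}_{1}{\zeta }^{\mathrm{T}}{\widehat{W}}_{c}$ as ${K}_{1}\zeta \left( {\zeta }^{\mathrm{T}}{\widehat{W}}_{c}\right)$ so that every term has the dimension of the weight vector.

```python
import numpy as np

def critic_update(W_hat, E, P_fn, G, Q, R, k_c=3.8, K1=1e-4, K2=1e-5):
    """Sketch of the critic update law (40)-(41) with V(E) = E^T E."""
    e1, e2 = E
    # gradient of the activation sigma(E) = [e1, e1^2, e2, e2^2, e1*e2]
    dsig = np.array([[1, 0], [2 * e1, 0], [0, 1], [0, 2 * e2], [e2, e1]], float)
    U_bar = -0.5 * np.linalg.inv(R) @ G.T @ dsig.T @ W_hat      # eq. (37)
    Edot = P_fn(E) + G @ U_bar
    Gamma = dsig @ Edot                                         # regressor
    m = (1.0 + Gamma @ Gamma) ** 2
    e_c = E @ Q @ E + U_bar @ R @ U_bar + W_hat @ Gamma         # Bellman residual
    gradV = 2.0 * E                                             # V(E) = E^T E
    Delta = 0.0 if gradV @ Edot < 0 else 1.0                    # eq. (41)
    zeta = Gamma / (1.0 + Gamma @ Gamma)
    return (-k_c * Gamma / m * e_c                              # gradient-descent term
            + 0.5 * k_c * Delta * dsig @ G @ gradV              # stabilizing term
            + 0.25 * k_c * Gamma / m
              * (W_hat @ dsig @ G @ dsig.T @ W_hat)             # quadratic correction
            + k_c * (K1 * zeta * (zeta @ W_hat) - K2 * W_hat))  # boundedness terms
```

In a simulation loop, the weights would be advanced by Euler integration, `W_hat += dt * critic_update(...)`, alongside the state update.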
+
+Remark 1: The update law for the weights ${\widehat{W}}_{c}$ consists of four components: the first employs gradient descent; the second ensures the boundedness of the weights; the third and fourth guarantee the stability of the weight dynamics. With this design, the proposed control strategy achieves high tracking accuracy while ensuring rapid and stable updates of the neural network weights.
+
+§ IV. SIMULATION
+
+The model parameters of the unmanned sailboat are taken from [10]. To facilitate the simulation analysis without loss of generality, the reference heading is set to ${\psi }_{d} = \sin \left( t\right)$ . The control parameters are selected as ${k}_{\psi } = {1.2},{k}_{r} = {1.5},{k}_{1} = 3,{k}_{2} = 6,{k}_{c} = {3.8},{\beta }_{2} = 4,{K}_{1} = {0.0001}\mathrm{I},{K}_{2} = {0.00001}\mathrm{I},Q = \mathrm{I}$ , $R = {0.25}\mathrm{I}$ . The time step is 0.05 s. The initial states are $\psi \left( 0\right) = {0.05}$ and $r\left( 0\right) = 0$ . The activation function of the critic network is chosen as $\sigma \left( E\right) = \left\lbrack {{e}_{1},{e}_{1}^{2},{e}_{2},{e}_{2}^{2},{e}_{1}{e}_{2}}\right\rbrack$ , and the network weights are initialized randomly in $\left\lbrack {0,1}\right\rbrack$ . Real ocean and wind disturbances are simulated using first-order Markov perturbations.
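The first-order Markov perturbations mentioned above can be generated with a simple Gauss-Markov model; the correlation time and noise level below are illustrative choices, not values from the paper:

```python
import numpy as np

def markov_disturbance(n_steps, dt=0.05, tau=10.0, sigma_w=0.1, seed=0):
    """First-order Markov (Gauss-Markov) perturbation d(t):
    dot(d) = -d/tau + w, with white noise w, Euler-discretized at step dt.
    tau (correlation time) and sigma_w (noise level) are illustrative."""
    rng = np.random.default_rng(seed)
    d = np.zeros(n_steps)
    for k in range(n_steps - 1):
        d[k + 1] = d[k] + dt * (-d[k] / tau + sigma_w * rng.standard_normal())
    return d
```

The resulting sequence is a correlated, bounded-variance disturbance that can be added to the yaw dynamics at each simulation step.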
+
+To verify the superiority of the proposed strategy, we compare it with the "LSBG" strategy of [10]. Fig. 1 shows the real-time heading angle tracking curve: the designed optimal control method tracks the reference signal with smaller errors and within the state constraints. The heading angular velocity tracking and its state constraints are shown in Fig. 2. Figs. 3 and 4 illustrate the error curves of the heading angle and heading angular velocity, indicating that the proposed strategy achieves better tracking performance than the "LSBG" strategy. Fig. 5 displays the control input ${\tau }_{r}$ under the "LSBG" strategy, the backstepping feedforward control, the optimal feedback control, and the overall control under the proposed strategy. Fig. 6 shows the update curves of the critic network weights; the online learning stabilizes in a very short time. From the above analysis, the proposed strategy not only ensures better tracking accuracy but also prevents the system states from violating their constraints.
+
+§ V. CONCLUSION
+
+In this paper, an optimal control method based on ADP is proposed for the tracking control of unmanned sailboats with heading turning constraints. The proposed LBF-based method solves the problem of state constraints. The feedforward backstepping controller and the feedback optimal controller were designed using the backstepping method and ADP theory, respectively. The learning of the critic NNs was accelerated through an online learning strategy. Finally, simulations verified the optimality of the proposed strategy. In the future, we will apply this method to the path-tracking task of unmanned sailboats in practice.
+
+
+Fig. 1. Comparison of heading angle tracking under different strategies.
+
+
+Fig. 2. Comparison of heading angle speed tracking under different strategies.
+
+
+Fig. 3. Comparison of heading angle error under different strategies.
+
+
+Fig. 4. Comparison of heading angle speed error under different strategies.
+
+
+Fig. 5. Control input ${\tau }_{r}$ under "LSBG" strategy, feedforward control input ${\tau }_{r}^{\alpha }$ , feedback control input ${\tau }_{r}^{ * }$ and optimal control input ${\tau }_{r}$ .
\ No newline at end of file