# Tactile Sensing and its Role in Learning and Deploying Robotic Grasping Controllers
Alexander Koenig ${}^{1,2}$ , Zixi Liu ${}^{2}$ , Lucas Janson ${}^{3}$ and Robert Howe ${}^{2,4}$
Abstract—A long-standing question in robot hand design is how accurate tactile sensing must be. This paper uses simulated tactile signals and the reinforcement learning (RL) framework to study the sensing needs of grasping systems. Our first experiment investigates the need for rich tactile sensing in the rewards of RL-based grasp refinement algorithms for multi-fingered robotic hands. We systematically integrate different levels of tactile data into the rewards using analytic grasp stability metrics. We find that combining information on contact positions, normals, and forces in the reward yields the highest average success rates of ${95.4}\%$ for cuboids, ${93.1}\%$ for cylinders, and ${62.3}\%$ for spheres across wrist position errors between 0 and 7 centimeters and rotational errors between 0 and 14 degrees. This contact-based reward outperforms a non-tactile binary-reward baseline by ${42.9}\%$ . Our follow-up experiment shows that when training with tactile-enabled rewards, the use of tactile information in the control policy's state vector can be drastically reduced, with only a slight performance decrease of at most ${6.6}\%$ when no tactile sensing is included in the state. Since policies do not require access to the reward signal at test time, our work implies that models trained on tactile-enabled hands are deployable to robotic hands with a smaller sensor suite, potentially reducing cost dramatically.
## I. INTRODUCTION
Tactile sensing provides information about local object geometry, surface properties, contact forces, and grasp stability [1]. Hence, tactile sensors can be a valuable tool in contact-rich scenarios such as robotic grasp refinement [2] where a grasping system recovers from calibration errors. Computer vision approaches for grasp refinement often face limitations due to the occlusion of contact events. Tactile sensors can be expensive and fragile hardware components. Hence, for cost-effective robotic hand design, it is essential to understand when robot hands need precise sensing and how accurate it should be to achieve good grasping performance.
A few research papers investigated the effect of tactile sensor resolution on grasp success. Wan et al. [3] found that reduced spatial resolution of tactile sensors negatively impacts grasp success since inaccuracies in contact position and normal sensing can influence grasp stability predictions. Other works analyzed the effect of contact sensor resolution on grasp performance in the context of reinforcement learning. In simulated experiments, Merzić et al. [4] found that contact feedback in a policy's state vector improves the performance of RL-based grasping controllers, and [5], [6] presented similar results for in-hand manipulation. However, [5], [6] also concluded that models trained with binary contact signals perform equally well as models that receive accurate normal force information. Furthermore, [5], [6] found that tactile resolution (92 vs. 16 sensors) has no noticeable effect on performance and sample efficiency of reinforcement learned manipulation controllers.
Fig. 1: The hypothesized workflow for training and deploying RL-controlled grasping systems. First, train a policy $\pi \left( {\mathbf{a} \mid \mathbf{s}}\right)$ on a hand ${H}_{f}$ with a full tactile sensor suite (e.g., contact position, normal and force sensors) where the grasp quality metrics are available as a reward ${r}_{f}$ to learn a task, but only provide a subset of the available contact data in the state vector ${\mathbf{s}}_{r}$ . Afterwards, deploy the policy to many structurally similar hands ${H}_{r}$ with a reduced sensor set to save cost.
In this paper, we use accurate tactile signals from simulation and the reinforcement learning framework to explore the tactile sensing needs of robotic grasping systems. RL algorithms aim to produce a policy $\pi \left( {\mathbf{a} \mid \mathbf{s}}\right)$ that outputs actions $\mathbf{a}$ given state information $\mathbf{s}$ such that the cumulative reward signal $r$ is maximized. The reward function is a critical part of every RL algorithm [7]. While the previous work in [4], [5], [6] only studied tactile resolution in the policy's state, our first contribution investigates the impact of tactile information in the reward signal. We propose a unified framework to systematically incorporate different levels of tactile information from robotic hands into a reward signal via analytic grasp stability metrics. We conduct grasp refinement experiments with the two types of quality metrics discussed in Section II: $\epsilon$ [8], calculated from contact positions and normals, and a contact force-based reward $\delta$ . In Section III, we estimate the relevance of contact position, normal, and force sensing for the reward signal by comparing the individual and combined performance of $\epsilon$ and $\delta$ .
---
This material is based upon work supported by the US National Science Foundation under Grant No. IIS-1924984 and by the German Academic Exchange Service. An extended paper including the material in this abstract has been submitted for publication.
${}^{1}$ Department of Informatics, Technical University of Munich
${}^{2}$ School of Engineering and Applied Sciences, Harvard University
${}^{3}$ Department of Statistics, Harvard University
${}^{4}$ RightHand Robotics, Inc., 237 Washington St, Somerville, MA 02143 USA. Robert Howe is the corresponding author: howe@seas.harvard.edu.
---
Calculating grasp stability metrics requires costly tactile sensing capabilities on physical grippers. However, the reward signal is only required during the training of policies but not while testing, which suggests that sensing needs in both stages could be different. We hypothesize in Fig. 1 that policies trained with grasp stability metrics on a robotic hand ${H}_{f}$ with a full tactile sensor suite are deployable to structurally similar but more affordable hands ${H}_{r}$ with reduced tactile sensing at a small performance decrease. Hence, our second experiment in Section IV gradually decreases tactile resolution in the state vector to find realistic training and deployment workflows for grasping algorithms.
## II. GRASP STABILITY METRICS
## A. Largest-minimum resisted forces and torques
Mirtich and Canny [8] define two quality metrics ${\epsilon }_{f}$ and ${\epsilon }_{\tau }$ that measure a grasp's ability to resist unit forces and torques, respectively. As discussed in [9], the friction cone constrains the contact force ${\mathbf{f}}_{i}$ at each contact $i$ ; it is discretized using $m$ edges ${\mathbf{f}}_{i, j}$ . The set of forces ${\mathcal{W}}_{f}$ that the contacts can apply to the object is ${\mathcal{W}}_{f} = \text{ConvexHull}\left( {\mathop{\bigcup }\limits_{{i = 1}}^{{n}_{c}}\left\{ {{\mathbf{f}}_{i,1},\ldots ,{\mathbf{f}}_{i, m}}\right\} }\right)$ , where ${n}_{c}$ is the number of contacts. Finally, the quality metric ${\epsilon }_{f} = \mathop{\min }\limits_{{\mathbf{f} \in \partial {\mathcal{W}}_{f}}}\parallel \mathbf{f}\parallel$ is the shortest distance from the origin to the nearest hyperplane of ${\mathcal{W}}_{f}$ . Hence, the metric defines a lower bound on the resisted force in all directions.
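To make the construction concrete, the following sketch (our own illustration, not the authors' code) computes ${\epsilon }_{f}$ with NumPy and SciPy, assuming unit normal forces and a friction coefficient $\mu$; all function names are ours:

```python
import numpy as np
from scipy.spatial import ConvexHull

def friction_cone_edges(normal, mu, m=8):
    """Discretize the friction cone at a contact with the given normal into m edges."""
    n = normal / np.linalg.norm(normal)
    # build an orthonormal tangent basis (t1, t2) for the contact plane
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(n, a); t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)
    angles = 2.0 * np.pi * np.arange(m) / m
    # edges for a unit normal force: n + mu * (cos(phi) t1 + sin(phi) t2)
    return [n + mu * (np.cos(p) * t1 + np.sin(p) * t2) for p in angles]

def epsilon_f(normals, mu=0.5, m=8):
    """Largest-minimum resisted force: distance from the origin to the
    boundary of the convex hull of all discretized friction cone edges."""
    pts = np.vstack([e for nrm in normals for e in friction_cone_edges(nrm, mu, m)])
    hull = ConvexHull(pts)
    # hull.equations rows are (unit normal, offset); for a hull containing the
    # origin every offset is negative, and -offset is the facet plane distance
    offsets = hull.equations[:, -1]
    if np.any(offsets > 0):
        return 0.0  # origin outside the hull: some direction cannot be resisted
    return float(np.min(-offsets))
```

The torque metric ${\epsilon }_{\tau }$ follows the same pattern with the cone edges replaced by the cross products ${\mathbf{r}}_{i} \times {\mathbf{f}}_{i, j}$ .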
This concept is easily extended to the torque domain. The reaction torque ${\mathbf{\tau }}_{i, j}$ resulting from a friction cone edge ${\mathbf{f}}_{i, j}$ is ${\mathbf{\tau }}_{i, j} = {\mathbf{r}}_{i} \times {\mathbf{f}}_{i, j}$ , where ${\mathbf{r}}_{i}$ is a vector pointing from the object's center of mass to the contact point ${\mathbf{p}}_{i}$ . Further, ${\mathcal{W}}_{\tau } = \text{ConvexHull}\left( {\mathop{\bigcup }\limits_{{i = 1}}^{{n}_{c}}\left\{ {{\mathbf{\tau }}_{i,1},\ldots ,{\mathbf{\tau }}_{i, m}}\right\} }\right)$ is the set of resisted torques. The metric ${\epsilon }_{\tau } = \mathop{\min }\limits_{{\mathbf{\tau } \in \partial {\mathcal{W}}_{\tau }}}\parallel \mathbf{\tau }\parallel$ evaluates the grasp's quality by identifying the magnitude of the largest-minimum resisted torque.
## B. Minimum distance to the friction cone
The quality metrics ${\epsilon }_{f}$ and ${\epsilon }_{\tau }$ analyze the forces that each contact can theoretically exert on the object. However, these metrics do not consider the actual contact forces that the contacts apply to the object. To this end, we define two force-based quality metrics ${\delta }_{\text{cur }}$ and ${\delta }_{\text{task }}$ .
Fig. 2: Grasp with current contact forces ${\mathbf{f}}_{i,{cur}}$ and tangential force margins ${\overline{\mathbf{f}}}_{i,{cur}}$ to the friction cones.
Similar to Buss et al. [10], we measure grasp stability in terms of how far the contact forces are from the friction limits. Fig. 2 shows a grasp with the current contact forces ${\mathbf{f}}_{i,{cur}}$ and the tangential force margins ${\overline{\mathbf{f}}}_{i,{cur}}$ . The vectors ${\overline{\mathbf{f}}}_{i,{cur}}$ are forces in the tangential direction that point from ${\mathbf{f}}_{i,{cur}}$ to the closest point on the friction cone, thereby identifying the direction in which the contact can take the least tangential force before slipping. A grasp with large tangential force margins ${\overline{\mathbf{f}}}_{i,{cur}}$ is desirable since the contacts are less prone to sliding when an object wrench is applied. Hence, the metric ${\delta }_{\text{cur }}$ measures the average magnitude of the safety margins $\begin{Vmatrix}{\overline{\mathbf{f}}}_{i,{cur}}\end{Vmatrix}$ across all contacts $i$ .
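For a point contact with friction, the magnitude of this margin can be computed directly from the decomposition of the contact force into normal and tangential components. The sketch below is our own illustration of that computation (names are ours), assuming a friction coefficient $\mu$:

```python
import numpy as np

def tangential_margin(f, n, mu):
    """Tangential force the contact can still take before slipping:
    mu * f_normal - ||f_tangential||, clipped at zero."""
    n = n / np.linalg.norm(n)
    f_n = np.dot(f, n)        # normal force component
    f_t = f - f_n * n         # tangential force component
    return max(mu * f_n - np.linalg.norm(f_t), 0.0)

def delta_cur(forces, normals, mu=0.5):
    """Average tangential safety margin across all contacts."""
    return float(np.mean([tangential_margin(f, n, mu)
                          for f, n in zip(forces, normals)]))
```

A purely normal force of magnitude 1 yields a margin of $\mu$, while a force on the cone boundary yields a margin of zero.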
The set of wrenches that the grasp must resist during task execution (e.g., the object's weight or wrenches from expected collisions) can often be estimated. Our task-oriented metric ${\delta }_{\text{task }}$ evaluates whether the current contact forces of a grasp are suitable to balance the anticipated task wrenches. We calculate the additional contact force ${\mathbf{f}}_{i,{add}}$ with which each contact $i$ must react to compensate a task wrench $\mathbf{w}$ via ${\mathbf{G}}^{ + }\mathbf{w} = {\left( \begin{array}{llll} {\mathbf{f}}_{1,{add}}^{T} & {\mathbf{f}}_{2,{add}}^{T} & \ldots & {\mathbf{f}}_{{n}_{c},{add}}^{T} \end{array}\right) }^{T}$ , where ${\mathbf{G}}^{ + }$ is the pseudoinverse of the grasp matrix as defined in [11]. The task contact force is ${\mathbf{f}}_{i,\text{ task }} = {\mathbf{f}}_{i,\text{ cur }} + {\mathbf{f}}_{i,\text{ add }}$ for each contact. Finally, ${\delta }_{\text{task }}$ computes the average magnitude of the tangential force margins $\begin{Vmatrix}{\overline{\mathbf{f}}}_{i,\text{ task }}\end{Vmatrix}$ of the task contact forces ${\mathbf{f}}_{i,\text{ task }}$ to the friction cone.
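The mapping from a task wrench to the additional per-contact forces can be sketched as follows. This is our own illustration using the standard point-contact grasp matrix (stacked identity and skew-symmetric cross-product blocks); whether [11] uses exactly this convention is an assumption:

```python
import numpy as np

def grasp_matrix(contact_positions, com):
    """6 x 3n grasp matrix G for point contacts: w = G f stacks the force
    contribution f_i and the torque contribution r_i x f_i about the center
    of mass, with r_i = p_i - com."""
    blocks = []
    for p in contact_positions:
        r = np.asarray(p, dtype=float) - np.asarray(com, dtype=float)
        rx = np.array([[0.0, -r[2], r[1]],
                       [r[2], 0.0, -r[0]],
                       [-r[1], r[0], 0.0]])
        blocks.append(np.vstack([np.eye(3), rx]))
    return np.hstack(blocks)

def additional_forces(contact_positions, com, task_wrench):
    """f_add = G^+ w: per-contact forces that compensate the task wrench."""
    G = grasp_matrix(contact_positions, com)
    f = np.linalg.pinv(G) @ np.asarray(task_wrench, dtype=float)
    return f.reshape(-1, 3)   # one 3-D force row per contact
```

The pseudoinverse distributes the wrench in a least-squares sense across the contacts; each resulting row is then added to ${\mathbf{f}}_{i,\text{ cur }}$ before evaluating the tangential margins.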
## III. TACTILE SENSING AND THE REWARD FUNCTION
## A. Train and Test Dataset
Each training sample consists of a tuple $(O, E)$ , where $O$ is the object and $E$ is the wrist pose error, sampled uniformly before every episode. There are three object types (cuboid, cylinder, and sphere) with a mass $\in \left\lbrack {{0.1},{0.4}}\right\rbrack \mathrm{{kg}}$ and randomly sampled sizes. Fig. 3 visualizes the minimum and maximum object dimensions. The wrist pose error $E$ consists of a translational and a rotational error. We uniformly sample each component of the translational error $\left( {{e}_{x},{e}_{y},{e}_{z}}\right)$ from $\left\lbrack {-5,5}\right\rbrack \mathrm{{cm}}$ and each component of the rotational error $\left( {{e}_{\xi },{e}_{\eta },{e}_{\zeta }}\right)$ from $\left\lbrack {-{10},{10}}\right\rbrack$ deg.
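The sampling procedure above can be sketched as follows; this is a hypothetical illustration (the random size sampling is omitted because the exact size ranges are given only in Fig. 3):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_training_error():
    """One wrist pose error E: translational offsets in [-5, 5] cm and
    rotational offsets in [-10, 10] deg, each component sampled uniformly."""
    e_xyz = rng.uniform(-0.05, 0.05, size=3)               # meters
    e_rpy = np.deg2rad(rng.uniform(-10.0, 10.0, size=3))   # radians
    return e_xyz, e_rpy

def sample_object():
    """One object O: random type and a mass in [0.1, 0.4] kg."""
    obj_type = rng.choice(["cuboid", "cylinder", "sphere"])
    mass = rng.uniform(0.1, 0.4)
    return obj_type, mass
```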
Fig. 3: Minimum and maximum object sizes. We place the spheres on a concave mount to prevent rolling.
We define 8 different wrist error cases for the test dataset. Let $d\left( {a, b, c}\right) = \sqrt{{a}^{2} + {b}^{2} + {c}^{2}}$ be the L2 norm of the variables $(a, b, c)$ . Table I shows the wrist error cases, where case A corresponds to no error and case H to the maximum wrist error. The test dataset consists of 30 random objects $O$ (10 cuboids, 10 cylinders, and 10 spheres). Per object $O$ , we randomly generate the eight wrist error cases $\{ A, B,\ldots , H\}$ from Table I. Hence, we run ${30} \times 8 = {240}$ grasping experiments to test one model.
TABLE I: Wrist error cases
| Wrist Error Case | A | B | C | D | E | F | G | H |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $d\left( {{e}_{x},{e}_{y},{e}_{z}}\right)$ in cm | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| $d\left( {{e}_{\xi },{e}_{\eta },{e}_{\zeta }}\right)$ in deg | 0 | 2 | 4 | 6 | 8 | 10 | 12 | 14 |
Fig. 4: Overview of one algorithm episode. (A) Initialization of hand and object. (B) We split the grasp refinement algorithm into four stages and compare four reward frameworks: (1) $\epsilon$ and $\delta$ , (2) only $\delta$ , (3) only $\epsilon$ , and (4) the non-tactile binary reward baseline $\beta$ . The weighting factors ${\alpha }_{1} = 5$ and ${\alpha }_{2} = {0.5}$ were determined empirically.
## B. State and Action Space
The state vector $\mathbf{s}$ consists of 7 joint positions (1 finger separation, 3 proximal bending, and 3 distal bending degrees of freedom) and 7 contact cues (3 on the proximal links, 3 on the distal links, and 1 on the palm) that each include contact position, contact normal, and contact force, with 3 $(x, y, z)$ components apiece. The dimension of the state vector is therefore $\mathbf{s} \in {\mathbb{R}}^{7 + 7 \times \left( {3 \times 3}\right) = {70}}$ . Note that we do not assume any information about the object (e.g., object pose, geometry, or mass) in the state vector. The contact normals and positions are provided in the wrist frame, while the contact forces are represented in the contact frame. The action vector $\mathbf{a}$ consists of 3 finger position increments, 3 wrist position increments, and 3 wrist rotation increments, so $\mathbf{a} \in {\mathbb{R}}^{3 + 3 + 3 = 9}$ . The policy ${\pi }_{\mathbf{\theta }}$ is parametrized by a neural network with weights $\mathbf{\theta }$ : a multilayer perceptron with four layers of sizes (70, 256, 256, 9). We use the stable-baselines3 [12] implementation of the soft actor-critic (SAC) [13] algorithm and train for 25000 steps.
## C. Experimental Setup
We simulate the three-fingered ReFlex TakkTile hand (RightHand Robotics, Somerville, MA USA) using a custom Gazebo [14] simulation environment and the DART [15] physics engine. We model the under-actuated distal flexure [16] as a rigid link with two revolute joints (one between the proximal and one between the distal finger link). Further, we approximate the finger geometries as cuboids to reduce computational load. Our source code is available at github.com/axkoenig/grasp_refinement.
Fig. 5: Test results for reward frameworks.
Fig. 4 shows an overview of one training episode. In stage (A), we initialize the world: we randomly generate a new object and wrist error tuple $(O, E)$ (or select one from the test dataset). We assume a computer vision system and a grasp planner that produce a sideways-facing grasp at a fixed $5\mathrm{\;{cm}}$ offset from the object's center of mass. We add the wrist pose error $E$ to this grasp pose to simulate calibration errors and close the fingers of the robotic hand in the erroneous wrist pose until the fingers make contact with the object. Then the grasp refinement episode (B) starts. We divide each episode into three stages, as displayed in Fig. 4. First, the policy ${\pi }_{\mathbf{\theta }}$ refines the grasp. Afterward, the agent lifts the object by ${15}\mathrm{\;{cm}}$ via hard-coded increments to the wrist's $z$ -position and holds the object in place to test the grasp's stability. The policy ${\pi }_{\mathbf{\theta }}$ can update the wrist and finger positions while lifting and holding. The control frequency of the policy in all stages is $3\mathrm{{Hz}}$ , while the update frequency of the low-level proportional-derivative (PD) controllers in the wrist and the fingers is ${100}\mathrm{\;{Hz}}$ .
As shown in the table of Fig. 4, we use the analytic grasp stability metrics from Section II as reward functions. We compare the following reward configurations: (1) both $\epsilon$ and $\delta$ , (2) only $\epsilon$ , (3) only $\delta$ , and (4) the baseline $\beta$ . Fig. 4 shows that $\delta$ refers to ${\delta }_{\text{task }}$ in the refine stage to measure expected grasp stability before lifting and to ${\delta }_{\text{cur }}$ in the lift and hold stages to measure current stability. Further, $\epsilon$ is a weighted combination of ${\epsilon }_{f}$ and ${\epsilon }_{\tau }$ . While the combined $\epsilon$ and $\delta$ framework, as well as the individual $\delta$ and $\epsilon$ frameworks, provide stability feedback after every algorithm step, the baseline $\beta$ gives a sparse reward after the holding stage, indicating whether the object is still in the hand (1) or not (0).
## D. Results and Discussion
For all experiments in this paper, we average over 40 models trained with different seeds for each framework. The error bars in all plots represent $\pm 2$ standard errors. Fig. 5 summarizes the performance on the test dataset. Our main observation is that combining the geometric grasp stability metric $\epsilon$ with the force-based metric $\delta$ yields the highest average success rate of ${83.6}\%$ across all objects (95.4% for cuboids, 93.1% for cylinders, and 62.3% for spheres) over all wrist errors. The $\epsilon$ and $\delta$ framework outperforms the binary reward framework $\beta$ by ${42.9}\%$ . The p-values for our results ${\mu }_{\epsilon \& \delta } > {\mu }_{\delta }$ , ${\mu }_{\epsilon \& \delta } > {\mu }_{\epsilon }$ , and ${\mu }_{\epsilon \& \delta } > {\mu }_{\beta }$ (where ${\mu }_{x}$ is the mean performance of framework $x$ ) are all $\ll {0.001}$ , so these results are statistically significant. We also notice that the combination of $\epsilon$ and $\delta$ is particularly helpful for spheres. The average performance of all frameworks on spheres is greatly reduced, and the algorithms trained with $\beta$ especially struggle to grasp spheres.
This study investigates the tactile sensing needs in the reward of RL grasping controllers by incorporating highly accurate contact information via analytic grasp stability metrics. The results demonstrate that information about contact positions and normals encoded in $\epsilon$ combines well with the force-based information in the $\delta$ reward. This result motivates building physical robotic hands capable of sensing these types of information. The low success rates for the spheres may be because they can roll and are therefore harder to grasp (cuboids and cylinders move comparatively less when touched by fingers or the palm). The $\beta$ framework performs worst after the defined number of training steps, which is unsurprising because shaped rewards are known to be more sample efficient than sparse rewards [17].
## IV. TACTILE SENSING AND THE STATE VECTOR
## A. Experimental Setup
In a second experiment, we investigate the effect of contact sensing resolution in the state vector on grasp refinement. We compare four contact sensing frameworks. The full contact sensing framework receives the same state vector $\mathbf{s} \in {\mathbb{R}}^{70}$ as in Section III-B. In the normal framework, we only provide the algorithm with the contact normal forces and omit the tangential forces ($\mathbf{s} \in {\mathbb{R}}^{56}$). In the binary framework, we only give a binary signal indicating whether a link is in contact (1) or not (0) ($\mathbf{s} \in {\mathbb{R}}^{56}$). Finally, in the none framework we solely provide the joint positions ($\mathbf{s} \in {\mathbb{R}}^{7}$). We adjust the size of the input layer of the neural network from Section III-B to match the size of the state vector of each framework. The reward function in these experiments is $\epsilon$ and $\delta$ from Fig. 4. Hence, all contact sensing frameworks receive contact information indirectly via the reward.
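The four frameworks can be pictured as projections of the full state vector. The sketch below is our own illustration under an assumed layout (7 joints, then 7 contacts with position, normal, and force triples; the 56-D variants keep positions and normals and replace the 3-D force with one scalar, which matches the stated dimensions but is not confirmed by the text):

```python
import numpy as np

def reduce_state(s, framework):
    """Reduce the assumed 70-D full state to the given sensing framework."""
    joints, per_contact = s[:7], s[7:].reshape(7, 9)
    if framework == "full":
        return s                      # s in R^70
    if framework == "none":
        return joints                 # s in R^7
    pos, nrm, force = per_contact[:, :3], per_contact[:, 3:6], per_contact[:, 6:9]
    if framework == "normal":
        # normal force only; assumed stored as the contact-frame z component
        scalar = force[:, 2:3]
    elif framework == "binary":
        # 1 if the link reports any contact force, else 0
        scalar = (np.linalg.norm(force, axis=1, keepdims=True) > 0).astype(s.dtype)
    return np.concatenate([joints, np.hstack([pos, nrm, scalar]).ravel()])  # R^56
```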
## B. Results and Discussion
In Fig. 6, we observe that the frameworks that receive contact feedback (full, normal, binary) outperform the none framework by ${6.3}\%$ , ${6.6}\%$ , and ${3.7}\%$ , respectively. Providing normal force information yields a performance increase of ${2.9}\%$ compared to the binary framework. However, training with the full contact force vectors only increases performance by ${2.6}\%$ compared to the binary framework. As expected, performance decreases for larger wrist errors. The results ${\mu }_{\text{normal }} > {\mu }_{\text{binary }}$ and ${\mu }_{\text{normal }} > {\mu }_{\text{none }}$ are statistically significant (p-values $\ll {0.001}$ ), while the result ${\mu }_{\text{normal }} > {\mu }_{\text{full }}$ is not (p-value 0.2232).
This experiment studies how contact sensing resolution in the policy's state vector relates to grasp success when training with fully contact-informed rewards. In doing so, we investigate the viability of our hypothesized training and deployment workflow in Fig. 1. The improvements of the normal force framework over the binary and none frameworks are small. The results suggest that an affordable binary contact sensor suite, or even no contact sensing at all, may be suitable if a small decrease in performance is tolerable. This result supports our hypothesis that RL grasping algorithms are deployable to hands with reduced contact sensor resolution at little performance decrease when incorporating rich tactile feedback at training time. The algorithms trained with the full force vector perform approximately on par with the ones that receive only normal force information. This could be due to three reasons. (1) The full force framework has the most network parameters and may require longer training. (2) The model fails to represent the concept of the friction cone internally; an alternative representation of the tangential forces could be a solution (e.g., providing a margin to the friction cone instead of a tangential force vector). (3) Simulated contact forces are prone to instability [18], especially when simulating robotic grasping [19].
Fig. 6: Test results for contact sensing frameworks.
## V. CONCLUSION
This paper investigated the importance of tactile signals in the reward and the policy's state vector to identify the tactile sensing needs in RL-based grasping algorithms. We found that rewards incorporating contact positions, normals, and forces are the most powerful optimization objectives for RL grasp refinement controllers. While this tactile information is essential in the reward function, we uncovered that reducing contact sensor resolution in the policy's state vector decreases algorithm performance only by a small amount. This result has implications for the design of physical grippers and their training and deployment workflows.
|
| 116 |
+
|
| 117 |
+
In future work, we aim to build physical robotic hands with advanced sensing capabilities to calculate grasp metrics. Secondly, we want to test the proposed training and deployment workflow, providing only limited contact information in the state vector and testing the algorithm on other robotic hands.
## REFERENCES

[1] M. R. Cutkosky and W. Provancher, "Force and tactile sensing," in Springer Handbook of Robotics. Springer, 2016, pp. 717-736.

[2] A. M. Dollar, L. P. Jentoft, J. H. Gao, and R. D. Howe, "Contact sensing and grasping performance of compliant hands," Autonomous Robots, vol. 28, no. 1, pp. 65-75, 2010.

[3] Q. Wan and R. D. Howe, "Modeling the effects of contact sensor resolution on grasp success," IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 1933-1940, 2018.

[4] H. Merzić, M. Bogdanović, D. Kappler, L. Righetti, and J. Bohg, "Leveraging contact forces for learning to grasp," in 2019 International Conference on Robotics and Automation (ICRA), 2019, pp. 3615-3621.

[5] A. Melnik, L. Lach, M. Plappert, T. Korthals, R. Haschke, and H. Ritter, "Tactile sensing and deep reinforcement learning for in-hand manipulation tasks," in IROS Workshop on Autonomous Object Manipulation, 2019.

[6] A. Melnik, L. Lach, M. Plappert, T. Korthals, R. Haschke, and H. Ritter, "Using tactile sensing to improve the sample efficiency and performance of deep deterministic policy gradients for simulated in-hand manipulation tasks," Frontiers in Robotics and AI, vol. 8, p. 57, 2021.

[7] D. Silver, S. Singh, D. Precup, and R. S. Sutton, "Reward is enough," Artificial Intelligence, vol. 299, p. 103535, 2021.

[8] B. Mirtich and J. Canny, "Easily computable optimum grasps in 2-d and 3-d," in Proceedings of the 1994 IEEE International Conference on Robotics and Automation. IEEE, 1994, pp. 739-747.

[9] I. Kao, K. M. Lynch, and J. W. Burdick, "Contact modeling and manipulation," in Springer Handbook of Robotics. Springer, 2016, pp. 931-954.

[10] M. Buss, H. Hashimoto, and J. Moore, "Dextrous hand grasping force optimization," IEEE Transactions on Robotics and Automation, vol. 12, no. 3, pp. 406-418, 1996.

[11] D. Prattichizzo and J. C. Trinkle, "Grasping," in Springer Handbook of Robotics. Springer, 2016, pp. 955-988.

[12] A. Raffin, A. Hill, M. Ernestus, A. Gleave, A. Kanervisto, and N. Dormann, "Stable baselines3," https://github.com/DLR-RM/stable-baselines3, 2019.

[13] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor," in Proceedings of the 35th International Conference on Machine Learning, 2018.

[14] N. Koenig and A. Howard, "Design and use paradigms for gazebo, an open-source multi-robot simulator," in IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, Sep 2004, pp. 2149-2154.

[15] J. Lee, M. X. Grey, S. Ha, T. Kunz, S. Jain, Y. Ye, S. S. Srinivasa, M. Stilman, and C. K. Liu, "Dart: Dynamic animation and robotics toolkit," Journal of Open Source Software, vol. 3, no. 22, p. 500, 2018.

[16] L. U. Odhner, L. P. Jentoft, M. R. Claffee, N. Corson, Y. Tenzer, R. R. Ma, M. Buehler, R. Kohout, R. D. Howe, and A. M. Dollar, "A compliant, underactuated hand for robust manipulation," The International Journal of Robotics Research, vol. 33, no. 5, pp. 736-752, 2014.

[17] A. Y. Ng, D. Harada, and S. Russell, "Policy invariance under reward transformations: Theory and application to reward shaping," in Proceedings of the Sixteenth International Conference on Machine Learning. Morgan Kaufmann, 1999, pp. 278-287.

[18] J. M. Hsu and S. C. Peters, "Extending open dynamics engine for the darpa virtual robotics challenge," in Proceedings of the 4th International Conference on Simulation, Modeling, and Programming for Autonomous Robots - Volume 8810, ser. SIMPAR 2014. Berlin, Heidelberg: Springer-Verlag, 2014, pp. 37-48.

[19] J. R. Taylor, E. M. Drumwright, and J. Hsu, "Analysis of grasping failures in multi-rigid body simulations," in 2016 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR), 2016, pp. 295-301.
papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/OqmWRIsvA4O/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,123 @@
§ TACTILE SENSING AND ITS ROLE IN LEARNING AND DEPLOYING ROBOTIC GRASPING CONTROLLERS
Alexander Koenig ${}^{1,2}$ , Zixi Liu ${}^{2}$ , Lucas Janson ${}^{3}$ and Robert Howe ${}^{2,4}$
Abstract- A long-standing question in robot hand design is how accurate tactile sensing must be. This paper uses simulated tactile signals and the reinforcement learning (RL) framework to study the sensing needs in grasping systems. Our first experiment investigates the need for rich tactile sensing in the rewards of RL-based grasp refinement algorithms for multi-fingered robotic hands. We systematically integrate different levels of tactile data into the rewards using analytic grasp stability metrics. We find that combining information on contact positions, normals, and forces in the reward yields the highest average success rates of ${95.4}\%$ for cuboids, ${93.1}\%$ for cylinders, and ${62.3}\%$ for spheres across wrist position errors between 0 and 7 centimeters and rotational errors between 0 and 14 degrees. This contact-based reward outperforms a non-tactile binary-reward baseline by ${42.9}\%$ . Our follow-up experiment shows that when training with tactile-enabled rewards, the tactile information in the control policy's state vector can be drastically reduced, with a performance decrease of at most ${6.6}\%$ when the state contains no tactile sensing at all. Since policies do not require access to the reward signal at test time, our work implies that models trained on tactile-enabled hands are deployable to robotic hands with a smaller sensor suite, potentially reducing cost dramatically.
§ I. INTRODUCTION
Tactile sensing provides information about local object geometry, surface properties, contact forces, and grasp stability [1]. Hence, tactile sensors can be a valuable tool in contact-rich scenarios such as robotic grasp refinement [2] where a grasping system recovers from calibration errors. Computer vision approaches for grasp refinement often face limitations due to the occlusion of contact events. Tactile sensors can be expensive and fragile hardware components. Hence, for cost-effective robotic hand design, it is essential to understand when robot hands need precise sensing and how accurate it should be to achieve good grasping performance.
A few research papers investigated the effect of tactile sensor resolution on grasp success. Wan et al. [3] found that reduced spatial resolution of tactile sensors negatively impacts grasp success since inaccuracies in contact position and normal sensing can influence grasp stability predictions. Other works analyzed the effect of contact sensor resolution on grasp performance in the context of reinforcement learning. In simulated experiments, Merzić et al. [4] found that contact feedback in a policy's state vector improves the performance of RL-based grasping controllers, and [5], [6] presented similar results for in-hand manipulation. However, [5], [6] also concluded that models trained with binary contact signals perform equally well as models that receive accurate normal force information. Furthermore, [5], [6] found that tactile resolution (92 vs. 16 sensors) has no noticeable effect on performance and sample efficiency of reinforcement learned manipulation controllers.
Fig. 1: The hypothesized workflow for training and deploying RL-controlled grasping systems. First, train a policy $\pi \left( {\mathbf{a} \mid \mathbf{s}}\right)$ on a hand ${H}_{f}$ with a full tactile sensor suite (e.g., contact position, normal and force sensors) where the grasp quality metrics are available as a reward ${r}_{f}$ to learn a task, but only provide a subset of the available contact data in the state vector ${\mathbf{s}}_{r}$ . Afterwards, deploy the policy to many structurally similar hands ${H}_{r}$ with a reduced sensor set to save cost.
In this paper, we use accurate tactile signals from simulation and the reinforcement learning framework to explore the tactile sensing needs in robotic systems. RL algorithms aim to produce a policy $\pi \left( {\mathbf{a} \mid \mathbf{s}}\right)$ that outputs actions $\mathbf{a}$ given state information $s$ such that the cumulative reward signal $r$ is maximized. The reward function is a critical part of every RL algorithm [7]. While the previous work in [4], [5], [6] only studied the tactile resolution in the policy's state, our first contribution investigates the impact of tactile information in the reward signal. We propose a unified framework to systematically incorporate different levels of tactile information from robotic hands into a reward signal via analytic grasp stability metrics. We conduct grasp refinement experiments on two types of quality metrics discussed in Section II: $\epsilon$ [8] calculated from contact positions and normals and a contact force-based reward $\delta$ . In Section III, we estimate the relevance of contact position, normal, and force sensing for the reward signal by comparing the individual and combined performance of $\epsilon$ and $\delta$ .
This material is based upon work supported by the US National Science Foundation under Grant No. IIS-1924984 and by the German Academic Exchange Service. An extended paper including the material in this abstract has been submitted for publication.
${}^{1}$ Department of Informatics, Technical University of Munich
${}^{2}$ School of Engineering and Applied Sciences, Harvard University
${}^{3}$ Department of Statistics, Harvard University
${}^{4}$ RightHand Robotics, Inc., 237 Washington St, Somerville, MA 02143 USA. Robert Howe is the corresponding author: howe@seas.harvard.edu.
Calculating grasp stability metrics requires costly tactile sensing capabilities on physical grippers. However, the reward signal is only required during the training of policies but not while testing, which suggests that sensing needs in both stages could be different. We hypothesize in Fig. 1 that policies trained with grasp stability metrics on a robotic hand ${H}_{f}$ with a full tactile sensor suite are deployable to structurally similar but more affordable hands ${H}_{r}$ with reduced tactile sensing at a small performance decrease. Hence, our second experiment in Section IV gradually decreases tactile resolution in the state vector to find realistic training and deployment workflows for grasping algorithms.
§ II. GRASP STABILITY METRICS
§ A. LARGEST-MINIMUM RESISTED FORCES AND TORQUES
Mirtich and Canny [8] define two quality metrics ${\epsilon }_{f}$ and ${\epsilon }_{\tau }$ that measure a grasp’s ability to resist unit forces and torques, respectively. As discussed in [9], the friction cone constrains the contact force ${\mathbf{f}}_{i}$ at each contact $i$ . It is discretized using $m$ edges ${\mathbf{f}}_{i,j}$ . The set of forces ${\mathcal{W}}_{f}$ that the contacts can apply to the object is ${\mathcal{W}}_{f} =$ ConvexHull $\left( {\mathop{\bigcup }\limits_{{i = 1}}^{{n}_{c}}\left\{ {{\mathbf{f}}_{i,1},\ldots ,{\mathbf{f}}_{i,m}}\right\} }\right)$ , where ${n}_{c}$ is the number of contacts. Finally, the quality metric ${\epsilon }_{f} =$ $\mathop{\min }\limits_{{\mathbf{f} \in \partial {\mathcal{W}}_{f}}}\parallel \mathbf{f}\parallel$ is the shortest distance from the origin to the nearest hyper-plane of ${\mathcal{W}}_{f}$ . Hence, the metric defines a lower bound on the resisted force in all directions.
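The ${\epsilon }_{f}$ metric reduces to a small geometric computation. Below is a minimal numerical sketch (not the authors' implementation), assuming unit-magnitude friction-cone edge forces, a force-closure grasp so the origin lies inside the hull, and SciPy's convex hull; the helper names are ours.

```python
import numpy as np
from scipy.spatial import ConvexHull

def cone_edges(normal, mu=0.5, m=8):
    """Discretize the friction cone at one contact into m edge forces."""
    n = normal / np.linalg.norm(normal)
    # Orthonormal tangent basis (t1, t2) for the contact plane.
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(n, a)
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)
    thetas = 2.0 * np.pi * np.arange(m) / m
    return [n + mu * (np.cos(t) * t1 + np.sin(t) * t2) for t in thetas]

def epsilon_f(contact_normals, mu=0.5, m=8):
    """Largest-minimum resisted force: the shortest distance from the origin
    to a facet of the convex hull of all friction-cone edge forces."""
    points = np.array([e for n in contact_normals for e in cone_edges(n, mu, m)])
    hull = ConvexHull(points)
    # hull.equations rows are [unit outward normal, offset]; interior points
    # satisfy normal . x + offset <= 0, so the origin-to-facet distance is -offset.
    return float(np.min(-hull.equations[:, -1]))

# Two antipodal contacts with friction can resist some force in every direction.
eps = epsilon_f([np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])])
```

The analogous ${\epsilon }_{\tau }$ computation replaces each edge force with its reaction torque ${\mathbf{r}}_{i} \times {\mathbf{f}}_{i,j}$ before building the hull.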
This concept extends naturally to the torque domain. The reaction torque ${\mathbf{\tau }}_{i,j}$ resulting from a friction cone edge ${\mathbf{f}}_{i,j}$ is ${\mathbf{\tau }}_{i,j} = {\mathbf{r}}_{i} \times {\mathbf{f}}_{i,j}$ , where ${\mathbf{r}}_{i}$ is a vector pointing from the object’s center of mass to the contact point ${\mathbf{p}}_{i}$ . Further, ${\mathcal{W}}_{\tau } =$ ConvexHull $\left( {\mathop{\bigcup }\limits_{{i = 1}}^{{n}_{c}}\left\{ {{\mathbf{\tau }}_{i,1},\ldots ,{\mathbf{\tau }}_{i,m}}\right\} }\right)$ is the set of resisted torques. The metric ${\epsilon }_{\tau } = \mathop{\min }\limits_{{\mathbf{\tau } \in \partial {\mathcal{W}}_{\tau }}}\parallel \mathbf{\tau }\parallel$ evaluates the grasp's quality by identifying the magnitude of the largest-minimum resisted torque.
§ B. MINIMUM DISTANCE TO THE FRICTION CONE
The quality metrics ${\epsilon }_{f}$ and ${\epsilon }_{\tau }$ analyze the forces that each contact can theoretically exert on the object. However, these metrics do not consider the actual contact forces that the contacts apply to the object. To this end, we define two force-based quality metrics ${\delta }_{\text{ cur }}$ and ${\delta }_{\text{ task }}$ .
Fig. 2: Grasp with current contact forces ${\mathbf{f}}_{i,{cur}}$ and tangential force margins ${\overline{\mathbf{f}}}_{i,{cur}}$ to the friction cones.
Similar to Buss et al. [10], we measure grasp stability in terms of how far the contact forces are from the friction limits. Fig. 2 shows a grasp with the current contact forces ${\mathbf{f}}_{i,{cur}}$ and the tangential force margins ${\overline{\mathbf{f}}}_{i,{cur}}$ . The vectors ${\overline{\mathbf{f}}}_{i,{cur}}$ are forces in the tangential direction that point from ${\mathbf{f}}_{i,{cur}}$ to the closest point on the friction cone, thereby identifying the direction in which the contact can take the least tangential force before slipping. A grasp with large tangential force margins ${\overline{\mathbf{f}}}_{i,{cur}}$ is desirable since the contacts are less prone to sliding when an object wrench is applied. Hence, the metric ${\delta }_{\text{ cur }}$ measures the average magnitude of the safety margins $\begin{Vmatrix}{\overline{\mathbf{f}}}_{i,{cur}}\end{Vmatrix}$ across all contacts $i$ .
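For a fixed normal force the margin has a closed form: with normal component ${f}_{n}$ and tangential component ${\mathbf{f}}_{t}$ , the least additional tangential force the contact can take is $\mu {f}_{n} - \parallel {\mathbf{f}}_{t}\parallel$ . A hedged sketch (the function names and the friction coefficient are our assumptions):

```python
import numpy as np

def tangential_margin(f, normal, mu=0.5):
    """Magnitude of the tangential force margin from the contact force f to
    the friction cone boundary (positive while the contact is not slipping)."""
    n = normal / np.linalg.norm(normal)
    f_n = float(np.dot(f, n))  # normal component of the contact force
    f_t = f - f_n * n          # tangential component
    return mu * f_n - float(np.linalg.norm(f_t))

def delta_cur(forces, normals, mu=0.5):
    """Average tangential safety margin across all contacts."""
    return float(np.mean([tangential_margin(f, n, mu)
                          for f, n in zip(forces, normals)]))

# A purely normal contact force keeps the whole friction budget, mu * ||f||.
margin = tangential_margin(np.array([2.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
```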
The set of wrenches that the grasp must resist during task execution (e.g., object weight or wrenches from expected collisions) can often be estimated. Our task-oriented metric ${\delta }_{\text{ task }}$ evaluates whether the current contact forces of a grasp are suitable to balance the anticipated task wrenches. We calculate the additional contact force ${\mathbf{f}}_{i,{add}}$ that each contact $i$ must react with to compensate a task wrench $w$ with ${\mathbf{G}}^{ + }\mathbf{w} = {\left( \begin{array}{llll} {\mathbf{f}}_{1,{add}}^{T} & {\mathbf{f}}_{2,{add}}^{T} & \ldots & {\mathbf{f}}_{{n}_{c},{add}}^{T} \end{array}\right) }^{T}$ , where ${\mathbf{G}}^{ + }$ is the pseudoinverse of the grasp matrix as defined in [11]. The task contact force is ${\mathbf{f}}_{i,\text{ task }} = {\mathbf{f}}_{i,\text{ cur }} + {\mathbf{f}}_{i,\text{ add }}$ for each contact. Finally, ${\delta }_{\text{ task }}$ computes the average magnitude of the tangential force margins $\begin{Vmatrix}{\overline{\mathbf{f}}}_{i,\text{ task }}\end{Vmatrix}$ of the task contact forces ${\mathbf{f}}_{i,\text{ task }}$ to the friction cone.
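The pseudoinverse step above can be sketched numerically, assuming frictional point contacts and forces expressed in a common frame about the center of mass; the names are ours, not the authors' code:

```python
import numpy as np

def grasp_matrix(contact_positions):
    """Grasp matrix G (6 x 3*n_c) for point contacts: stacked contact forces
    map to the net object wrench [force; torque] about the center of mass."""
    blocks = []
    for p in contact_positions:
        S = np.array([[0.0, -p[2], p[1]],
                      [p[2], 0.0, -p[0]],
                      [-p[1], p[0], 0.0]])  # cross-product matrix, torque = r x f
        blocks.append(np.vstack([np.eye(3), S]))
    return np.hstack(blocks)

def additional_forces(contact_positions, wrench):
    """Minimum-norm contact forces f_add compensating a task wrench w,
    computed with the pseudoinverse of the grasp matrix."""
    f = np.linalg.pinv(grasp_matrix(contact_positions)) @ wrench
    return f.reshape(-1, 3)  # one 3-vector per contact

# Two antipodal contacts sharing a 2 N upward task force.
contacts = [np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])]
w = np.array([0.0, 0.0, 2.0, 0.0, 0.0, 0.0])
f_add = additional_forces(contacts, w)
```

The resulting ${\mathbf{f}}_{i,{add}}$ are then added to the sensed ${\mathbf{f}}_{i,{cur}}$ before evaluating the tangential margins.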
§ III. TACTILE SENSING AND THE REWARD FUNCTION
§ A. TRAIN AND TEST DATASET
Each training sample consists of a tuple $\left( {O, E}\right)$ , where $O$ is the object and $E$ is the wrist pose error sampled uniformly before every episode. There are three object types (cuboid, cylinder, and sphere) with a mass $\in \left\lbrack {{0.1},{0.4}}\right\rbrack \mathrm{{kg}}$ and randomly sampled sizes. Fig. 3 visualizes the minimum and maximum object dimensions. The wrist pose error $E$ consists of a translational and a rotational error. We uniformly sample each component of the translational error $\left( {{e}_{x},{e}_{y},{e}_{z}}\right)$ from $\left\lbrack {-5,5}\right\rbrack \mathrm{{cm}}$ and each component of the rotational error $\left( {{e}_{\xi },{e}_{\eta },{e}_{\zeta }}\right)$ from $\left\lbrack {-{10},{10}}\right\rbrack$ deg.
Fig. 3: Minimum and maximum object sizes. We place the spheres on a concave mount to prevent rolling.
We define 8 different wrist error cases for the test dataset. Let $d\left( {a,b,c}\right) = \sqrt{{a}^{2} + {b}^{2} + {c}^{2}}$ be the L2 norm of the variables $\left( {a,b,c}\right)$ . Table I shows the wrist error cases, where case A corresponds to no error and case H to the maximum wrist error. The test dataset consists of 30 random objects $O$ (10 cuboids, 10 cylinders, and 10 spheres). Per object $O$ , we randomly generate the eight wrist error cases $\{ A,B,\ldots ,H\}$ from Table I. Hence, we run ${30} \times 8 = {240}$ grasping experiments to test one model.
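The test cases can be reproduced by sampling a random direction and scaling it to the L2 norms of Table I. This is a sketch under our own naming; note that the training-time sampling above is uniform per component and therefore differs from this norm-constrained construction:

```python
import numpy as np

def unit_vector(rng):
    """Uniformly random direction in 3D."""
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def wrist_error_case(case, rng):
    """Translational (cm) and rotational (deg) errors with the norms of
    Table I: case 'A' gives (0 cm, 0 deg), case 'H' gives (7 cm, 14 deg)."""
    i = "ABCDEFGH".index(case)
    translation = unit_vector(rng) * i       # d(e_x, e_y, e_z) = i cm
    rotation = unit_vector(rng) * 2 * i      # d(e_xi, e_eta, e_zeta) = 2i deg
    return translation, rotation

# 30 test objects x 8 wrist error cases = 240 grasping experiments per model.
cases = [(obj, c) for obj in range(30) for c in "ABCDEFGH"]
t_H, r_H = wrist_error_case("H", np.random.default_rng(1))
```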
TABLE I: Wrist error cases

Wrist Error Case                                               A   B   C   D   E   F   G   H
$d\left( {{e}_{x},{e}_{y},{e}_{z}}\right)$ in cm               0   1   2   3   4   5   6   7
$d\left( {{e}_{\xi },{e}_{\eta },{e}_{\zeta }}\right)$ in deg  0   2   4   6   8   10  12  14
Fig. 4: Overview of one algorithm episode. (A) Initialization of hand and object. (B) We split the grasp refinement algorithm into four stages and compare four reward frameworks: (1) $\epsilon$ and $\delta$ ,(2) only $\delta$ ,(3) only $\epsilon$ and (4) the non-tactile binary reward baseline $\beta$ . The weighting factors of ${\alpha }_{1} = 5$ and ${\alpha }_{2} = {0.5}$ were empirically determined.
§ B. STATE AND ACTION SPACE
The state vector $\mathbf{s}$ consists of 7 joint positions (1 finger separation, 3 proximal bending, and 3 distal bending degrees of freedom) and 7 contact cues (3 on the proximal links, 3 on the distal links, and 1 on the palm) that each include a contact position, contact normal, and contact force with $3\left( {x,y,z}\right)$ components. The dimension of the state vector is therefore $\mathbf{s} \in {\mathbb{R}}^{7 + 7 \times \left( {3 \times 3}\right) = {70}}$ . Note that we do not assume any information about the object (e.g., object pose, geometry, or mass) in the state vector. The contact normals and positions are provided in the wrist frame, while the contact forces are represented in the contact frame. The action vector $\mathbf{a}$ consists of 3 finger position increments, 3 wrist position increments, and 3 wrist rotation increments, so $\mathbf{a} \in {\mathbb{R}}^{3 + 3 + 3 = 9}$ . The policy ${\pi }_{\mathbf{\theta }}$ is parametrized by a neural network with weights $\mathbf{\theta }$ : a multilayer perceptron with four layers $\left( {70, 256, 256, 9}\right)$ . We use the stable-baselines3 [12] implementation of the soft actor-critic (SAC) [13] algorithm and train for 25000 steps.
§ C. EXPERIMENTAL SETUP
We simulate the three-fingered ReFlex TakkTile hand (RightHand Robotics, Somerville, MA USA) using a custom Gazebo [14] simulation environment and the DART [15] physics engine. We model the under-actuated distal flexure [16] as a rigid link with two revolute joints (one between the proximal and one between the distal finger link). Further, we approximate the finger geometries as cuboids to reduce computational load. Our source code is available at github.com/axkoenig/grasp_refinement.
Fig. 5: Test results for reward frameworks.
Fig. 4 shows an overview of one training episode. In stage (A), we initialize the world by randomly generating a new object and wrist error tuple $\left( {O, E}\right)$ (or selecting one from the test dataset). We assume a computer vision system and a grasp planner that produces a sideways-facing grasp at a fixed $5\mathrm{\;{cm}}$ offset from the object’s center of mass. We add the wrist pose error $E$ to this grasp pose to simulate calibration errors and close the fingers of the robotic hand in the erroneous wrist pose until the fingers make contact with the object. Subsequently, the grasp refinement episode (B) starts. We divide each episode into three stages, as displayed in Fig. 4. First, the policy ${\pi }_{\mathbf{\theta }}$ refines the grasp. Afterward, the agent lifts the object by ${15}\mathrm{\;{cm}}$ via hard-coded increments to the wrist’s $z$ -position and holds the object in place to test the grasp’s stability. The policy ${\pi }_{\mathbf{\theta }}$ can update the wrist and finger positions while lifting and holding. The control frequency of the policy in all stages is $3\mathrm{{Hz}}$ , while the update frequency of the low-level proportional-derivative (PD) controllers in the wrist and the fingers is ${100}\mathrm{\;{Hz}}$ .
As shown in the table of Fig. 4, we use the analytic grasp stability metrics from Section II as reward functions. We compare the following reward configurations: (1) both $\epsilon$ and $\delta$ , (2) only $\epsilon$ , (3) only $\delta$ , and (4) the baseline $\beta$ . Fig. 4 shows that $\delta$ refers to ${\delta }_{\text{ task }}$ in the refine stage to measure expected grasp stability before lifting and to ${\delta }_{\text{ cur }}$ in the lift and hold stages to measure current stability. Further, $\epsilon$ is a weighted combination of ${\epsilon }_{f}$ and ${\epsilon }_{\tau }$ . While the $\epsilon$ and $\delta$ , the $\epsilon$ -only, and the $\delta$ -only frameworks provide stability feedback after every algorithm step, the baseline $\beta$ gives a sparse reward after the holding stage, indicating if the object is still in the hand (1) or not (0).
§ D. RESULTS AND DISCUSSION
For all experiments in this paper, we average over 40 models trained with different seeds for each framework. The error bars in all plots represent $\pm 2$ standard errors. Fig. 5 summarizes the performance on the test dataset. Our main observation is that combining the geometric grasp stability metric $\epsilon$ with the force-based metric $\delta$ yields the highest average success rate of ${83.6}\%$ across all objects (95.4% for cuboids, 93.1% for cylinders, and 62.3% for spheres) over all wrist errors. The $\epsilon$ and $\delta$ framework outperforms the binary reward framework $\beta$ by ${42.9}\%$ . The p-values for our results ${\mu }_{\epsilon ,\delta } > {\mu }_{\delta }$ , ${\mu }_{\epsilon ,\delta } > {\mu }_{\epsilon }$ , and ${\mu }_{\epsilon ,\delta } > {\mu }_{\beta }$ (where ${\mu }_{x}$ is the mean performance of framework $x$ ) are all $\ll {0.001}$ , so these results are statistically significant. We also notice that the combination of $\epsilon$ and $\delta$ is particularly helpful for spheres. The average performance of all frameworks on spheres is greatly reduced, and the algorithms trained with $\beta$ especially struggle to grasp spheres.
This study investigates the tactile sensing needs in the reward of RL grasping controllers by incorporating highly accurate contact information via analytic grasp stability metrics. The results demonstrate that information about contact positions and normals encoded in $\epsilon$ combines well with the force-based information in the $\delta$ reward. This result motivates building physical robotic hands capable of sensing these types of information. The low success rates for the spheres may be because they can roll and are therefore harder to grasp (cuboids and cylinders move comparatively less when touched by fingers or the palm). The $\beta$ framework performs worst after the defined number of training steps, which is unsurprising because shaped rewards are known to be more sample efficient than sparse rewards [17].
§ IV. TACTILE SENSING AND THE STATE VECTOR
§ A. EXPERIMENTAL SETUP
In a second experiment, we investigate the effect of contact sensing resolution in the state vector on grasp refinement. We compare four contact sensing frameworks. The full contact sensing framework receives the same state vector $\mathbf{s} \in {\mathbb{R}}^{70}$ as in Section III-B. In the normal framework, we only provide the algorithm with the contact normal forces and omit the tangential forces $\left( {\mathbf{s} \in {\mathbb{R}}^{56}}\right)$ . In the binary framework, we only give a binary signal indicating whether a link is in contact (1) or not (0) $\left( {\mathbf{s} \in {\mathbb{R}}^{56}}\right)$ . Finally, we solely provide the joint positions in the none framework $\left( {\mathbf{s} \in {\mathbb{R}}^{7}}\right)$ . We adjust the size of the input layer of the neural network from Section III-B to match the size of the state vector of each framework. The reward function in these experiments is $\epsilon$ and $\delta$ from Fig. 4. Hence, all contact sensing frameworks receive contact information indirectly via the reward.
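The four frameworks can be read as projections of the full 70-dimensional state. The exact composition of the 56-dimensional vectors is not spelled out in the text, so the sketch below assumes contact positions and normals are kept and only the force information is reduced, which is consistent with the stated dimensions; all names are ours.

```python
import numpy as np

def reduced_state(joint_pos, positions, normals, forces, mode):
    """Assemble the policy state for one contact sensing framework.
    joint_pos: (7,); positions, normals, forces: (7, 3) per-link data."""
    parts = [joint_pos]
    if mode == "full":      # positions, normals, full 3D forces -> 70 dims
        parts += [positions.ravel(), normals.ravel(), forces.ravel()]
    elif mode == "normal":  # positions, normals, scalar normal force -> 56 dims
        f_n = np.einsum("ij,ij->i", forces, normals)  # assumes unit normals
        parts += [positions.ravel(), normals.ravel(), f_n]
    elif mode == "binary":  # positions, normals, contact yes/no -> 56 dims
        contact = (np.linalg.norm(forces, axis=1) > 0.0).astype(float)
        parts += [positions.ravel(), normals.ravel(), contact]
    # mode == "none": joint positions only -> 7 dims
    return np.concatenate(parts)

dims = {m: reduced_state(np.zeros(7), np.ones((7, 3)), np.ones((7, 3)),
                         np.ones((7, 3)), m).size
        for m in ["full", "normal", "binary", "none"]}
```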
§ B. RESULTS AND DISCUSSION
In Fig. 6, we observe that the frameworks that receive contact feedback (full, normal, binary) outperform the none framework by ${6.3}\% ,{6.6}\%$ , and ${3.7}\%$ , respectively. Providing normal force information yields a performance increase of ${2.9}\%$ compared to the binary framework. However, training with the full contact force vectors only increases the performance by ${2.6}\%$ compared to the binary framework. As expected, performance decreases for larger wrist errors. The results ${\mu }_{\text{ normal }} > {\mu }_{\text{ binary }}$ and ${\mu }_{\text{ normal }} > {\mu }_{\text{ none }}$ are statistically significant (p-values $\ll {0.001}$ ), while the result ${\mu }_{\text{ normal }} > {\mu }_{\text{ full }}$ is not (p-value 0.2232).
This experiment studies how contact sensing resolution in the policy's state vector relates to grasp success when training with fully contact-informed rewards. In doing so, we assess the viability of our hypothesized training and deployment workflow in Fig. 1. The improvements of the normal force framework over the binary and none frameworks are small. The results suggest that an affordable binary contact sensor suite, or even no contact sensing at all, may be suitable if a small decrease in performance is tolerable. This supports our hypothesis that RL grasping algorithms are deployable to hands with reduced contact sensor resolution at little performance decrease when rich tactile feedback is incorporated at train time. The algorithms trained with the full force vector perform approximately on par with those that receive only the normal force information. This could be due to three reasons. (1) The full force framework has the most network parameters and requires even longer training times. (2) The model may fail to represent the concept of the friction cone internally; an alternative representation of the tangential forces could be a solution (e.g., providing a margin to the friction cone instead of a tangential force vector). (3) Simulated contact forces are prone to instability [18], especially when simulating robotic grasping [19].
Fig. 6: Test results for contact sensing frameworks.
§ V. CONCLUSION
This paper investigated the importance of tactile signals in the reward and the policy's state vector to identify the tactile sensing needs in RL-based grasping algorithms. We found that rewards incorporating contact positions, normals, and forces are the most powerful optimization objectives for RL grasp refinement controllers. While this tactile information is essential in the reward function, we uncovered that reducing contact sensor resolution in the policy's state vector decreases algorithm performance only by a small amount. This result has implications for the design of physical grippers and their training and deployment workflows.
In future work, we first aim to build physical robotic hands with sensing capabilities advanced enough to calculate grasp metrics. Second, we want to test the proposed training and deployment workflow by providing only limited contact information in the state vector and evaluating the algorithm on other robotic hands.
papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/R-W8K2RyVp7/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,513 @@
# RRL: Resnet as representation for Reinforcement Learning
Rutav Shah ${}^{1}$ and Vikash Kumar ${}^{2,3}$

Abstract - Generalist robots capable of performing dexterous, contact-rich manipulation tasks will enhance productivity and provide care in un-instrumented settings like homes. Such tasks warrant operating in the real world using only the robot's proprioceptive sensors, such as onboard cameras and joint encoders, which is challenging for policy learning owing to high dimensionality and partial observability. We propose RRL: Resnet as representation for Reinforcement Learning, a straightforward yet effective approach that can learn complex behaviors directly from proprioceptive inputs. RRL fuses features extracted from a pre-trained Resnet into the standard reinforcement learning pipeline and delivers results comparable to learning directly from the state. On a simulated dexterous manipulation benchmark, where state-of-the-art methods fail to make significant progress, RRL delivers contact-rich behaviors. The appeal of RRL lies in its simplicity in bringing together progress from the fields of Representation Learning, Imitation Learning, and Reinforcement Learning. Its effectiveness in learning behaviors directly from visual inputs, with performance and sample efficiency matching learning directly from the state even in complex high-dimensional domains, is far from obvious.
## I. INTRODUCTION

Recently, Reinforcement Learning (RL) has seen tremendous momentum and progress [9, 19, 37, 21] in learning complex behaviors from states [18, 24, 17]. Most success stories, however, are limited to simulations or instrumented laboratory conditions, as the real world does not provide direct access to its internal state. Beyond state spaces, learning from visual observations has also found reasonable success [26, 42]. However, the majority of these methods have been tested on low-dimensional, 2D tasks [31] that lack depth information. Contact-rich manipulation tasks, on the other hand, are high dimensional and necessitate capturing intricate details in order to be completed successfully. To deliver on the promise presented by data-driven techniques, we need efficient methods that can learn complex behaviors unobtrusively, without the need for environment instrumentation.

Learning without environment instrumentation, especially in unstructured settings like homes, can be quite challenging [59, 34, 46]. Challenges include: (a) decision making with incomplete information owing to partial observability, as agents must rely only on proprioceptive onboard sensors (vision, touch, joint position encoders, etc.) to perceive and act; (b) the influx of sensory information, which makes the input space quite high dimensional; (c) information contamination due to sensory noise and task-irrelevant conditions like lighting, shadows, etc.; and (d) most importantly, the scene being flushed with information irrelevant to the task (background, clutter, etc.). An agent learning under these constraints is forced to take a large number of samples simply to untangle these task-irrelevant details before it makes any progress on the true task objective. A common approach to handle these high-dimensionality and multi-modality issues is to learn representations that distil information into low-dimensional features and use them as inputs to the policy. While such ideas have found reasonable success [43, 40], designing such representations in a supervised manner requires a deep understanding of the problem and domain expertise. An alternative is to leverage unsupervised representation learning to autonomously acquire representations based on either a reconstruction [13, 59, 56] or contrastive [51, 52] objective. These methods are quite brittle, as the representations are acquired from narrow task-specific distributions [61] and hence do not generalize well across different tasks (Table II). Additionally, because they acquire task-specific representations, they often need additional samples from the environment, leading to poor sample efficiency, or domain-specific data augmentations for training the representations.

Fig. 1. RRL: Resnet as representation for Reinforcement Learning takes a small step in bridging the gap between representation learning and reinforcement learning. RRL pre-trains an encoder on a wide variety of real-world classes, such as the ImageNet dataset, using a simple supervised classification objective. Since the encoder is exposed to a much wider distribution of images during pretraining, it remains effective on whatever distribution the policy might induce during training of the agent. This allows us to freeze the encoder after pretraining without any additional effort.

The key idea behind our method stems from an intuitive observation about the desiderata of a good representation: (a) it should be low dimensional, for compactness; (b) it should capture salient features encapsulating the diversity and variability present in real-world tasks, for better generalization; (c) it should be robust to irrelevant information like noise, lighting, and viewpoints, so that it is resilient to changes in the surroundings; and (d) it should provide an effective representation over the entire distribution a policy can induce, for effective learning. These requirements are quite harsh, needing extreme domain expertise to design for manually, and an abundance of samples to acquire automatically. Can we acquire such a representation without any additional effort? Our work takes a small step in this direction.
---

${}^{1}$ Department of Computer Science and Engineering, Indian Institute of Technology, Kharagpur, India rutavms@gmail.com

${}^{2}$ Department of Computer Science, University of Washington, Seattle, USA vikash@cs.washington.edu

${}^{3}$ Facebook AI Research, USA

---
The key insight behind our method (Figure 1) is embarrassingly simple: representations do not have to be trained on the exact task distribution; a representation trained on a sufficiently wide distribution of real-world scenarios will remain effective on any distribution that a policy optimizing a task in the real world might induce. While training over such a wide distribution is demanding, this is precisely what the success of large image classification models [8, 10, 54, 12] in computer vision delivers: representations learned over a large family of real-world scenarios.

Our Contributions: We list the major contributions.

1) We present a surprisingly simple method (RRL) at the intersection of representation learning, imitation learning (IL), and reinforcement learning (RL) that uses features from a pre-trained image classification model (Resnet34) as representations in a standard RL pipeline. Our method is quite general and can be incorporated with minimal changes into most state-based RL/IL algorithms.

2) Task-specific representations learned by supervised as well as unsupervised methods are usually brittle and suffer from distribution mismatch. We demonstrate that features learned by image classification models generalize across different tasks (Figure 2), are robust to visual distractors, and, when used in conjunction with standard IL and RL pipelines, can efficiently acquire policies directly from proprioceptive inputs.

3) While competing methods have restricted their results primarily to planar tasks devoid of depth perspective, on a rich collection of simulated high-dimensional dexterous manipulation tasks, where state-of-the-art methods struggle, we demonstrate that RRL can learn rich behaviors directly from visual inputs with performance and sample efficiency approaching state-based methods.

4) Additionally, we underline the performance gap between SOTA approaches and RRL on simple low-dimensional tasks as well as on more realistic high-dimensional tasks. Furthermore, we experimentally establish that the environments commonly used for studying image-based continuous control methods are not true representatives of real-world scenarios.
## II. Related Work

RRL rests on recent developments in the fields of Representation Learning, Imitation Learning, and Reinforcement Learning. In this section, we outline related works leveraging representation learning for visual reinforcement and imitation learning.

## A. Learning without explicit representation

A common approach is to learn behaviors in an end-to-end fashion, from pixels to actions, without an explicit distinction between feature representations and policy representations. Success stories in this category range from the seminal work of [5], mastering Atari 2600 computer games using only raw pixels as input, to [14], which learns trajectory-centric local policies using Guided Policy Search [4] for diverse continuous-control manipulation tasks in the real world, directly from camera inputs. More recently, [35] demonstrated success in acquiring multi-finger dexterous manipulation [33] and agile locomotion behaviors using off-policy actor-critic methods [24]. While learning directly from pixels has found reasonable success, it requires training large networks with high input dimensionality. Agents require a prohibitively large number of samples to untangle task-relevant information in order to acquire behaviors, limiting their application to simulations or constrained lab settings. RRL maintains an explicit representation network to extract low-dimensional features. Decoupling representation learning from policy learning delivers results with large gains in efficiency. Next, we outline related works that use explicit representations.

Fig. 2. Visualization of Layer 4 of the Resnet model for the top-1 class using Grad-CAM [45] [Top] and Guided Backpropagation [11] [Bottom]. This indicates that Resnet is indeed looking for the right features in our task images (right) in spite of the high distributional shift.
## B. Learning with supervised representations

Another approach is to first acquire representations using expert supervision, and then use features extracted from those representations as inputs to standard policy learning pipelines. A predominant idea is to learn representative keypoints encapsulating task details from the input images and use the extracted keypoints as a replacement for the state information [38]. Using these techniques, [43, 39] demonstrated tool manipulation behaviors in rich scenes flushed with task-irrelevant details. [41] demonstrated simultaneous manipulation of multiple objects in the Baoding ball task on a high-dimensional dexterous manipulation hand. Along with the inbuilt proprioceptive sensing at each joint, they use an RGB stereo image pair that is fed into a separate pre-trained tracker to produce 3D position estimates [57] for the two Baoding balls. These methods, while powerful, learn task-specific features and require expert supervision, making it harder to (a) translate to variations in tasks/environments, and (b) scale with increasing task diversity. RRL, on the other hand, uses a single task-agnostic representation with better generalization capability, making it easy to scale.

## C. Learning with unsupervised representations

With the ambition of being scalable, this group of methods intends to acquire representations via unsupervised techniques. [30] uses contrastive learning to time-align visual features across different embodiments to demonstrate behavior transfer from a human to a Fetch robot. [20] and [62, 59] use variational inference [7, 20] to learn compressed latent representations and use them as input to a standard RL pipeline to demonstrate rich manipulation behaviors. [47] additionally learns dynamics models directly in the latent space and uses model-based RL to acquire behaviors on simulated tasks. On similar tasks, [36] uses multi-step variational inference to learn world-dynamics as well as reward models for off-policy RL. [51] uses image augmentation with variational inference to construct features for the standard RL pipeline and demonstrates performance on par with learning directly from the state. [49, 48] demonstrate comparable results by assimilating updates over features acquired via image augmentation alone. As with supervised methods, unsupervised methods often learn task-specific, brittle representations: they break when subjected to small variations in the surroundings and often face non-stationarity arising from the mismatch between the distribution the representations were learned on and the distribution the policy induces. To induce stability, RRL uses pre-trained stationary representations trained on a distribution with wider support than what the policy can induce. Additionally, representations learned over a wide distribution of real-world samples are robust to noise and irrelevant information like lighting, illumination, etc.

## D. Learning with representations and demonstrations

Learning from demonstrations has a rich history. We focus our discussion on DAPG [17], a state-based method which optimizes the natural gradient [2] of a joint loss with both imitation and reinforcement objectives. DAPG has been demonstrated to outperform competing methods [15, 16] on the high-dimensional ADROIT dexterous manipulation task suite we test on. RRL extends DAPG to solve the task suite directly from proprioceptive signals, with performance and sample efficiency comparable to state-based DAPG. Unlike DAPG, which is on-policy, FERM [58] is a closely related off-policy actor-critic method combining learning from demonstrations with RL. FERM builds on RAD [49] and inherits its challenges, such as learning task-specific representations. We demonstrate via experiments that RRL is more stable, more robust to various distractors, and convincingly outperforms FERM, since RRL uses a fixed feature extractor pre-trained over a wide variety of real-world images and avoids learning task-specific representations.
## III. BACKGROUND

RRL solves a standard Markov decision process (Section III-A) by combining three fundamental building blocks: (a) a policy gradient algorithm (Section III-B), (b) demonstration bootstrapping (Section III-C), and (c) representation learning (Section III-D). We briefly outline these fundamentals before detailing our method in Section IV.

## A. Preliminaries: MDP

We model the control problem as a Markov decision process (MDP), defined by the tuple $\mathcal{M} = \left( \mathcal{S},\mathcal{A},\mathcal{R},\mathcal{T},{\rho }_{0},\gamma \right)$. $\mathcal{S} \subseteq {\mathbb{R}}^{n}$ and $\mathcal{A} \subseteq {\mathbb{R}}^{m}$ represent the states and actions. $\mathcal{R} : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ is the reward function; in the ideal case, this function is simply an indicator of task completion (the sparse reward setting). $\mathcal{T} : \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{S}$ is the transition dynamics, which can be stochastic. In model-free RL, we do not assume any knowledge of the transition function and require only sampling access to it. ${\rho }_{0}$ is the probability distribution over initial states and $\gamma \in \lbrack 0,1)$ is the discount factor. We wish to solve for a stochastic policy of the form $\pi : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ which optimizes the expected sum of discounted rewards:
$$
\eta \left( \pi \right) = {\mathbb{E}}_{\pi ,\mathcal{M}}\left\lbrack \mathop{\sum }\limits_{t = 0}^{\infty }{\gamma }^{t}{r}_{t}\right\rbrack \tag{1}
$$
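The objective in Eq. 1 can be estimated from sampled reward sequences. A minimal sketch (the function name is illustrative, not from the paper):

```python
def discounted_return(rewards, gamma=0.99):
    """Sum_t gamma^t * r_t for one sampled trajectory (Eq. 1)."""
    total = 0.0
    # Iterate backwards so each step folds in the discounted tail:
    # G_t = r_t + gamma * G_{t+1}.
    for r in reversed(rewards):
        total = r + gamma * total
    return total
```

In practice $\eta(\pi)$ is approximated by averaging this quantity over many rollouts of the current policy.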
## B. Policy Gradient

The goal of the RL agent is to maximize the expected discounted return $\eta \left( \pi \right)$ (Equation 1) under the distribution induced by the current policy $\pi$. Policy gradient algorithms optimize the policy ${\pi }_{\theta }\left( a \mid s\right)$ directly, where $\theta$ are the function parameters, by estimating $\nabla \eta \left( \pi \right)$. First we introduce a few standard notions: the value function ${V}^{\pi }\left( s\right)$, the Q function ${Q}^{\pi }\left( s, a\right)$, and the advantage function ${A}^{\pi }\left( s, a\right)$. The advantage function can be viewed as a lower-variance version of the Q-value, obtained by subtracting the state value as a baseline.
$$
{V}^{\pi }\left( s\right) = {\mathbb{E}}_{\pi ,\mathcal{M}}\left\lbrack \mathop{\sum }\limits_{t = 0}^{\infty }{\gamma }^{t}{r}_{t} \mid {s}_{0} = s\right\rbrack
$$

$$
{Q}^{\pi }\left( s, a\right) = {\mathbb{E}}_{\mathcal{M}}\left\lbrack \mathcal{R}\left( s, a\right) \right\rbrack + {\mathbb{E}}_{{s}^{\prime } \sim \mathcal{T}\left( s, a\right) }\left\lbrack {V}^{\pi }\left( {s}^{\prime }\right) \right\rbrack \tag{2}
$$

$$
{A}^{\pi }\left( s, a\right) = {Q}^{\pi }\left( s, a\right) - {V}^{\pi }\left( s\right)
$$
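These three quantities can be computed exactly on a small tabular MDP, which makes their relationships easy to verify. A sketch on a toy two-state, two-action MDP (all numbers are illustrative, not from the paper):

```python
import numpy as np

# Toy MDP: P[a, s, s'] are transition probabilities, R[s, a] expected rewards.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9
pi = np.array([[0.5, 0.5], [0.5, 0.5]])  # uniform policy pi(a|s)

# Policy-averaged reward and transition matrix.
r_pi = (pi * R).sum(axis=1)                 # shape (2,)
P_pi = np.einsum('sa,ast->st', pi, P)       # shape (2, 2)

# V solves the linear Bellman equation V = r_pi + gamma * P_pi @ V.
V = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)
# Q from Eq. 2, and the advantage as Q minus the state-value baseline.
Q = R + gamma * np.einsum('ast,t->sa', P, V)
A = Q - V[:, None]
```

Two sanity checks follow from the definitions: averaging $Q$ under $\pi$ recovers $V$, and the policy-averaged advantage is zero in every state.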
The gradient can be estimated using the likelihood-ratio approach and the Markov property of the problem [1], together with a sampling-based strategy:

$$
\nabla \eta \left( \pi \right) = g = \frac{1}{NT}\mathop{\sum }\limits_{i = 0}^{N}\mathop{\sum }\limits_{t = 0}^{T}{\nabla }_{\theta }\log {\pi }_{\theta }\left( {a}_{t}^{i} \mid {s}_{t}^{i}\right) {\widehat{A}}^{\pi }\left( {s}_{t}^{i},{a}_{t}^{i}, t\right) \tag{3}
$$
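Eq. 3 can be made concrete for the simplest possible policy class. The sketch below (names are illustrative) uses a state-independent categorical policy $\pi_\theta(a) = \mathrm{softmax}(\theta)_a$, for which $\nabla_\theta \log \pi_\theta(a)$ has the closed form onehot$(a) - \mathrm{softmax}(\theta)$:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def policy_gradient(theta, trajectories, advantages):
    """Sample-based estimate of Eq. 3 for a state-independent
    categorical policy. `trajectories` is a list of action sequences,
    `advantages` the matching list of advantage estimates."""
    g = np.zeros_like(theta)
    N = len(trajectories)
    T = max(len(tr) for tr in trajectories)
    p = softmax(theta)
    for actions, advs in zip(trajectories, advantages):
        for a, A_hat in zip(actions, advs):
            grad_log_pi = -p.copy()
            grad_log_pi[a] += 1.0   # d/dtheta log softmax(theta)[a]
            g += grad_log_pi * A_hat
    return g / (N * T)
```

A real implementation would condition the policy on the state (e.g. an MLP) and obtain $\nabla_\theta \log \pi_\theta$ by automatic differentiation; the averaging structure is the same.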
Among the wide collection of policy gradient algorithms, we build upon Natural Policy Gradient (NPG) [2] to solve our MDP formulation, owing to its stability and effectiveness in solving complex problems. We refer to [32] for a detailed background on different policy gradient approaches. In the next section, we describe how human demonstrations can be used effectively along with NPG to aid policy optimization.

## C. Demo Augmented Policy Gradient

Policy gradients with appropriately shaped rewards can solve arbitrarily complex tasks. However, real-world environments seldom provide shaped rewards; they must be manually specified by domain experts. Learning with sparse signals, such as task completion indicator functions, relaxes the domain expertise needed for reward shaping but results in extremely high sample complexity due to exploration challenges. DAPG [17] combines policy gradients with a few demonstrations in two ways to mitigate this issue and learn effectively from them. We represent the demonstration dataset as ${\rho }_{D} = \left\{ \left( {s}_{t}^{\left( i\right) },{a}_{t}^{\left( i\right) },{s}_{t + 1}^{\left( i\right) },{r}_{t}^{\left( i\right) }\right) \right\}$, where $t$ indexes time and $i$ indexes different trajectories.

(1) Warm up the policy using a few demonstrations (25 in our setting) with a simple Mean Squared Error (MSE) loss, i.e., initialize the policy using behavior cloning [Eq. 4]. This provides an informed policy initialization that helps resolve the early exploration issue, as the policy now pays attention to task-relevant state-action pairs, thereby reducing the sample complexity.
$$
{L}_{BC}\left( \theta \right) = \frac{1}{2}\mathop{\sum }\limits_{i, t \in \text{ minibatch }}{\left( {\pi }_{\theta }\left( {s}_{t}^{\left( i\right) }\right) - {a}_{t}^{\left( i\right), H}\right) }^{2} \tag{4}
$$

where $\theta$ are the agent parameters and ${a}_{t}^{\left( i\right), H}$ represents the action taken by the human expert.
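The behavior-cloning warm-up of Eq. 4 is ordinary regression from expert states to expert actions. A minimal sketch with a linear policy trained by gradient descent (the paper uses a neural-network policy; this simplification is ours):

```python
import numpy as np

def behavior_cloning(states, actions, lr=0.1, epochs=500):
    """Fit a linear policy pi(s) = W @ s to expert (s, a) pairs by
    gradient descent on the MSE loss of Eq. 4."""
    n, m = states.shape[1], actions.shape[1]
    W = np.zeros((m, n))
    for _ in range(epochs):
        pred = states @ W.T                  # predicted actions (batch, m)
        err = pred - actions                 # residual against expert actions
        grad = err.T @ states / len(states)  # gradient of 0.5 * mean ||err||^2
        W -= lr * grad
    return W
```

With a neural policy the loop is identical in structure, with the analytic gradient replaced by backpropagation over minibatches.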
(2) DAPG builds upon the on-policy NPG algorithm [2], which uses a normalized gradient ascent procedure where the normalization is under the Fisher metric:

$$
{\theta }_{k + 1} = {\theta }_{k} + \sqrt{\frac{\delta }{{g}^{T}{\widehat{F}}_{{\theta }_{k}}^{-1}g}}{\widehat{F}}_{{\theta }_{k}}^{-1}g \tag{5}
$$
where ${\widehat{F}}_{{\theta }_{k}}$ is the Fisher information matrix at the current iterate ${\theta }_{k}$,

$$
{\widehat{F}}_{{\theta }_{k}} = \frac{1}{T}\mathop{\sum }\limits_{t = 0}^{T}{\nabla }_{\theta }\log {\pi }_{\theta }\left( {a}_{t} \mid {s}_{t}\right) {\nabla }_{\theta }\log {\pi }_{\theta }{\left( {a}_{t} \mid {s}_{t}\right) }^{T} \tag{6}
$$

and $g$ is the sample-based estimate of the policy gradient [Eq. 3]. To make the best use of the available demonstrations, DAPG proposes a joint loss ${g}_{\text{aug }}$ combining the task and imitation objectives. The imitation objective decays asymptotically over time, allowing the agent to learn behaviors surpassing the expert:
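Eqs. 5 and 6 together define one NPG update. A sketch for small parameter vectors (illustrative; practical implementations avoid the explicit inverse and use conjugate gradient, and the damping term is our addition for numerical safety):

```python
import numpy as np

def npg_step(theta, grad_log_pis, g, delta=0.05, damping=1e-8):
    """One natural-gradient ascent step (Eq. 5).
    `grad_log_pis` stacks per-sample score vectors grad_theta log pi(a_t|s_t);
    their averaged outer product is the Fisher estimate of Eq. 6."""
    T = len(grad_log_pis)
    F = grad_log_pis.T @ grad_log_pis / T + damping * np.eye(len(theta))
    nat_g = np.linalg.solve(F, g)                 # F^{-1} g
    step = np.sqrt(delta / (g @ nat_g)) * nat_g   # normalized step size
    return theta + step
```

The step-size normalization guarantees that the quadratic form $\Delta\theta^{T} \widehat{F} \, \Delta\theta$ equals $\delta$, i.e., every update moves the policy a fixed distance under the Fisher metric regardless of the raw gradient scale.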
$$
{g}_{\text{aug }} = \mathop{\sum }\limits_{\left( s, a\right) \in {\rho }_{\pi }}{\nabla }_{\theta }\ln {\pi }_{\theta }\left( a \mid s\right) {A}^{\pi }\left( s, a\right) + \mathop{\sum }\limits_{\left( s, a\right) \in {\rho }_{D}}{\nabla }_{\theta }\ln {\pi }_{\theta }\left( a \mid s\right) w\left( s, a\right) \tag{7}
$$

where ${\rho }_{\pi }$ is the dataset obtained by executing the current policy, ${\rho }_{D}$ is the demonstration data, and $w\left( s, a\right)$ is a heuristic weighting function defined as:
$$
w\left( s, a\right) = {\lambda }_{0}{\lambda }_{1}^{k}\mathop{\max }\limits_{\left( {s}^{\prime },{a}^{\prime }\right) \in {\rho }_{\pi }}{A}^{\pi }\left( {s}^{\prime },{a}^{\prime }\right) \;\forall \;\left( s, a\right) \in {\rho }_{D} \tag{8}
$$

DAPG has proven successful in learning policies for dexterous manipulation tasks with reasonable sample complexity.
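The weighting of Eq. 8 is simple to compute: every demonstration pair receives the same weight, proportional to the largest advantage seen under the current policy and decayed geometrically in the iteration count $k$. A sketch (hyperparameter values are illustrative):

```python
import numpy as np

def demo_weight(adv_pi, k, lam0=0.1, lam1=0.95):
    """Heuristic demo weight of Eq. 8: lam0 * lam1^k * max advantage
    under the current policy. The geometric decay in k makes the
    imitation term in Eq. 7 vanish asymptotically, so the agent can
    eventually surpass the demonstrator."""
    return lam0 * (lam1 ** k) * np.max(adv_pi)
```

The returned scalar multiplies every demonstration score term in the second sum of Eq. 7.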
## D. Representation Learning

DAPG has thus far only been demonstrated to be effective with access to low-level state information, which is not readily available in the real world. DAPG is based on NPG, which works well but faces issues with input dimensionality and hence cannot be used directly with input images acquired from onboard cameras. Representation learning [6] learns representations of the input data, typically by transforming it or extracting features from it, which makes it easier to perform the task (in our case, the representation can be used in place of the exact state of the environment). Let $I \in {\mathbb{R}}^{n}$ represent the high-dimensional input image; then

$$
h = {f}_{\rho }\left( I\right) \tag{9}
$$

where $f$ represents the feature extractor, $\rho$ is the distribution over which $f$ is valid, and $h \in {\mathbb{R}}^{d}$ with $d \ll n$ is the compact, low-dimensional representation of $I$. In the next section, we outline our method, which scales DAPG to learn directly from visual information.
## IV. RRL: RESNET AS REPRESENTATION FOR RL

In an ideal RL setting, the agent interacts with the environment based on the current state, and in return, the environment outputs the next state and the reward obtained. This works well in a simulated environment, but in a real-world scenario we do not have access to this low-level state information. Instead, we get information from cameras $\left( {I}_{t}\right)$ and other onboard sensors like joint encoders $\left( {\delta }_{t}\right)$. To overcome the challenges associated with learning from high-dimensional inputs, we use representations that project the information onto a lower-dimensional manifold. These representations can be (a) learned in tandem with the RL objective; however, this leads to a non-stationarity issue, where the distribution induced by the current policy ${\pi }_{i}$ may lie outside the expressive power of $f$, i.e., ${\pi }_{i} ⊄ {\rho }_{i}$, at any step $i$ during training; or (b) decoupled from RL by pre-training $f$. For the latter to work effectively, the feature extractor must be trained on a sufficiently wide distribution that covers any distribution the policy might induce during training, ${\pi }_{i} \subset \rho \;\forall i$. Getting hold of such task-specific training data beforehand becomes increasingly difficult as the complexity and diversity of the task increase. To this end, we propose to use a fixed feature extractor (Section V-B) that is pre-trained on a wide variety of real-world scenarios, such as the ImageNet dataset [highlighted in purple in Figure 1]. We experimentally demonstrate that the diversity (Section V-C) of such a feature extractor allows us to use it across all the tasks we considered. The use of pre-trained representations brings stability to RRL: since our representations are frozen, they do not face the non-stationarity issues encountered when learning the policy and representation in tandem.

The features $\left( {h}_{t}\right)$ obtained from the above feature extractor are appended with the information obtained from the internal joint encoders of the Adroit hand $\left( {\delta }_{t}\right)$. We empirically show that $\left\lbrack {h}_{t},{\delta }_{t}\right\rbrack$ can be used as a substitute for the exact state $\left( {s}_{t}\right)$ as input to the policy. In principle, any RL algorithm can be deployed to learn the policy; in RRL we build upon Natural Policy Gradients [3], owing to its effectiveness in solving complex high-dimensional tasks [17]. We present our full algorithm in Algorithm 1.
Algorithm 1 RRL

---

Input: 25 human demonstrations ${\rho }_{D}$

Initialize the policy using Behavior Cloning [Eq. 4].

repeat

&nbsp;&nbsp;for $i = 1$ to $n$ do

&nbsp;&nbsp;&nbsp;&nbsp;for $t = 1$ to horizon do

&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Take action ${a}_{t} = {\pi }_{\theta }\left( \left\lbrack \operatorname{Encoder}\left( {I}_{t}\right) ,{\delta }_{t}\right\rbrack \right)$ and receive ${I}_{t + 1},{\delta }_{t + 1},{r}_{t + 1}$ from the environment.

&nbsp;&nbsp;&nbsp;&nbsp;end for

&nbsp;&nbsp;end for

&nbsp;&nbsp;Compute ${\nabla }_{\theta }\log {\pi }_{\theta }\left( {a}_{t} \mid {s}_{t}\right)$ for each $\left( s, a\right) \in {\rho }_{\pi },{\rho }_{D}$

&nbsp;&nbsp;Compute ${A}^{\pi }\left( s, a\right)$ for each $\left( s, a\right) \in {\rho }_{\pi }$ and $w\left( s, a\right)$ for each $\left( s, a\right) \in {\rho }_{D}$ according to Equations 2, 8

&nbsp;&nbsp;Compute the policy gradient according to Eq. 7

&nbsp;&nbsp;Compute the Fisher matrix [Eq. 6]

&nbsp;&nbsp;Take the gradient ascent step according to Eq. 5

&nbsp;&nbsp;Update the parameters of the value function in order to approximate Eq. 2: ${V}_{k}^{\pi }\left( {s}_{t}^{\left( n\right) }\right) \approx \mathop{\sum }\limits_{{t}^{\prime } = t}^{T}{\gamma }^{{t}^{\prime } - t}{r}_{{t}^{\prime }}^{\left( n\right) }$

until satisfactory performance

---
## V. EXPERIMENTAL EVALUATIONS

Our experimental evaluations aim to address the following questions: (1) Do pre-trained representations acquired from a large real-world image dataset allow RRL to learn complex tasks directly from proprioceptive signals (camera inputs and joint encoders)? (2) How do RRL's performance and efficiency compare against other state-of-the-art methods? (3) How do various representational choices influence the generality and versatility of the resulting behaviors? (4) What are the effects of various design decisions on RRL? (5) Are the commonly used benchmarks for studying image-based continuous control methods effective?

## A. Tasks

The applicability of prior proprioception-based RL methods [49, 48, 47] has been limited to simple low-dimensional tasks like Cartpole, Cheetah, Reacher, Finger spin, Walker, Ball in cup, etc. Moving beyond these simple domains, we investigate RRL on the Adroit manipulation suite [17], which consists of contact-rich, high-dimensional dexterous manipulation tasks (Figure 3) that have been found challenging even for state $\left( {s}_{t}\right)$ based methods. Furthermore, unlike prior task sets, which are fundamentally planar and devoid of depth perspective, the Adroit manipulation suite consists of visually rich, physically realistic tasks that demand representations untangling complex depth information.
## B. Implementation Details

We use a standard Resnet-34 model as RRL's feature extractor. The model is pre-trained on the classification task of the ImageNet dataset, which consists of 1000 classes and 1.28 million training images. The last layer of the model is removed to recover a 512-dimensional feature space, and all the parameters are frozen throughout the training of the RL agent. During inference, the observations obtained from the environment are of size ${256} \times {256}$; a center crop of size ${224} \times {224}$ is fed into the model. We also evaluate our model using different Resnet sizes (Figure 7). All the hyperparameters used for training are summarized in the Appendix (Table II). We report average performance over three random seeds for all experiments.

Fig. 3. The ADROIT manipulation suite, consisting of complex dexterous manipulation tasks involving object relocation, in-hand manipulation (pen repositioning), tool use (hammering a nail), and interacting with human-centric environments (opening a door).
## C. Results
In Figure 4, we contrast the performance of RRL against state-of-the-art baselines. We begin by observing that NPG [3] struggles to solve the suite even with full state information, which establishes the difficulty of our task suite. DAPG(State) [17] uses privileged state information and a few demonstrations from the environment to solve the tasks and poses as the best-case oracle. RRL demonstrates good performance on all the tasks, relocate being the hardest, and often approaches performance comparable to our strongest oracle, DAPG(State).
A competing baseline, FERM [58], is quite unstable on these tasks. It starts strong on the hammer and door tasks but saturates in performance. It makes slow progress on pen and completely fails on relocate. In Figure 5 [Left] we compare the computational footprint of FERM (along with other methods, discussed in later sections) with RRL. We note that our method not only outperforms FERM but is also approximately five times more compute-efficient.
---
${}^{1}$ Reporting the best performance amongst the over 30 configurations per task that we tried in consultation with the FERM authors.
---

Fig. 4. Performance on the ADROIT dexterous manipulation suite [17]: The state-of-the-art policy gradient method NPG(State) [29] struggles to solve the suite even with privileged low-level state information, establishing the difficulty of the suite. Amongst demonstration-accelerated methods, RRL(Ours) demonstrates stable performance and approaches the performance of DAPG(State) [17] (upper bound), a demonstration-accelerated method using privileged state information. A competing baseline, FERM [58], makes good initial, but unstable, progress on a few tasks and often saturates in performance before exhausting our computational budget (40 hours/task/seed).

Fig. 5. LEFT: Comparison of the computational cost of RRL with Resnet34, i.e. RRL(Ours), against FERM (the strongest baseline), RRL with Resnet18, RRL with Resnet50, RRL(VAE), RRL with ShuffleNet, RRL with MobileNet, and RRL with a Very Deep VAE baseline. CENTER, RIGHT: Influence of various environment distractions (lighting condition, object color) on RRL(Ours) and FERM. RRL(Ours) consistently performs better than FERM across all the variations we considered.
## D. Effects of Visual Distractors
In Figure 5 [Center, Right] we probe the robustness of the final policies by injecting visual distractors into the environment during inference. We note that the resilience of the Resnet features induces robustness in RRL's policies. On the other hand, the task-specific features learned by FERM are brittle, leading to larger degradation in performance. In addition to the improved sample and time complexity resulting from the use of pre-trained features, the resilience, robustness, and versatility of Resnet features lead to policies that are also robust to visual distractors and clutter in the scene. More details about the experimental setting are provided in Section VII-H in the Appendix.
## E. Effect of Representation
Is Resnet lucky? To investigate whether the architectural choice of Resnet is merely fortunate, in Figure 6 we test different models pretrained on the ImageNet dataset as RRL's feature extractors - MobileNetV2 [44], ShuffleNet [27], and a state-of-the-art hierarchical VAE [60] [refer to Section VII-E in the Appendix for more details]. Not much degradation in performance is observed with respect to the Resnet model. This highlights that it is not the architectural choices in particular, but rather the dataset on which the models are pre-trained, that delivers generic features effective for RL agents.
Task-specific vs task-agnostic representation: In Figure 7, we compare the performance of (a) learning task-specific representations (VAE) and (b) a generic representation trained on a very wide distribution (Resnet). We note that RRL using Resnet34 significantly outperforms a variant, RRL(VAE) (see Section VII-G in the Appendix for details), that learns features via commonly used variational inference techniques on a task-specific dataset [22, 23, 25, 28]. This indicates that pre-trained Resnet provides task-agnostic and superior features compared to methods that explicitly learn brittle (Section V-H) and task-specific features using additional samples from the environment. It is important to note that the latent dimensions of Resnet34 and the VAE are kept the same (512) for a fair comparison; however, the model sizes are different, as one operates on a very wide distribution while the other operates on a much narrower task-specific dataset. Additionally, we summarize the compute cost of both methods, RRL(Ours) and RRL(VAE), in Figure 5 [Left]. We notice that even though RRL(VAE) is the cheapest, its performance is quite low (Figure 7). RRL(Ours) strikes a balance between compute cost and performance.

Fig. 6. Effect of different feature extractors pretrained on the ImageNet dataset, highlighting that not just Resnet but any feature extractor pretrained on a sufficiently wide distribution of data remains effective.

Fig. 7. Influence of representation: RRL(Ours), using Resnet34 features, outperforms RRL(VAE), which uses the commonly used representation learning method VAE. Amongst the different Resnet variations, Resnet34 strikes the balance between representation capacity and computational overhead. NPG(Resnet34) showcases the performance with Resnet34 features but without demonstration bootstrapping, indicating that representational choices alone are not enough to solve the task suite.
## F. Effects of Proprioception Choices and Sensor Noise

Fig. 8. Influence of proprioceptive signals on RRL(Vision+sensors-Ours): RRL(Noise) demonstrates that RRL remains effective in the presence of noisy (2%) proprioception. RRL(Vision) demonstrates that RRL remains performant with (only) visual inputs as well.
While it is hard to envision a robot without proprioceptive joint sensing, harsh real-world conditions can lead to noisy sensing, even sensor failures. In Figure 8, we subjected RRL to (a) signals with $2\%$ noise added to the information received from the joint encoders (RRL(Noise)), and (b) only visual inputs used as proprioceptive signals (RRL(Vision)). In both cases, our method remained performant with slight to no degradation in performance.
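The text specifies 2% noise but not the exact noise model; a plausible sketch is the following helper (the function name and the Gaussian, magnitude-proportional model are assumptions):

```python
import numpy as np

def noisy_joint_readings(q, noise_frac=0.02, rng=None):
    """Corrupt joint-encoder readings with zero-mean Gaussian noise whose
    scale is noise_frac of each joint's magnitude. Hypothetical helper:
    the paper states 2% noise but does not specify the noise model."""
    rng = rng if rng is not None else np.random.default_rng(0)
    return q + noise_frac * np.abs(q) * rng.standard_normal(q.shape)

q = np.array([0.10, -0.42, 1.25])  # placeholder joint angles (radians)
q_noisy = noisy_joint_readings(q)  # perturbed readings fed to the policy
```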
## G. Ablations and Analysis of Design Decisions
In our next set of experiments, we evaluate the effect of various design decisions on our method. In Figure 7, we study the effect of different Resnet features as our representation. Resnet34, though computationally more demanding than Resnet18 (Figure 5), delivers better performance owing to its improved representational capacity and feature expressivity. A further boost in capacity (Resnet50) degrades performance, likely due to the incorporation of less useful features and an increase in the samples required to train the resulting larger policy network.

Fig. 9. LEFT: Influence of reward signals: RRL(Ours), using sparse rewards, remains competitive with a variation RRL(Dense) using well-shaped dense rewards. RIGHT: Effect of policy size on the performance of RRL. We observe that it is quite stable across a wide range of policy sizes.
Reward design, especially for complex high-dimensional tasks, requires domain expertise. RRL replaces the need for well-shaped rewards by using a few demonstrations (to curb the exploration challenges in high-dimensional spaces) and sparse rewards (indicating task completion). This significantly lowers the domain expertise required for our method. In Figure 9-LEFT, we observe that RRL (using sparse rewards) delivers performance competitive with a variant of our method that uses well-shaped dense rewards, while being resilient to variation in policy network capacity (Figure 9-RIGHT).
## H. Rethinking Benchmarking for Visual RL
DMControl [31] is a widely used benchmark for proprioception-based RL methods - RAD [49], SAC+AE [56], CURL [51], DrQ [48]. While these methods perform well (Table I) on such simple DMControl tasks, their progress struggles to scale when met with tasks representative of real-world complexity, such as the realistic Adroit manipulation benchmark (Figure 4).
For example, we demonstrate in Figure 4 that a representative SOTA method, FERM (which uses expert demos along with RAD), struggles to perform well on the Adroit manipulation benchmark. On the contrary, RRL, using Resnet features pre-trained on a real-world image dataset, delivers results comparable to state-based methods on the Adroit manipulation benchmark while struggling on DMControl (RRL+SAC: RRL using SAC and Resnet34 features [1]). This highlights the large domain gap between the DMControl suite and the real world.
We further note that the pretrained features learned by SOTA methods aren't as widely applicable. We use a pre-trained RAD encoder (pretrained on Cartpole) as a fixed feature extractor (Fixed RAD Encoder in Table I) and retrain the policy using these features for all environments. The performance degrades for all the tasks except Cartpole. This highlights that the representations learned by RAD (even with various image augmentations) are task-specific and fail to generalize to other task sets with similar visuals. Furthermore, learning such task-specific representations is easier in simpler scenes, but their complexity grows drastically as the complexity of tasks and scenes increases. To ensure that important problems aren't overlooked, we emphasise the need for the community to move towards benchmarks representative of realistic real-world tasks.
<table><tr><td>500K Step Scores</td><td>RRL+SAC</td><td>RAD</td><td>Fixed RAD Encoder</td><td>CURL</td><td>SAC+AE</td><td>State SAC</td></tr><tr><td>Finger, Spin</td><td>422 ± 102</td><td>947 ± 101</td><td>789 ± 190</td><td>926 ± 45</td><td>884 ± 128</td><td>923 ± 211</td></tr><tr><td>Cartpole, Swing</td><td>357 ± 85</td><td>863 ± 9</td><td>875 ± 1</td><td>845 ± 45</td><td>735 ± 63</td><td>848 ± 15</td></tr><tr><td>Reacher, Easy</td><td>382 ± 299</td><td>955 ± 71</td><td>53 ± 44</td><td>929 ± 44</td><td>627 ± 58</td><td>923 ± 24</td></tr><tr><td>Cheetah, Run</td><td>154 ± 23</td><td>728 ± 71</td><td>203 ± 31</td><td>518 ± 28</td><td>550 ± 34</td><td>795 ± 30</td></tr><tr><td>Walker, Walk</td><td>148 ± 12</td><td>918 ± 16</td><td>182 ± 40</td><td>902 ± 43</td><td>847 ± 48</td><td>948 ± 54</td></tr><tr><td>Cup, Catch</td><td>447 ± 132</td><td>974 ± 12</td><td>719 ± 70</td><td>959 ± 27</td><td>794 ± 58</td><td>974 ± 33</td></tr><tr><td>100K Step Scores</td><td/><td/><td/><td/><td/><td/></tr><tr><td>Finger, Spin</td><td>135 ± 67</td><td>856 ± 73</td><td>655 ± 104</td><td>767 ± 56</td><td>740 ± 64</td><td>811 ± 46</td></tr><tr><td>Cartpole, Swing</td><td>192 ± 19</td><td>828 ± 27</td><td>840 ± 34</td><td>582 ± 146</td><td>311 ± 11</td><td>835 ± 22</td></tr><tr><td>Reacher, Easy</td><td>322 ± 285</td><td>826 ± 219</td><td>162 ± 40</td><td>538 ± 233</td><td>274 ± 14</td><td>746 ± 25</td></tr><tr><td>Cheetah, Run</td><td>72 ± 63</td><td>447 ± 88</td><td>188 ± 20</td><td>299 ± 48</td><td>267 ± 24</td><td>616 ± 18</td></tr><tr><td>Walker, Walk</td><td>63 ± 7</td><td>504 ± 191</td><td>106 ± 11</td><td>403 ± 24</td><td>394 ± 22</td><td>891 ± 82</td></tr><tr><td>Cup, Catch</td><td>261 ± 57</td><td>840 ± 179</td><td>533 ± 148</td><td>769 ± 43</td><td>391 ± 82</td><td>746 ± 91</td></tr></table>
TABLE I
Results on the DMControl benchmark. RAD outperforms all the baselines, whereas RRL performs worse on the 100K and 500K environment-step benchmarks, suggesting that it is quicker to learn task-specific representations in simple tasks. Fixed RAD Encoder highlights that the representations learned by RAD are narrow and task-specific.
## VI. STRENGTHS, LIMITATIONS & OPPORTUNITIES
This paper presents an intuitive idea bringing together advancements from the fields of representation learning, imitation learning, and reinforcement learning. We present a very simple method named RRL that leverages Resnet features as representations to learn complex behaviors directly from proprioceptive signals. The resulting algorithm approaches the performance of state-based methods on the complex ADROIT dexterous manipulation suite.
Strengths: The strength of our insight lies in its simplicity and applicability to almost any reinforcement or imitation learning algorithm that intends to learn directly from high-dimensional proprioceptive signals. We present RRL, an instantiation of this insight on top of an imitation + (on-policy) reinforcement learning method called DAPG, to showcase its strength. It presents yet another demonstration that the features learned by Resnet are quite general and broadly applicable. Resnet features trained over thousands of real-world images are more robust and resilient than the features learned by methods that learn representations and policies in tandem using only samples from the task distribution. The use of such general but frozen representations in conjunction with RL pipelines additionally avoids the non-stationarity issues faced by competing methods that simultaneously optimize reinforcement and representation objectives, leading to more stable algorithms. Additionally, not having to train your own feature extractors results in significant sample and compute gains (Figure 5).
Limitations: While this work demonstrates the promise of using pre-trained features, it doesn't investigate the data mismatch problem that might exist. The real-world datasets used to train Resnet features come from human-centric environments. While we desire robots to operate in similar settings, there are still differences in their morphology and modes of operation. Additionally, Resnet (and similar models) acquire features from data primarily comprised of static scenes. In contrast, embodied agents demand rich features of dynamic and interactive movements.
Opportunities: RRL uses a single pre-trained representation for solving all of these complex and very different tasks. Unlike the domains of vision and language, there is a nontrivial cost associated with data in robotics. The possibility of having a standard shared representational space opens up avenues for leveraging data from various sources, building hardware-accelerated devices using feature compression, and low-latency, low-bandwidth information transmission.
## REFERENCES
[1] Ronald J. Williams. "Simple statistical gradient-following algorithms for connectionist reinforcement learning". In: Machine Learning. 1992, pp. 229-256.

[2] S. Kakade. "A Natural Policy Gradient". In: NIPS. 2001.

[3] Sham M Kakade. "A natural policy gradient". In: Advances in neural information processing systems 14 (2001).

[4] Sergey Levine and Vladlen Koltun. "Guided Policy Search". In: Proceedings of the 30th International Conference on Machine Learning. Ed. by Sanjoy Dasgupta and David McAllester. Vol. 28. Proceedings of Machine Learning Research 3. Atlanta, Georgia, USA: PMLR, 17-19 Jun 2013, pp. 1-9. URL: http://proceedings.mlr.press/v28/levine13.html.

[5] Volodymyr Mnih et al. Playing Atari with Deep Reinforcement Learning. 2013. arXiv: 1312.5602 [cs.LG].

[6] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation Learning: A Review and New Perspectives. 2014. arXiv: 1206.5538 [cs.LG].

[7] Diederik P Kingma and Max Welling. Auto-Encoding Variational Bayes. 2014. arXiv: 1312.6114 [stat.ML].

[8] Kaiming He et al. Deep Residual Learning for Image Recognition. 2015. arXiv: 1512.03385 [cs.CV].

[9] Volodymyr Mnih et al. "Human-level control through deep reinforcement learning". In: Nature 518.7540 (Feb. 2015), pp. 529-533. ISSN: 00280836. URL: http://dx.doi.org/10.1038/nature14236.

[10] Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. 2015. arXiv: 1409.1556 [cs.CV].

[11] Jost Tobias Springenberg et al. Striving for Simplicity: The All Convolutional Net. 2015. arXiv: 1412.6806 [cs.LG].

[12] Christian Szegedy et al. Rethinking the Inception Architecture for Computer Vision. 2015. arXiv: 1512.00567 [cs.CV].

[13] Irina Higgins et al. "beta-vae: Learning basic visual concepts with a constrained variational framework". In: (2016).

[14] Sergey Levine et al. End-to-End Training of Deep Visuomotor Policies. 2016. arXiv: 1504.00702 [cs.LG].

[15] Abhishek Gupta et al. Learning Dexterous Manipulation for a Soft Robotic Hand from Human Demonstration. 2017. arXiv: 1603.06348 [cs.LG].

[16] Todd Hester et al. Deep Q-learning from Demonstrations. 2017. arXiv: 1704.03732 [cs.AI].

[17] Aravind Rajeswaran et al. "Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations". In: CoRR abs/1709.10087 (2017). arXiv: 1709.10087. URL: http://arxiv.org/abs/1709.10087.

[18] John Schulman et al. Trust Region Policy Optimization. 2017. arXiv: 1502.05477 [cs.LG].

[19] David Silver et al. "Mastering the game of Go without human knowledge". In: Nature 550 (Oct. 2017), pp. 354-359. URL: http://dx.doi.org/10.1038/nature24270.

[20] Christopher P. Burgess et al. Understanding disentangling in $\beta$-VAE. 2018. arXiv: 1804.03599 [stat.ML].

[21] Lasse Espeholt et al. IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures. 2018. arXiv: 1802.01561 [cs.LG].

[22] David Ha and Jürgen Schmidhuber. Recurrent World Models Facilitate Policy Evolution. 2018. arXiv: 1809.01999 [cs.LG].

[23] David Ha and Jürgen Schmidhuber. "World models". In: arXiv preprint arXiv:1803.10122 (2018).

[24] Tuomas Haarnoja et al. Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. 2018. arXiv: 1801.01290 [cs.LG].

[25] Irina Higgins et al. DARLA: Improving Zero-Shot Transfer in Reinforcement Learning. 2018. arXiv: 1707.08475 [stat.ML].

[26] Dmitry Kalashnikov et al. QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation. 2018. arXiv: 1806.10293 [cs.LG].

[27] Ningning Ma et al. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design. 2018. arXiv: 1807.11164 [cs.CV].

[28] Ashvin Nair et al. Visual Reinforcement Learning with Imagined Goals. 2018. arXiv: 1807.04742 [cs.LG].

[29] Aravind Rajeswaran et al. Towards Generalization and Simplicity in Continuous Control. 2018. arXiv: 1703.02660 [cs.LG].

[30] Pierre Sermanet et al. Time-Contrastive Networks: Self-Supervised Learning from Video. 2018. arXiv: 1704.06888 [cs.CV].

[31] Yuval Tassa et al. DeepMind Control Suite. 2018. arXiv: 1801.00690 [cs.AI].

[32] Lilian Weng. "Policy Gradient Algorithms". In: lilianweng.github.io/lil-log (2018). URL: https://lilianweng.github.io/lil-log/2018/04/08/policy-gradient-algorithms.html.

[33] Henry Zhu et al. Dexterous Manipulation with Deep Reinforcement Learning: Efficient, General, and Low-Cost. 2018. arXiv: 1810.06045 [cs.AI].

[34] Gabriel Dulac-Arnold, Daniel Mankowitz, and Todd Hester. "Challenges of real-world reinforcement learning". In: arXiv preprint arXiv:1904.12901 (2019).

[35] Tuomas Haarnoja et al. Soft Actor-Critic Algorithms and Applications. 2019. arXiv: 1812.05905 [cs.LG].

[36] Danijar Hafner et al. Learning Latent Dynamics for Planning from Pixels. 2019. arXiv: 1811.04551 [cs.LG].

[37] Max Jaderberg et al. "Human-level performance in 3D multiplayer games with population-based reinforcement learning". In: Science 364.6443 (May 2019), pp. 859-865. ISSN: 1095-9203. DOI: 10.1126/science.aau6249. URL: http://dx.doi.org/10.1126/science.aau6249.

[38] Tejas Kulkarni et al. Unsupervised Learning of Object Keypoints for Perception and Control. 2019. arXiv: 1906.11883 [cs.CV].

[39] Lucas Manuelli et al. kPAM: KeyPoint Affordances for Category-Level Robotic Manipulation. 2019. arXiv: 1903.06684 [cs.RO].

[40] Lucas Manuelli et al. "kpam: Keypoint affordances for category-level robotic manipulation". In: arXiv preprint arXiv:1903.06684 (2019).

[41] Anusha Nagabandi et al. Deep Dynamics Models for Learning Dexterous Manipulation. 2019. arXiv: 1909.11652 [cs.RO].

[42] OpenAI et al. Solving Rubik's Cube with a Robot Hand. 2019. arXiv: 1910.07113 [cs.LG].

[43] Zengyi Qin et al. KETO: Learning Keypoint Representations for Tool Manipulation. 2019. arXiv: 1910.11977 [cs.RO].

[44] Mark Sandler et al. MobileNetV2: Inverted Residuals and Linear Bottlenecks. 2019. arXiv: 1801.04381 [cs.CV].

[45] Ramprasaath R. Selvaraju et al. "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization". In: International Journal of Computer Vision 128.2 (Oct. 2019), pp. 336-359. ISSN: 1573-1405. DOI: 10.1007/s11263-019-01228-7. URL: http://dx.doi.org/10.1007/s11263-019-01228-7.

[46] Michael Ahn et al. "ROBEL: RObotics BEnchmarks for Learning with low-cost robots". In: Conference on Robot Learning. PMLR. 2020, pp. 1300-1313.

[47] Danijar Hafner et al. Dream to Control: Learning Behaviors by Latent Imagination. 2020. arXiv: 1912.01603 [cs.LG].

[48] Ilya Kostrikov, Denis Yarats, and Rob Fergus. Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels. 2020. arXiv: 2004.13649 [cs.LG].

[49] Michael Laskin et al. Reinforcement Learning with Augmented Data. 2020. arXiv: 2004.14990 [cs.LG].

[50] Aravind Rajeswaran, Igor Mordatch, and Vikash Kumar. A Game Theoretic Framework for Model Based Reinforcement Learning. 2020. arXiv: 2004.07804 [cs.LG].

[51] Aravind Srinivas, Michael Laskin, and Pieter Abbeel. CURL: Contrastive Unsupervised Representations for Reinforcement Learning. 2020. arXiv: 2004.04136 [cs.LG].

[52] Adam Stooke et al. Decoupling Representation Learning from Reinforcement Learning. 2020. arXiv: 2009.08319 [cs.LG].

[53] A.K Subramanian. PyTorch-VAE. https://github.com/AntixK/PyTorch-VAE. 2020.

[54] Mingxing Tan and Quoc V. Le. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. 2020. arXiv: 1905.11946 [cs.LG].

[55] Denis Yarats and Ilya Kostrikov. Soft Actor-Critic (SAC) implementation in PyTorch. https://github.com/denisyarats/pytorch_sac. 2020.

[56] Denis Yarats et al. Improving Sample Efficiency in Model-Free Reinforcement Learning from Images. 2020. arXiv: 1910.01741 [cs.LG].

[57] Yang You et al. KeypointNet: A Large-scale 3D Keypoint Dataset Aggregated from Numerous Human Annotations. 2020. arXiv: 2002.12687 [cs.CV].

[58] Albert Zhan et al. A Framework for Efficient Robotic Manipulation. 2020. arXiv: 2012.07975 [cs.RO].

[59] Henry Zhu et al. The Ingredients of Real-World Robotic Reinforcement Learning. 2020. arXiv: 2004.12570 [cs.LG].

[60] Rewon Child. Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images. 2021. arXiv: 2011.10650 [cs.LG].

[61] Austin Stone et al. The Distracting Control Suite - A Challenging Benchmark for Reinforcement Learning from Pixels. 2021. arXiv: 2101.02722 [cs.RO].

[62] Chelsea Finn et al. "Learning Visual Feature Spaces for Robotic Manipulation with Deep Spatial Autoencoders". In: ( ).
## VII. APPENDIX
## A. Project's webpage
Full details of the project (including video results, codebase, etc.) are available at https://sites.google.com/view/abstractions4rl.
## B. Overview of all methods used in baselines and ablations
The environment settings and feature extractors used for all the variations and methods considered are summarized in Table VII-B.
<table><tr><td rowspan="2"/><td colspan="3">Observation</td><td rowspan="2">Latent Features</td><td rowspan="2">Demos</td><td rowspan="2">Rewards</td></tr><tr><td>Vision (RGB)</td><td>Joint Encoders</td><td>Environment State</td></tr><tr><td>RRL(Ours)</td><td>✓</td><td>✓</td><td/><td>Resnet34</td><td>✓</td><td>Sparse</td></tr><tr><td>RRL(Resnet18)</td><td>✓</td><td>✓</td><td/><td>Resnet18</td><td>✓</td><td>Sparse</td></tr><tr><td>RRL(Resnet50)</td><td>✓</td><td>✓</td><td/><td>Resnet50</td><td>✓</td><td>Sparse</td></tr><tr><td>RRL (VAE)</td><td>✓</td><td>✓</td><td/><td>VAE</td><td>✓</td><td>Sparse</td></tr><tr><td>RRL(Vision)</td><td>✓</td><td/><td/><td>Resnet34</td><td>✓</td><td>Sparse</td></tr><tr><td>FERM</td><td>✓</td><td>✓</td><td/><td/><td>✓</td><td>Sparse</td></tr><tr><td>NPG(State)</td><td/><td>✓</td><td>✓</td><td/><td/><td>Sparse</td></tr><tr><td>NPG(Vision)</td><td>✓</td><td/><td/><td>Resnet34</td><td/><td>Sparse</td></tr><tr><td>DAPG(State)</td><td/><td>✓</td><td>✓</td><td/><td>✓</td><td>Sparse</td></tr><tr><td>RRL(Sparse)</td><td>✓</td><td>✓</td><td/><td>Resnet34</td><td>✓</td><td>Sparse</td></tr><tr><td>RRL(Dense)</td><td>✓</td><td>✓</td><td/><td>Resnet34</td><td>✓</td><td>Dense</td></tr><tr><td>RRL(Noise)</td><td>✓</td><td>✓</td><td/><td>Resnet34</td><td>✓</td><td>Sparse</td></tr><tr><td>RRL(Vision + Sensors)</td><td>✓</td><td>✓</td><td/><td>Resnet34</td><td>✓</td><td>Sparse</td></tr><tr><td>RRL(ShuffleNet)</td><td>✓</td><td>✓</td><td/><td>ShuffleNet-v2</td><td>✓</td><td>Sparse</td></tr><tr><td>RRL(MobileNet)</td><td>✓</td><td>✓</td><td/><td>MobileNet-v2</td><td>✓</td><td>Sparse</td></tr><tr><td>RRL(vdvae)</td><td>✓</td><td>✓</td><td/><td>Very Deep VAE</td><td>✓</td><td>Sparse</td></tr></table>
## C. RRL(Ours)
<table><tr><td>Parameters</td><td>Setting</td></tr><tr><td>BC batch size</td><td>32</td></tr><tr><td>BC epochs</td><td>5</td></tr><tr><td>BC learning rate</td><td>0.001</td></tr><tr><td>Policy Size</td><td>(256, 256)</td></tr><tr><td>vf _batch_size</td><td>64</td></tr><tr><td>vf_epochs</td><td>2</td></tr><tr><td>rl_step_size</td><td>0.05</td></tr><tr><td>rl_gamma</td><td>0.995</td></tr><tr><td>rl_gae</td><td>0.97</td></tr><tr><td>lam_0</td><td>0.01</td></tr><tr><td>lam_1</td><td>0.95</td></tr></table>
TABLE II
HYPERPARAMETER DETAILS FOR ALL THE RRL VARIATIONS.
The same parameters are used across all the tasks (Pen, Door, Hammer, Relocate, PegInsertion, Reacher) unless explicitly mentioned. The sparse reward setting is used in all the hand manipulation environments, as proposed by Rajeswaran et al., along with 25 expert demonstrations. We have directly used the parameters (summarized in Table II) provided by DAPG without any additional hyperparameter tuning, except for the policy size (kept the same across all tasks). On the Adroit manipulation tasks, 200 trajectories per iteration are collected for Hammer-v0, Door-v0, and Relocate-v0, and 400 trajectories per iteration for Pen-v0.
## D. Results on MJRL Environment
We benchmark the performance of RRL on two of the MJRL environments [50], Reacher and Peg Insertion, in Figure 10. These environments are quite low-dimensional (7-DoF robotic arm) compared to the Adroit hand (24 DoF) but still require a rich understanding of the task. In the Peg Insertion task, RRL delivers results comparable to state-based DAPG(State) and significantly outperforms FERM. However, in the Reacher task, we notice that DAPG(State) and FERM perform surprisingly well whereas RRL initially struggles. This highlights that using task-specific representations in simple, low-dimensional environments can be beneficial, as it is easy to overfit the feature encoder to the task at hand, while the Resnet features are quite generic. For the MJRL environments, the shaped reward setting is used as provided in the repository${}^{2}$, along with 200 expert demonstrations. For the Peg Insertion task, 200 trajectories are collected per iteration, and for the Reacher task, 400.

Fig. 10. Results on the MJRL environments. RRL outperforms FERM and delivers results on par with DAPG(State) in the PegInsertion task. In Reacher, FERM outperforms RRL, suggesting that learning task-specific representations is easier in simple tasks.
## E. Other variations of RRL
a) RRL(MobileNet), RRL(ShuffleNet): The encoders (ShuffleNet [27] and MobileNet [44]) are pretrained on the ImageNet dataset using a classification objective. We pick the pretrained models directly from torchvision and freeze the parameters during the entire training of the RL agent. Similar to RRL(Ours), the last layer of the model is removed, and latent features of dimension 1024 (ShuffleNet) and 1280 (MobileNet) are used.
b) RRL(vdvae): We use a very recent state-of-the-art hierarchical VAE [60] that is trained on the ImageNet dataset. The code, along with the pretrained weights, is made publicly available by the authors${}^{3}$. We use the intermediate features of the encoder, of dimension 512. All the parameters are frozen, similar to RRL(Ours).
## F. DMControl Experiment Details
For RAD [49], CURL [51], SAC+AE [56], and State SAC [35], we report the numbers directly as provided by Laskin et al. For RRL+SAC, Resnet34 is used as a fixed feature extractor, and the past three output features (frame_stack $= 3$) are used as the state representation in the SAC algorithm. For the fixed RAD encoder, we train the RL agent along with the RAD encoder using the default hyperparameters provided by the authors for the Cartpole environment. We then use the trained encoder as a fixed feature extractor and retrain the policies for all the tasks. The frame_skip values are task-specific, as mentioned in [56] and also outlined in Table IV. The hyperparameters used are summarized in Table III, where a grid search is made over actor_lr $= \{ {1e} - 3,{1e} - 4\}$, critic_lr $= \{ {1e} - 3,{1e} - 4\}$, critic_update_freq $= \{ 1,2\}$, and critic_tau $= \{ {0.01},{0.05},{0.1}\}$, and an average over 3 seeds is reported. SAC implementation in PyTorch courtesy of [55].
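The grid search above spans 2 × 2 × 2 × 3 = 24 configurations; the enumeration can be sketched as follows (illustrative Python, not the authors' tooling):

```python
import itertools

# Hyperparameter grid described in the text (values copied verbatim).
grid = {
    "actor_lr": [1e-3, 1e-4],
    "critic_lr": [1e-3, 1e-4],
    "critic_update_freq": [1, 2],
    "critic_tau": [0.01, 0.05, 0.1],
}

# Cartesian product over the grid: 2 * 2 * 2 * 3 = 24 configurations,
# each averaged over 3 seeds in the protocol described above.
configs = [dict(zip(grid, values))
           for values in itertools.product(*grid.values())]
```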
|
| 470 |
+
|
| 471 |
+
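The grid search described above amounts to enumerating all 24 hyperparameter combinations; a minimal sketch (the `train_sac` call in the comment is hypothetical):

```python
import itertools

# Enumerate every combination of the four swept SAC hyperparameters.
grid = {
    "actor_lr": [1e-3, 1e-4],
    "critic_lr": [1e-3, 1e-4],
    "critic_update_freq": [1, 2],
    "critic_tau": [0.01, 0.05, 0.1],
}

keys = list(grid)
configs = [dict(zip(keys, values)) for values in itertools.product(*grid.values())]
print(len(configs))  # 24 settings (2 * 2 * 2 * 3)

# Each config would then be run for 3 seeds and the returns averaged, e.g.:
# for cfg in configs:
#     scores = [train_sac(cfg, seed=s) for s in range(3)]  # train_sac is hypothetical
#     report(cfg, sum(scores) / len(scores))
```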
## G. RRL(VAE)

For training, we collected a dataset of 1 million images of size ${64} \times {64}$ . Of these, ${25}\%$ were collected using an optimal course of actions (expert policy), 25% with a little noise (expert policy + small noise), 25% with an even higher level of noise (expert policy + large noise), and the remaining portion by randomly sampling actions (random actions). This ensures that the collected images sufficiently represent the distribution faced by the policy during the training of the agent. We observed that this significantly helps compared to collecting data only from the expert policy. The variational autoencoder (VAE) is trained using a reconstruction objective [7] for 10 epochs. Figure 11 showcases the reconstructed images. We used a latent size of 512 for a fair comparison with Resnet. The weights of the encoder are frozen and used as feature extractors in place of Resnet in RRL. RRL(VAE) also uses the inputs from the proprioceptive sensors along with the encoded features. VAE implementation courtesy [53].
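The 25/25/25/25 collection mix can be sketched as follows. `expert_action` is a hypothetical stand-in for the real expert policy, and the budget is scaled down from 1M images to keep the example small:

```python
import numpy as np

rng = np.random.default_rng(0)

def expert_action(obs):
    return np.tanh(obs)  # placeholder for the real expert policy

def collect_action(obs, bucket):
    # Four collection regimes: expert, expert + small noise,
    # expert + large noise, and uniformly random actions.
    if bucket == "expert":
        return expert_action(obs)
    if bucket == "small_noise":
        return expert_action(obs) + rng.normal(0, 0.1, obs.shape)
    if bucket == "large_noise":
        return expert_action(obs) + rng.normal(0, 0.5, obs.shape)
    return rng.uniform(-1, 1, obs.shape)  # "random" bucket

total = 1000  # stands in for the 1M images used in the paper
buckets = ["expert", "small_noise", "large_noise", "random"]
quota = {b: total // 4 for b in buckets}  # 25% each

dataset = []
for bucket in buckets:
    for _ in range(quota[bucket]):
        obs = rng.normal(size=3)
        dataset.append((bucket, collect_action(obs, bucket)))

print(len(dataset))  # 1000
```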
---
${}^{2}$ https://github.com/aravindr93/mjrl
${}^{3}$ https://github.com/openai/vdvae
---
<table><tr><td>Parameter</td><td>Setting</td></tr><tr><td>frame_stack</td><td>3</td></tr><tr><td>replay_buffer_capacity</td><td>100000</td></tr><tr><td>init_steps</td><td>1000</td></tr><tr><td>batch_size</td><td>128</td></tr><tr><td>hidden_dim</td><td>1024</td></tr><tr><td>critic_lr</td><td>1e-3</td></tr><tr><td>critic_beta</td><td>0.9</td></tr><tr><td>critic_tau</td><td>0.01</td></tr><tr><td>critic_target_update_freq</td><td>2</td></tr><tr><td>actor_lr</td><td>1e-3</td></tr><tr><td>actor_beta</td><td>0.9</td></tr><tr><td>actor_log_std_min</td><td>-10</td></tr><tr><td>actor_log_std_max</td><td>2</td></tr><tr><td>actor_update_freq</td><td>2</td></tr><tr><td>discount</td><td>0.99</td></tr><tr><td>init_temperature</td><td>0.1</td></tr><tr><td>alpha_lr</td><td>1e-4</td></tr><tr><td>alpha_beta</td><td>0.5</td></tr></table>
TABLE III SAC HYPERPARAMETERS.
<table><tr><td>Environment</td><td>action_repeat</td></tr><tr><td>Cartpole, Swing</td><td>8</td></tr><tr><td>Reacher, Easy</td><td>4</td></tr><tr><td>Cheetah, Run</td><td>4</td></tr><tr><td>Cup, Catch</td><td>4</td></tr><tr><td>Walker, Walk</td><td>2</td></tr><tr><td>Finger, Spin</td><td>2</td></tr></table>
TABLE IV ACTION REPEAT VALUES FOR DMCONTROL SUITE
## H. Visual Distractor Evaluation details

Fig. 12. COL1: Original images; COL2: Change in light position; COL3: Change in light direction; COL4: Randomized object colors; COL5: A random object introduced in the scene. All parameters are randomly resampled every episode.
To test the generalization performance of RRL and FERM [58], we subject the environment to various kinds of visual distractions during inference (Figure 12). Note that all parameters are frozen during this evaluation, and an average performance over 75 rollouts is reported. The following distractors were used during inference to test the robustness of the final policy:
- Random change in light position.
- Random change in light direction.
- Random object color. (Handle, door color for Door-v0; Different hammer parts and nail for Hammer-v0)
- Introducing a new object in scene - random color, position, size and geometry (Sphere, Capsule, Ellipsoid, Cylinder, Box).
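A per-episode distractor sampler along these axes could look like the following sketch; the parameter ranges are illustrative, not the ones used in the actual evaluation:

```python
import random

random.seed(0)

# The five geometry options listed above for the newly introduced object.
GEOMETRIES = ["sphere", "capsule", "ellipsoid", "cylinder", "box"]

def sample_distractors():
    # One fresh draw of every distractor parameter, resampled each episode.
    return {
        "light_position": [random.uniform(-1, 1) for _ in range(3)],
        "light_direction": [random.uniform(-1, 1) for _ in range(3)],
        "object_color": [random.random() for _ in range(3)],  # RGB in [0, 1]
        "new_object": {
            "geometry": random.choice(GEOMETRIES),
            "position": [random.uniform(-0.5, 0.5) for _ in range(3)],
            "size": random.uniform(0.01, 0.1),
            "color": [random.random() for _ in range(3)],
        },
    }

# Resample once per episode, then average success over the 75 evaluation rollouts.
episode_cfg = sample_distractors()
print(episode_cfg["new_object"]["geometry"])
```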
## I. Compute Cost calculation
We calculate the actual compute cost involved for all the methods we consider (RRL(Ours), FERM, RRL(Resnet-50), RRL(Resnet-18)). Since in a real-world scenario there is no simulation of the environment, we do not include the cost of simulation in the calculation. For a fair comparison we show the compute cost at the same sample complexity (4 million steps) for all the methods. FERM is quite compute intensive (almost 5x RRL(Ours)) because (a) data augmentation is applied at every step and (b) the parameters of the actor and critic are updated once or twice at every step (compute results shown are with one update per step), whereas most of the computation of RRL goes into encoding features using Resnet. The cost of VAE pretraining is included in the overall cost. RRL(Ours), which uses Resnet-34, strikes a balance between computational cost and performance. Note: no parallel processing is used while calculating the cost.
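The accounting reduces to per-step cost multiplied by the shared 4M-step budget. The millisecond figures in this sketch are made-up placeholders chosen only to mirror the qualitative ~5x claim above, not measured values:

```python
# Back-of-the-envelope compute-cost comparison at a fixed sample budget.
STEPS = 4_000_000  # same sample complexity for every method

per_step_ms = {
    "RRL(Ours)": 2.0,   # dominated by Resnet-34 feature encoding (placeholder)
    "FERM":      10.0,  # augmentation + one actor/critic update per step (placeholder)
}

# Total cost in compute-hours: per-step milliseconds * steps, converted to hours.
total_hours = {name: ms * STEPS / 1000 / 3600 for name, ms in per_step_ms.items()}
for name, hours in sorted(total_hours.items()):
    print(f"{name}: {hours:.1f} compute-hours")
```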
papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/R-W8K2RyVp7/Initial_manuscript_tex/Initial_manuscript.tex
§ RRL: RESNET AS REPRESENTATION FOR REINFORCEMENT LEARNING
Rutav Shah ${}^{1}$ and Vikash Kumar ${}^{2,3}$
Abstract-Generalist robots capable of performing dexterous, contact-rich manipulation tasks will enhance productivity and provide care in un-instrumented settings like homes. Such tasks warrant operating in the real world using only the robot's proprioceptive sensors such as onboard cameras, joint encoders, etc., which can be challenging for policy learning owing to high dimensionality and partial observability. We propose RRL: Resnet as representation for Reinforcement Learning - a straightforward yet effective approach that can learn complex behaviors directly from proprioceptive inputs. RRL fuses features extracted from a pre-trained Resnet into the standard reinforcement learning pipeline and delivers results comparable to learning directly from the state. On a simulated dexterous manipulation benchmark, where state-of-the-art methods fail to make significant progress, RRL delivers contact-rich behaviors. The appeal of RRL lies in its simplicity in bringing together progress from the fields of Representation Learning, Imitation Learning, and Reinforcement Learning. Its effectiveness in learning behaviors directly from visual inputs with performance and sample efficiency matching learning directly from the state, even in complex high dimensional domains, is far from obvious.
§ I. INTRODUCTION
Recently, Reinforcement Learning (RL) has seen tremendous momentum and progress [9, 19, 37, 21] in learning complex behaviors from states [18, 24, 17]. Most success stories, however, are limited to simulations or instrumented laboratory conditions, as the real world doesn't provide direct access to its internal state. Beyond state-space learning, visual observation spaces have also found reasonable success [26, 42]. However, the majority of these methods have been tested on low-dimensional, 2D tasks [31] that lack depth information. Contact-rich manipulation tasks, on the other hand, are high dimensional and necessitate intricate details in order to be completed successfully. To deliver on the promise presented by data-driven techniques, we need efficient methods that can learn complex behaviors unobtrusively, without the need for environment instrumentation.
Learning without environment instrumentation, especially in unstructured settings like homes, can be quite challenging [59, 34, 46]. Challenges include: (a) decision making with incomplete information owing to partial observability, as agents must rely only on proprioceptive on-board sensors (vision, touch, joint position encoders, etc.) to perceive and act; (b) the influx of sensory information making the input space quite high dimensional; (c) information contamination due to sensory noise and task-irrelevant conditions like lighting, shadows, etc.; and (d) most importantly, the scene being flushed with information irrelevant to the task (background, clutter, etc.). An agent learning under these constraints is forced to take a large number of samples simply to untangle these task-irrelevant details before it makes any progress on the true task objective. A common approach to handle these high dimensionality and multi-modality issues is to learn representations that distil the information into low dimensional features and use them as inputs to the policy. While such ideas have found reasonable success [43, 40], designing such representations in a supervised manner requires a deep understanding of the problem and domain expertise. An alternative is to leverage unsupervised representation learning to autonomously acquire representations based on either a reconstruction [13, 59, 56] or a contrastive [51, 52] objective. These methods are quite brittle, as the representations are acquired from narrow task-specific distributions [61] and hence do not generalize well across different tasks (Table II). Additionally, they acquire task-specific representations, often needing additional samples from the environment (leading to poor sample efficiency) or domain-specific data augmentations for training the representations.
< g r a p h i c s >
Fig. 1. RRL (Resnet as representation for Reinforcement Learning) takes a small step towards bridging the gap between Representation learning and Reinforcement learning. RRL pre-trains an encoder on a wide variety of real-world classes (the ImageNet dataset) using a simple supervised classification objective. Since the encoder is exposed to a much wider distribution of images during pretraining, it remains effective under whatever distribution the policy might induce during the training of the agent. This allows us to freeze the encoder after pretraining without any additional effort.
The key idea behind our method stems from an intuitive observation about the desiderata of a good representation: (a) it should be low dimensional, for compactness. (b) it should capture salient features encapsulating the diversity and variability present in a real-world task, for better generalization. (c) it should be robust to irrelevant information like noise, lighting, viewpoints, etc., so that it is resilient to changes in the surroundings. (d) it should provide an effective representation over the entire distribution a policy can induce, for effective learning. These requirements are quite harsh, needing extreme domain expertise to design manually and an abundance of samples to acquire automatically. Can we acquire this representation without any additional effort? Our work takes a very small step in this direction.
${}^{1}$ Department of Computer Science and Engineering, Indian Institute of Technology, Kharagpur, India rutavms@gmail.com
${}^{2}$ Department of Computer Science, University of Washington, Seattle, USA vikash@cs.washington.edu
${}^{3}$ Facebook AI Research, USA
The key insight behind our method (Figure 1) is embarrassingly simple - representations do not necessarily have to be trained on the exact task distribution; a representation trained on a sufficiently wide distribution of real-world scenarios will remain effective on any distribution a policy optimizing a task in the real world might induce. While training over such a wide distribution is demanding, this is precisely what the success of large image classification models [8, 10, 54, 12] in Computer Vision delivers - representations learned over a large family of real-world scenarios.
Our Contributions: We list our major contributions below.
1) We present a surprisingly simple method (RRL) at the intersection of representation learning, imitation learning (IL) and reinforcement learning (RL) that uses features from a pre-trained image classification model (Resnet34) as representations in the standard RL pipeline. Our method is quite general and can be incorporated with minimal changes into most state-based RL/IL algorithms.
2) Task-specific representations learned by supervised as well as unsupervised methods are usually brittle and suffer from distribution mismatch. We demonstrate that features learned by image classification models generalize across different tasks (Figure 2), are robust to visual distractors, and, when used in conjunction with standard IL and RL pipelines, can efficiently acquire policies directly from proprioceptive inputs.
3) While competing methods have restricted results primarily to planar tasks devoid of depth perspectives, on a rich collection of simulated high dimensional dexterous manipulation tasks, where state-of-the-art methods struggle, we demonstrate that RRL can learn rich behaviors directly from visual inputs with performance & sample efficiency approaching state-based methods.
4) Additionally, we underline the performance gap between the SOTA approaches and RRL on simple low dimensional tasks as well as on more realistic high dimensional tasks. Furthermore, we experimentally establish that the environments commonly used for studying image-based continuous control methods are not a true representative of real-world scenarios.
§ II. RELATED WORK
RRL rests on recent developments from the fields of Representation Learning, Imitation Learning and Reinforcement Learning. In this section, we outline related works leveraging representation learning for visual reinforcement and imitation learning.
§ A. LEARNING WITHOUT EXPLICIT REPRESENTATION
A common approach is to learn behaviors in an end-to-end fashion - from pixels to actions - without an explicit distinction between feature representation and policy representation. Success stories in this category range from the seminal work [5] mastering Atari 2600 computer games using only raw pixels as input, to [14], which learns trajectory-centric local policies using Guided Policy Search [4] for diverse continuous-control manipulation tasks in the real world directly from camera inputs. More recently, [35] has demonstrated success in acquiring multi-finger dexterous manipulation [33] and agile locomotion behaviors using off-policy actor-critic methods [24]. While learning directly from pixels has found reasonable success, it requires training large networks with high input dimensionality. Agents require a prohibitively large number of samples to untangle task-relevant information in order to acquire behaviors, limiting their application to simulations or constrained lab settings. RRL maintains an explicit representation network to extract low dimensional features. Decoupling representation learning from policy learning delivers results with large gains in efficiency. Next, we outline related works that use explicit representations.
< g r a p h i c s >
Fig. 2. Visualization of Layer 4 of the Resnet model for the top-1 class using Grad-CAM [45] [Top] and Guided Backpropagation [11] [Bottom]. This indicates that Resnet is indeed looking for the right features in our task images (right) in spite of such a large distributional shift.
§ B. LEARNING WITH SUPERVISED REPRESENTATIONS
Another approach is to first acquire representations using expert supervision, and then use features extracted from the representation as inputs in standard policy learning pipelines. A predominant idea is to learn representative keypoints encapsulating task details from the input images and to use the extracted keypoints in place of the state information [38]. Using these techniques, [43, 39] demonstrated tool manipulation behaviors in rich scenes flushed with task-irrelevant details. [41] demonstrated simultaneous manipulation of multiple objects in the Baoding balls task on a high dimensional dexterous manipulation hand. Along with the inbuilt proprioceptive sensing at each joint, they use an RGB stereo image pair fed into a separate pre-trained tracker to produce 3D position estimates [57] for the two Baoding balls. These methods, while powerful, learn task-specific features and require expert supervision, making it harder to (a) translate to variations in tasks/environments, and (b) scale with increasing task diversity. RRL, on the other hand, uses a single task-agnostic representation with better generalization capability, making it easy to scale.
§ C. LEARNING WITH UNSUPERVISED REPRESENTATIONS
With the ambition of being scalable, this group of methods intends to acquire representations via unsupervised techniques. [30] uses contrastive learning to time-align visual features across different embodiments to demonstrate behavior transfer from a human to a Fetch robot. [20], [62, 59] use variational inference [7, 20] to learn compressed latent representations and use them as input to the standard RL pipeline to demonstrate rich manipulation behaviors. [47] additionally learns dynamics models directly in the latent space and uses model-based RL to acquire behaviors on simulated tasks. On similar tasks, [36] uses multi-step variational inference to learn world dynamics as well as reward models for off-policy RL. [51] uses image augmentation with variational inference to construct features to be used in the standard RL pipeline and demonstrates performance on par with learning directly from the state. [49, 48] demonstrate comparable results by assimilating updates over features acquired only via image augmentation. Similar to supervised methods, unsupervised methods often learn brittle, task-specific representations that break when subjected to small variations in the surroundings, and often suffer from non-stationarity arising from the mismatch between the distribution the representations are learned on and the distribution the policy induces. To induce stability, RRL uses pre-trained stationary representations trained on a distribution with wider support than what the policy can induce. Additionally, representations learned over a wide distribution of real-world samples are robust to noise and irrelevant information like lighting, illumination, etc.
§ D. LEARNING WITH REPRESENTATIONS AND DEMONSTRATIONS
Learning from demonstrations has a rich history. We focus our discussion on DAPG [17], a state-based method which optimizes the natural gradient [2] of a joint loss with both imitation and reinforcement objectives. DAPG has been demonstrated to outperform competing methods [15, 16] on the high dimensional ADROIT dexterous manipulation task suite we test on. RRL extends DAPG to solve the task suite directly from proprioceptive signals with performance and sample efficiency comparable to state-DAPG. Unlike DAPG, which is on-policy, FERM [58] is a closely related off-policy actor-critic method combining learning from demonstrations with RL. FERM builds on RAD [49] and inherits its challenges, such as learning task-specific representations. We demonstrate via experiments that RRL is more stable, more robust to various distractors, and convincingly outperforms FERM, since RRL uses a fixed feature extractor pre-trained over a wide variety of real-world images and avoids learning task-specific representations.
§ III. BACKGROUND
RRL solves a standard Markov decision process (Section III-A) by combining three fundamental building blocks - (a) Policy gradient algorithm (Section III-B), (b) Demonstration bootstrapping (Section III-C), and (c) Representation learning (Section III-D). We briefly outline these fundamentals before detailing our method in Section IV.
§ A. PRELIMINARIES: MDP
We model the control problem as a Markov decision process (MDP), defined by the tuple $\mathcal{M} = \left( {\mathcal{S},\mathcal{A},\mathcal{R},\mathcal{T},{\rho }_{0},\gamma }\right)$ . $\mathcal{S} \subseteq {\mathbb{R}}^{n}$ and $\mathcal{A} \subseteq {\mathbb{R}}^{m}$ represent the states and actions. $\mathcal{R} : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ is the reward function; in the ideal case, this function is simply an indicator for task completion (sparse reward setting). $\mathcal{T} : \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{S}$ is the transition dynamics, which can be stochastic. In model-free RL, we do not assume any knowledge about the transition function and require only sampling access to it. ${\rho }_{0}$ is the probability distribution over initial states and $\gamma \in \lbrack 0,1)$ is the discount factor. We wish to solve for a stochastic policy of the form $\pi : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ which optimizes the expected sum of rewards:
$$
\eta \left( \pi \right) = {\mathbb{E}}_{\pi ,\mathcal{M}}\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}{r}_{t}}\right\rbrack \tag{1}
$$
§ B. POLICY GRADIENT
The goal of the RL agent is to maximize the expected discounted return $\eta \left( \pi \right)$ (Equation 1) under the distribution induced by the current policy $\pi$ . Policy gradient algorithms optimize the policy ${\pi }_{\theta }\left( {a \mid s}\right)$ directly, where $\theta$ are the policy parameters, by estimating $\nabla \eta \left( \pi \right)$ . We first introduce a few standard notations: the value function ${V}^{\pi }\left( s\right)$ , the Q function ${Q}^{\pi }\left( {s,a}\right)$ , and the advantage function ${A}^{\pi }\left( {s,a}\right)$ . The advantage function can be viewed as a lower-variance version of the Q value, obtained by subtracting the state value as a baseline.
$$
{V}^{\pi }\left( s\right) = {\mathbb{E}}_{\pi ,\mathcal{M}}\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}{r}_{t} \mid {s}_{0} = s}\right\rbrack
$$

$$
{Q}^{\pi }\left( {s,a}\right) = {\mathbb{E}}_{\mathcal{M}}\left\lbrack {\mathcal{R}\left( {s,a}\right) }\right\rbrack + {\mathbb{E}}_{{s}^{\prime } \sim \mathcal{T}\left( {s,a}\right) }\left\lbrack {{V}^{\pi }\left( {s}^{\prime }\right) }\right\rbrack
$$

$$
{A}^{\pi }\left( {s,a}\right) = {Q}^{\pi }\left( {s,a}\right) - {V}^{\pi }\left( s\right) \tag{2}
$$
The gradient can be estimated using the likelihood-ratio approach and the Markov property of the problem [1], together with a sampling-based strategy:
$$
\nabla \eta \left( \pi \right) = g = \frac{1}{NT}\mathop{\sum }\limits_{{i = 0}}^{N}\mathop{\sum }\limits_{{t = 0}}^{T}{\nabla }_{\theta }\log {\pi }_{\theta }\left( {{a}_{t}^{i} \mid {s}_{t}^{i}}\right) {\widehat{A}}^{\pi }\left( {{s}_{t}^{i},{a}_{t}^{i},t}\right) \tag{3}
$$
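As a numerical illustration of Eq. 3, the sample-based estimator for a linear Gaussian policy $\pi_\theta(a \mid s) = \mathcal{N}(\theta^\top s, 1)$ (whose score is $(a - \theta^\top s)\,s$) can be sketched with synthetic data, which is not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

N, T, dim = 8, 10, 4          # trajectories, horizon, state dimension
theta = np.zeros(dim)

# Synthetic placeholder rollouts: states, actions, and advantage estimates.
states = rng.normal(size=(N, T, dim))
actions = rng.normal(size=(N, T))
advantages = rng.normal(size=(N, T))

def grad_log_pi(theta, s, a):
    # Score function of the unit-variance Gaussian policy N(theta^T s, 1).
    return (a - s @ theta) * s

# Accumulate grad log pi * advantage over all samples, then normalize by 1/(NT).
g = np.zeros(dim)
for i in range(N):
    for t in range(T):
        g += grad_log_pi(theta, states[i, t], actions[i, t]) * advantages[i, t]
g /= N * T

print(g.shape)  # (4,)
```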
Amongst the wide collection of policy gradient algorithms, we build upon Natural Policy Gradient (NPG) [2] to solve our MDP formulation owing to its stability and effectiveness in solving complex problems. We refer to [32] for a detailed background on different policy gradient approaches. In the next section, we describe how human demonstrations can be effectively used along with NPG to aid policy optimization.
§ C. DEMO AUGMENTED POLICY GRADIENT
Policy gradients with appropriately shaped rewards can solve arbitrarily complex tasks. However, real-world environments seldom provide shaped rewards, and they must be manually specified by domain experts. Learning with sparse signals, such as task-completion indicator functions, can relax the need for domain expertise in reward shaping, but results in extremely high sample complexity due to exploration challenges. DAPG [17] combines policy gradients with a few demonstrations in two ways to mitigate this issue and learn effectively from them. We represent the demonstration dataset as ${\rho }_{D} = \left\{ \left( {{s}_{t}^{\left( i\right) },{a}_{t}^{\left( i\right) },{s}_{t + 1}^{\left( i\right) },{r}_{t}^{\left( i\right) }}\right) \right\}$ , where $t$ indexes time and $i$ indexes different trajectories.
(1) Warm up the policy using a few demonstrations (25 in our setting) with a simple mean squared error (MSE) loss, i.e., initialize the policy using behavior cloning [Eq. 4]. This provides an informed policy initialization that helps resolve the early exploration issue, as the policy now pays attention to task-relevant state-action pairs, thereby reducing the sample complexity.
$$
{L}_{BC}\left( \theta \right) = \frac{1}{2}\mathop{\sum }\limits_{{i,t \in \text{ minibatch }}}{\left( {\pi }_{\theta }\left( {s}_{t}^{\left( i\right) }\right) - {a}_{t}^{\left( i\right) ,H}\right) }^{2} \tag{4}
$$
where $\theta$ are the agent parameters and ${a}_{t}^{\left( i\right) ,H}$ represents the action taken by the human expert.
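A minimal numerical sketch of this warm-up: a linear deterministic policy $\pi_\theta(s) = Ws$ is fit to synthetic "human" demonstration pairs (stand-ins for the real 25 demonstrations) by plain gradient descent on the MSE of Eq. 4:

```python
import numpy as np

rng = np.random.default_rng(0)

s_dim, a_dim, n = 5, 3, 256
W = np.zeros((a_dim, s_dim))                 # linear policy parameters

S = rng.normal(size=(n, s_dim))              # demonstration states
A_h = S @ rng.normal(size=(s_dim, a_dim))    # demonstration ("human") actions

def bc_loss(W):
    # 0.5 * mean squared error between policy actions and human actions.
    err = S @ W.T - A_h
    return 0.5 * np.mean(np.sum(err**2, axis=1))

loss_before = bc_loss(W)
for _ in range(200):                         # gradient descent on the BC loss
    err = S @ W.T - A_h
    W -= 0.05 * (err.T @ S) / n
loss_after = bc_loss(W)
print(loss_before > loss_after)  # True
```

The resulting `W` serves as the informed initialization handed to the policy-gradient phase.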
(2) DAPG builds upon the on-policy NPG algorithm [2], which uses a normalized gradient ascent procedure where the normalization is under the Fisher metric.
$$
{\theta }_{k + 1} = {\theta }_{k} + \sqrt{\frac{\delta }{{g}^{T}{\widehat{F}}_{{\theta }_{k}}^{-1}g}}{\widehat{F}}_{{\theta }_{k}}^{-1}g \tag{5}
$$
where ${\widehat{F}}_{{\theta }_{k}}$ is the Fisher information matrix at the current iterate ${\theta }_{k}$ ,
$$
{\widehat{F}}_{{\theta }_{k}} = \frac{1}{T}\mathop{\sum }\limits_{{t = 0}}^{T}{\nabla }_{\theta }\log {\pi }_{\theta }\left( {{a}_{t} \mid {s}_{t}}\right) {\nabla }_{\theta }\log {\pi }_{\theta }{\left( {a}_{t} \mid {s}_{t}\right) }^{T} \tag{6}
$$
and $g$ is the sample-based estimate of the policy gradient [Eq. 3]. To make the best use of available demonstrations, DAPG proposes a joint gradient ${g}_{\text{aug}}$ combining the task as well as the imitation objective. The imitation objective asymptotically decays over time, allowing the agent to learn behaviors surpassing the expert.
$$
{g}_{\text{aug}} = \mathop{\sum }\limits_{{\left( {s,a}\right) \in {\rho }_{\pi }}}{\nabla }_{\theta }\ln {\pi }_{\theta }\left( {a \mid s}\right) {A}^{\pi }\left( {s,a}\right) + \mathop{\sum }\limits_{{\left( {s,a}\right) \in {\rho }_{D}}}{\nabla }_{\theta }\ln {\pi }_{\theta }\left( {a \mid s}\right) w\left( {s,a}\right) \tag{7}
$$
where ${\rho }_{\pi }$ is the dataset obtained by executing the current policy, ${\rho }_{D}$ is the demonstration data, and $w\left( {s,a}\right)$ is a heuristic weighting function defined as:
$$
w\left( {s,a}\right) = {\lambda }_{0}{\lambda }_{1}^{k}\mathop{\max }\limits_{{\left( {{s}^{\prime },{a}^{\prime }}\right) \in {\rho }_{\pi }}}{A}^{\pi }\left( {{s}^{\prime },{a}^{\prime }}\right) \;\forall \;\left( {s,a}\right) \in {\rho }_{D} \tag{8}
$$
DAPG has proven successful in learning policies for dexterous manipulation tasks with reasonable sample complexity.
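The normalized update of Eqs. 5-6 can be illustrated numerically. The score vectors and vanilla gradient below are synthetic placeholders, and a small ridge term keeps the Fisher estimate invertible (an assumption of this sketch, not part of the paper's formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

dim, T, delta = 6, 50, 0.01
scores = rng.normal(size=(T, dim))          # grad log pi(a_t|s_t) per sample
g = rng.normal(size=dim)                    # vanilla policy gradient estimate

# Eq. 6: empirical Fisher as the average outer product of score vectors (+ ridge).
F = scores.T @ scores / T + 1e-6 * np.eye(dim)
nat_grad = np.linalg.solve(F, g)            # F^-1 g

# Eq. 5: step of size sqrt(delta / g^T F^-1 g) along the natural gradient.
step = np.sqrt(delta / (g @ nat_grad)) * nat_grad

# The normalization guarantees step^T F step == delta (a KL-like trust region).
kl_proxy = step @ F @ step
print(round(kl_proxy, 6))
```

The constant `step @ F @ step` is what makes the update "normalized": the step size adapts so the local quadratic change under the Fisher metric equals $\delta$ regardless of the gradient's scale.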
§ D. REPRESENTATION LEARNING
DAPG has thus far only been demonstrated to be effective with access to low-level state information, which is not readily available in the real world. DAPG is based on NPG, which works well but struggles with high input dimensionality and hence cannot be directly applied to the images acquired from onboard cameras. Representation learning [6] is the practice of learning representations of input data, typically by transforming it or extracting features from it, that make it easier to perform the task (in our case, serving in place of the exact state of the environment). Let $I \in {\mathbb{R}}^{n}$ represent the high dimensional input image; then
$$
h = {f}_{\rho }\left( I\right) \tag{9}
$$
where $f$ represents the feature extractor, $\rho$ is the distribution over which $f$ is valid, and $h \in {\mathbb{R}}^{d}$ with $d \ll n$ is the compact, low dimensional representation of $I$ . In the next section, we outline our method, which scales DAPG to learn directly from visual information.
§ IV. RRL: RESNET AS REPRESENTATION FOR RL
In an ideal RL setting, the agent interacts with the environment based on the current state, and in return, the environment outputs the next state and the reward obtained. This works well in a simulated environment but in a real-world scenario, we do not have access to this low-level state information. Instead we get the information from cameras $\left( {I}_{t}\right)$ and other onboard sensors like joint encoders $\left( {\delta }_{t}\right)$ . To overcome the challenges associated with learning from high dimensional inputs, we use representations that project information into a lower-dimensional manifolds. These representations can be (a) learned in tandem with the RL objective. However, this leads to non-stationarity issue where the distribution induced by the current policy ${\pi }_{i}$ may lie outside the expressive power of $f,{\pi }_{i} ⊄ {\rho }_{i}$ at any step $i$ during training. (b) decoupled from RL by pre-training $f$ . For this to work effectively, the feature extractor must be trained on a sufficiently wide distribution such that it covers any distribution that the policy might induce during training, ${\pi }_{i} \subset \rho \forall i$ . Getting hold of such task specific training data beforehand becomes increasingly difficult as the complexity and diversity of the task increases. To this end, we propose to use a fixed feature extractor (Section V-B) that is pretrained on a wide variety of real world scenarios like ImageNet dataset [Highlighted in purple in Figure 1]. We experimentally demonstrate that the diversity (Section V-C) of the such feature extractor allows us to use it across all tasks we considered. The use of pre-trained representations induces stability to RRL as our representations are frozen and do-not face the non-stationarity issues encountered while learning policy and representation in tandem.
The features $\left( {h}_{t}\right)$ obtained from the above feature extractor are appended with the information obtained from the internal joint encoders of the Adroit hand $\left( {\delta }_{t}\right)$. We empirically show that $\left\lbrack {{h}_{t},{\delta }_{t}}\right\rbrack$ can be used as an input to the policy as a substitute for the exact state $\left( {s}_{t}\right)$. In principle, any RL algorithm can be deployed to learn the policy; in RRL we build upon Natural Policy Gradient [3] owing to its effectiveness in solving complex high-dimensional tasks [17]. We present our full algorithm in Algorithm 1.
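As a rough sketch of this input construction (the dimensions and the linear-Gaussian form below are illustrative assumptions, not the paper's architecture), the policy simply consumes the concatenation $[h_t, \delta_t]$:

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM, JOINT_DIM, ACT_DIM = 512, 24, 30  # JOINT_DIM and ACT_DIM are assumptions

# A linear-Gaussian stand-in for the policy pi_theta([h_t, delta_t]).
W = rng.normal(scale=0.01, size=(ACT_DIM, FEAT_DIM + JOINT_DIM))
log_std = np.full(ACT_DIM, -1.0)

def act(h_t, delta_t):
    """Sample an action from the policy given image features and joint readings."""
    s_t = np.concatenate([h_t, delta_t])  # substitute for the true state s_t
    mean = W @ s_t
    return mean + np.exp(log_std) * rng.normal(size=ACT_DIM)

a_t = act(np.zeros(FEAT_DIM), np.zeros(JOINT_DIM))
```

The key point is only the concatenation: the encoder output and the joint encoders jointly replace the privileged state.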
**Algorithm 1: RRL**

- **Input:** 25 human demonstrations ${\rho }_{D}$
- Initialize using behavior cloning [Eq. 4].
- **repeat**
  - **for** $i = 1$ **to** $n$ **do**
    - **for** $t = 1$ **to** horizon **do**
      - Take action ${a}_{t} = {\pi }_{\theta }\left( \left\lbrack {\operatorname{Encoder}\left( {I}_{t}\right) ,{\delta }_{t}}\right\rbrack \right)$ and receive ${I}_{t + 1},{\delta }_{t + 1},{r}_{t + 1}$ from the environment.
    - **end for**
  - **end for**
  - Compute ${\nabla }_{\theta }\log {\pi }_{\theta }\left( {{a}_{t} \mid {s}_{t}}\right)$ for each $\left( {s,a}\right) \in {\rho }_{\pi },{\rho }_{D}$
  - Compute ${A}^{\pi }\left( {s,a}\right)$ for each $\left( {s,a}\right) \in {\rho }_{\pi }$ and $w\left( {s,a}\right)$ for each $\left( {s,a}\right) \in {\rho }_{D}$ according to Equations 2, 8
  - Calculate the policy gradient according to [7]
  - Compute the Fisher matrix [6]
  - Take the gradient ascent step according to Eq. 5.
  - Update the parameters of the value function to approximate Eq. 2: ${V}_{k}^{\pi }\left( {s}_{t}^{\left( n\right) }\right) \approx \mathop{\sum }\limits_{{{t}^{\prime } = t}}^{T}{\gamma }^{{t}^{\prime } - t}{r}_{{t}^{\prime }}^{\left( n\right) }$
- **until** satisfactory performance
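The natural-gradient step inside this loop can be sketched with a damped empirical Fisher matrix and the normalized step-size rule of NPG; the damping constant and step budget `delta` below are illustrative assumptions, not the paper's hyperparameters:

```python
import numpy as np

def npg_step(theta, score_vectors, pol_grad, delta=0.05, damping=1e-4):
    """One normalized natural policy gradient update.

    score_vectors: per-sample grad_theta log pi(a|s), shape (N, dim(theta))
    pol_grad: the (demonstration-augmented) policy gradient g
    """
    F = score_vectors.T @ score_vectors / len(score_vectors)  # empirical Fisher
    F += damping * np.eye(len(theta))                         # keep F invertible
    nat_grad = np.linalg.solve(F, pol_grad)                   # F^{-1} g
    step = np.sqrt(2.0 * delta / (pol_grad @ nat_grad + 1e-12))  # normalized step
    return theta + step * nat_grad

theta = np.zeros(4)
rng = np.random.default_rng(0)
scores = rng.normal(size=(64, 4))
g = scores.mean(axis=0)
theta_new = npg_step(theta, scores, g)
```

In practice the Fisher-vector product is computed with conjugate gradient rather than an explicit solve; the dense version is shown only for clarity.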
§ V. EXPERIMENTAL EVALUATIONS
Our experimental evaluations aim to address the following questions: (1) Do pre-trained representations acquired from a large real-world image dataset allow RRL to learn complex tasks directly from proprioceptive signals (camera inputs and joint encoders)? (2) How do RRL's performance and efficiency compare against other state-of-the-art methods? (3) How do various representational choices influence the generality and versatility of the resulting behaviors? (4) What are the effects of various design decisions on RRL? (5) Are commonly used benchmarks for studying image-based continuous control methods effective?
§ A. TASKS
The applicability of prior proprioception-based RL methods $\left\lbrack {{49},{48},{47}}\right\rbrack$ has been limited to simple low-dimensional tasks like Cartpole, Cheetah, Reacher, Finger spin, Walker, Ball in cup, etc. Moving beyond these simple domains, we investigate RRL on the Adroit manipulation suite [17], which consists of contact-rich, high-dimensional dexterous manipulation tasks (Figure 3) that have been found to be challenging even for state $\left( {s}_{t}\right)$-based methods. Furthermore, unlike prior task sets, which are fundamentally planar and devoid of depth perspective, the Adroit manipulation suite consists of visually-rich, physically-realistic tasks that demand representations capable of untangling complex depth information.
§ B. IMPLEMENTATION DETAILS
We use a standard Resnet-34 model as RRL's feature extractor. The model is pre-trained on the ImageNet classification task, which spans 1000 classes and 1.28 million training images. The last layer of the model is removed to recover a 512-dimensional feature space, and all the parameters are frozen throughout the training of the RL agent. During inference, the observations obtained from the environment are of size ${256} \times {256}$; a center crop of size ${224} \times {224}$ is fed into the model. We also evaluate our model using different Resnet sizes (Figure 7). All the hyperparameters used for training are summarized in the Appendix (Table II). We report the average performance over three random seeds for all experiments.
<graphics>
Fig. 3. ADROIT manipulation suite consisting of complex dexterous manipulation tasks involving object relocation, in-hand manipulation (pen repositioning), tool use (hammering a nail), and interacting with human-centric environments (opening a door).
§ C. RESULTS
In Figure 4, we contrast the performance of RRL against state-of-the-art baselines. We begin by observing that NPG [3] struggles to solve the suite even with full state information, which establishes the difficulty of our task suite. DAPG(State) [17] uses privileged state information and a few demonstrations from the environment to solve the tasks and poses as the best-case oracle. RRL demonstrates good performance on all the tasks, relocate being the hardest, and often approaches performance comparable to our strongest oracle, DAPG(State).
A competing baseline, FERM [58], is quite unstable on these tasks. It starts strong on the hammer and door tasks but saturates in performance. It makes slow progress on pen, and completely fails on relocate. In Figure 5 [Left] we compare the computational footprint of FERM (along with other methods, discussed in later sections) with RRL. We note that our method not only outperforms FERM but is also approximately five times more compute-efficient.
${}^{1}$ Reporting the best performance among the more than 30 configurations per task that we tried in consultation with the FERM authors.
<graphics>
Fig. 4. Performance on the ADROIT dexterous manipulation suite [17]: The state-of-the-art policy gradient method NPG(State) [29] struggles to solve the suite even with privileged low-level state information, establishing the difficulty of the suite. Amongst demonstration-accelerated methods, RRL(Ours) demonstrates stable performance and approaches the performance of DAPG(State) [17] (upper bound), a demonstration-accelerated method using privileged state information. A competing baseline, FERM [58], makes good initial, but unstable, progress on a few tasks and often saturates in performance before exhausting our computational budget (40 hours/task/seed).
<graphics>
Fig. 5. LEFT: Comparison of the computational cost of RRL with Resnet34, i.e., RRL(Ours), against FERM (the strongest baseline), RRL with Resnet18, RRL with Resnet50, RRL(VAE), RRL with ShuffleNet, RRL with MobileNet, and RRL with a Very Deep VAE baseline. CENTER, RIGHT: Influence of various environment distractions (lighting conditions, object color) on RRL(Ours) and FERM. RRL(Ours) consistently performs better than FERM in all the variations we considered.
§ D. EFFECTS OF VISUAL DISTRACTORS
In Figure 5 [Center, Right] we probe the robustness of the final policies by injecting visual distractors into the environment during inference. We note that the resilience of the Resnet features induces robustness in RRL's policies. On the other hand, the task-specific features learned by FERM are brittle, leading to a larger degradation in performance. In addition to the improved sample and time complexity resulting from the use of pre-trained features, the resilience, robustness, and versatility of Resnet features lead to policies that are also robust to visual distractors and clutter in the scene. More details about the experimental setting are provided in Section VII-H in the Appendix.
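One simple instance of such a distractor, used here as a stand-in for the lighting variation (the exact perturbations in the experiments may differ), is a global brightness rescale applied to the observation before it reaches the encoder:

```python
import numpy as np

def change_lighting(obs: np.ndarray, factor: float) -> np.ndarray:
    """obs: HxWx3 uint8 image; factor > 1 brightens, factor < 1 darkens."""
    return np.clip(obs.astype(np.float32) * factor, 0, 255).astype(np.uint8)

obs = np.full((256, 256, 3), 128, dtype=np.uint8)  # synthetic gray observation
dark = change_lighting(obs, 0.5)
bright = change_lighting(obs, 2.0)
```

A robust policy should map `obs`, `dark`, and `bright` to similar actions; a brittle one will not.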
§ E. EFFECT OF REPRESENTATION
Is Resnet lucky? To investigate whether the architectural choice of Resnet is simply fortunate, in Figure 6 we test different models pre-trained on the ImageNet dataset as RRL's feature extractors: MobileNetV2 [44], ShuffleNet [27], and a state-of-the-art hierarchical VAE [60] [refer to Section VII-E in the Appendix for more details]. Little degradation in performance is observed with respect to the Resnet model. This highlights that it is not the architectural choices in particular, but rather the dataset on which the models are pre-trained, that delivers generic features effective for RL agents.
Task-specific vs task-agnostic representations: In Figure 7, we compare the performance of (a) learning task-specific representations (VAE) and (b) a generic representation trained on a very wide distribution (Resnet). We note that RRL using Resnet34 significantly outperforms a variant, RRL(VAE) (see Section VII-G in the Appendix for details), that learns features via commonly used variational inference techniques on a task-specific dataset [22, 23, 25, 28]. This indicates that a pre-trained Resnet provides task-agnostic and superior features compared to methods that explicitly learn brittle (Section V-H) and task-specific features using additional samples from the environment. It is important to note that the latent dimensions of Resnet34 and the VAE are kept the same (512) for a fair comparison; however, the model sizes differ, as one operates on a very wide distribution while the other operates on a much narrower task-specific dataset. Additionally, we summarize the compute cost of both methods, RRL(Ours) and RRL(VAE), in Figure 5 [Left]. We notice that even though RRL(VAE) is the cheapest, its performance is quite low (Figure 7). RRL(Ours) strikes a balance between compute cost and performance.
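For concreteness, a minimal task-specific VAE encoder with a 512-d latent looks like the following; the architecture is an illustrative assumption, not the exact baseline model:

```python
import torch
import torch.nn as nn

class VAEEncoder(nn.Module):
    """Tiny conv encoder producing a 512-d latent via reparameterization."""
    def __init__(self, latent_dim: int = 512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.mu = nn.Linear(64, latent_dim)
        self.log_var = nn.Linear(64, latent_dim)

    def forward(self, x):
        h = self.conv(x)
        mu, log_var = self.mu(h), self.log_var(h)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterize
        return z, mu, log_var

enc = VAEEncoder()
z, mu, log_var = enc(torch.randn(1, 3, 224, 224))
```

Unlike the frozen Resnet, such an encoder must be trained on samples from the task distribution (with a decoder and reconstruction/KL losses), which is exactly what makes its features task-specific.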
<graphics>
Fig. 6. Effect of different types of feature extractors pre-trained on the ImageNet dataset, highlighting that not just Resnet but any feature extractor pre-trained on a sufficiently wide distribution of data remains effective.
<graphics>
Fig. 7. Influence of representation: RRL(Ours), using Resnet34 features, outperforms a commonly used representation-learning method, the VAE (RRL(VAE)). Amongst the different Resnet variations, Resnet34 strikes the balance between representational capacity and computational overhead. NPG(Resnet34) showcases the performance with Resnet34 features but without demonstration bootstrapping, indicating that representational choices alone are not enough to solve the task suite.
§ F. EFFECTS OF PROPRIOCEPTION CHOICES AND SENSOR NOISE
<graphics>
Fig. 8. Influence of proprioceptive signals on RRL (Vision+sensors, Ours): RRL(Noise) demonstrates that RRL remains effective in the presence of noisy (2%) proprioception. RRL(Vision) demonstrates that RRL remains performant with (only) visual inputs as well.
While it is hard to envision a robot without proprioceptive joint sensing, the harsh conditions of the real world can lead to noisy sensing, or even sensor failures. In Figure 8, we subjected RRL to (a) signals with $2\%$ noise in the information received from the joint encoders, RRL(Noise), and (b) only visual inputs used as proprioceptive signals, RRL(Vision). In both cases, our method remained performant with little to no degradation in performance.
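The 2% joint-encoder noise can be sketched as multiplicative Gaussian noise on each reading; both the noise model and the 24-d reading size are assumptions for illustration:

```python
import numpy as np

def noisy_joints(delta_t: np.ndarray, level: float = 0.02, rng=None) -> np.ndarray:
    """Corrupt joint encoder readings with ~level relative Gaussian noise."""
    rng = rng or np.random.default_rng()
    return delta_t * (1.0 + level * rng.standard_normal(delta_t.shape))

delta = np.ones(24)  # 24 joint readings is an assumption, not the exact Adroit count
corrupted = noisy_joints(delta, rng=np.random.default_rng(0))
```

During the RRL(Noise) evaluation, the policy would receive `corrupted` in place of `delta` at every step.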
§ G. ABLATIONS AND ANALYSIS OF DESIGN DECISIONS
In our next set of experiments, we evaluate the effect of various design decisions on our method. In Figure 7, we study the effect of different Resnet features as our representation. Resnet34, though computationally more demanding than Resnet18 (Figure 5), delivers better performance owing to its improved representational capacity and feature expressivity. A further boost in capacity (Resnet50) degrades performance, likely due to the incorporation of less useful features and an increase in the samples required to train the resulting larger policy network.
<graphics>
Fig. 9. LEFT: Influence of reward signals: RRL(Ours), using sparse rewards, remains competitive with a variation ${\mathrm{{RRL}}}_{\text{ dense }}$ using well-shaped dense rewards. RIGHT: Effect of policy size on the performance of RRL. We observe that it is quite stable across a wide range of policy sizes.
Reward design, especially for complex high-dimensional tasks, requires domain expertise. RRL replaces the need for well-shaped rewards by using a few demonstrations (to curb the exploration challenges in high-dimensional spaces) and sparse rewards (indicating task completion). This significantly lowers the domain expertise required for our method. In Figure 9-LEFT, we observe that RRL (using sparse rewards) delivers performance competitive with a variant of our method that uses well-shaped dense rewards, while being resilient to variation in policy network capacity (Figure 9-RIGHT).
§ H. RETHINKING BENCHMARKING FOR VISUAL RL
DMControl [31] is a widely used benchmark for proprioception-based RL methods such as RAD [49], SAC+AE [56], CURL [51], and DrQ [48]. While these methods perform well (Table I) on such simple DMControl tasks, they struggle to scale when met with tasks representative of real-world complexity, such as the realistic Adroit manipulation benchmark (Figure 4).
For example, we demonstrate in Figure 4 that a representative SOTA method, FERM (which uses expert demos along with RAD), struggles to perform well on the Adroit manipulation benchmark. On the contrary, RRL, using Resnet features pre-trained on a real-world image dataset, delivers results comparable to state-based methods on the Adroit manipulation benchmark while struggling on DMControl (RRL+SAC: RRL using SAC and Resnet34 features [1]). This highlights the large domain gap between the DMControl suite and the real world.
We further note that the pre-trained features learned by SOTA methods are not as widely applicable. We use a pre-trained RAD encoder (pre-trained on Cartpole) as a fixed feature extractor (Fixed RAD Encoder in Table I) and retrain the policy using these features for all environments. The performance degrades for all tasks except Cartpole. This highlights that the representations learned by RAD (even with various image augmentations) are task-specific and fail to generalize to other task sets with similar visuals. Furthermore, learning such task-specific representations is easier on simpler scenes, but their complexity grows drastically as the complexity of the tasks and scenes increases. To ensure that important problems are not overlooked, we emphasise the need for the community to move towards benchmarks representative of realistic real-world tasks.
| 500K Step Scores | RRL+SAC | RAD | Fixed RAD Encoder | CURL | SAC+AE | State SAC |
|---|---|---|---|---|---|---|
| Finger, Spin | 422 ± 102 | 947 ± 101 | 789 ± 190 | 926 ± 45 | 884 ± 128 | 923 ± 211 |
| Cartpole, Swing | 357 ± 85 | 863 ± 9 | 875 ± 01 | 845 ± 45 | 735 ± 63 | 848 ± 15 |
| Reacher, Easy | 382 ± 299 | 955 ± 71 | 53 ± 44 | 929 ± 44 | 627 ± 58 | 923 ± 24 |
| Cheetah, Run | 154 ± 23 | 728 ± 71 | 203 ± 31 | 518 ± 28 | 550 ± 34 | 795 ± 30 |
| Walker, Walk | 148 ± 12 | 918 ± 16 | 182 ± 40 | 902 ± 43 | 847 ± 48 | 948 ± 54 |
| Cup, Catch | 447 ± 132 | 974 ± 12 | 719 ± 70 | 959 ± 27 | 794 ± 58 | 974 ± 33 |
| **100K Step Scores** |  |  |  |  |  |  |
| Finger, Spin | 135 ± 67 | 856 ± 73 | 655 ± 104 | 767 ± 56 | 740 ± 64 | 811 ± 46 |
| Cartpole, Swing | 192 ± 19 | 828 ± 27 | 840 ± 34 | 582 ± 146 | 311 ± 11 | 835 ± 22 |
| Reacher, Easy | 322 ± 285 | 826 ± 219 | 162 ± 40 | 538 ± 233 | 274 ± 14 | 746 ± 25 |
| Cheetah, Run | 72 ± 63 | 447 ± 88 | 188 ± 20 | 299 ± 48 | 267 ± 24 | 616 ± 18 |
| Walker, Walk | 63 ± 07 | 504 ± 191 | 106 ± 11 | 403 ± 24 | 394 ± 22 | 891 ± 82 |
| Cup, Catch | 261 ± 57 | 840 ± 179 | 533 ± 148 | 769 ± 43 | 391 ± 82 | 746 ± 91 |

TABLE I: Results on the DMControl benchmark. RAD outperforms all the baselines, whereas RRL performs worse on the 100K and 500K environment-step benchmarks, suggesting that it is quicker to learn task-specific representations on simple tasks; the Fixed RAD Encoder results highlight that the representations learned by RAD are narrow and task-specific.
§ VI. STRENGTHS, LIMITATIONS & OPPORTUNITIES
This paper presents an intuitive idea bringing together advancements from the fields of representation learning, imitation learning, and reinforcement learning. We present a very simple method named RRL that leverages Resnet features as a representation to learn complex behaviors directly from proprioceptive signals. The resulting algorithm approaches the performance of state-based methods on the complex ADROIT dexterous manipulation suite.
Strengths: The strength of our insight lies in its simplicity and applicability to almost any reinforcement or imitation learning algorithm that intends to learn directly from high-dimensional proprioceptive signals. We present RRL, an instantiation of this insight on top of an imitation + (on-policy) reinforcement learning method called DAPG, to showcase its strength. It presents yet another demonstration that the features learned by Resnet are quite general and broadly applicable. Resnet features, trained over millions of real-world images, are more robust and resilient than the features learned by methods that learn representations and policies in tandem using only samples from the task distribution. The use of such general but frozen representations in conjunction with RL pipelines additionally avoids the non-stationarity issues faced by competing methods that simultaneously optimize reinforcement and representation objectives, leading to more stable algorithms. Additionally, not having to train one's own feature extractor results in significant sample and compute gains (refer to Figure 5).
Limitations: While this work demonstrates the promise of using pre-trained features, it does not investigate the data mismatch problem that might exist. The real-world datasets used to train Resnet features come from human-centric environments. While we desire robots to operate in similar settings, there are still differences in their morphology and modes of operation. Additionally, Resnet (and similar models) acquire features from data primarily comprised of static scenes. In contrast, embodied agents require rich features of dynamic and interactive movements.
Opportunities: RRL uses a single pre-trained representation for solving all of these complex and very different tasks. Unlike the domains of vision and language, there is a nontrivial cost associated with data in robotics. The possibility of having a standard shared representational space opens up avenues for leveraging data from various sources, building hardware-accelerated devices using feature compression, and low-latency, low-bandwidth information transmission.
papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/Zrp4wpa9lqh/Initial_manuscript_md/Initial_manuscript.md
# Learning to Grasp the Ungraspable with Emergent Extrinsic Dexterity
Wenxuan Zhou ${}^{1}$ and David Held ${}^{1}$

Fig. 1: We study the task of "Occluded Grasping" with extrinsic dexterity. The goal of this task is to reach an occluded grasp configuration (indicated by a transparent gripper attached to the object in the top row). The figure shows the emergent behavior of the trained policy which uses the wall of the bin to rotate the object to reach a grasp.
Abstract-A robot can solve more complex manipulation tasks beyond the limitations of its body if it can utilize the external environment, for example by pushing the object against the table or a vertical wall. These behaviors are known as "Extrinsic Dexterity." Previous work in extrinsic dexterity usually relies on hand-crafted primitives or careful assumptions about contacts. In this work, we explore the use of reinforcement learning (RL) for extrinsic dexterity through the task of "Occluded Grasping". The goal of the task is to grasp the object in configurations that are initially occluded; the robot must interact with the object and the extrinsic environment to move the object into a configuration from which these grasps can be achieved. To accomplish this task, we train a policy to co-optimize pre-grasp and grasping motions; this results in the emergent behavior of pushing the object against the wall in order to rotate and then grasp it. We demonstrate the generality of the learned policy across environment variations in simulation and evaluate it on a real robot with zero-shot sim2real transfer. Videos can be found at https://sites.google.com/view/grasp-ungraspable.
## I. INTRODUCTION
Humans have dexterous multi-fingered hands; however, similarly dexterous robot hands are expensive and fragile. Instead, robots can achieve dexterous manipulation with a simple hand by leveraging the environment, known as "Extrinsic Dexterity" [1]. For example, a simple gripper can rotate an object in-hand by pushing it against the table [2], or lift an object by sliding it along a vertical surface [3]. By exploiting external resources such as contact surfaces or gravity, even simple grippers can perform skillful maneuvers that are typically studied with a multi-fingered dexterous hand. In contrast to the common practice of considering the robot and an object of interest in isolation, extrinsic dexterity takes a holistic view of the interactions among the robot, the object, and the external environment.
Previous work in extrinsic dexterity has demonstrated a variety of tasks such as in-hand reorientation with a simple gripper, prehensile pushing or shared grasping [1], [2], [3]. However, the underlying approaches come with several limitations such as relying on hand-designed primitives, making assumptions about contact locations and contact modes, or requiring specific gripper design. Instead, we use reinforcement learning (RL) to remove these limitations. With reinforcement learning, the agent can learn a closed-loop policy of how the robot should interact with the object and the environment to solve the task. In addition, when trained with domain randomization, the policy can learn to be robust to different variations of physics. These properties of RL can enable extrinsic dexterity in a more general setting.
We study "Occluded Grasping" as an example of a task that requires extrinsic dexterity. Occluded Grasping is defined with the goal of grasping an object in poses that are initially occluded. Consider, for example, a robot that needs to grasp a cereal box lying on its side on a table; the desired grasp is not reachable because it is partially occluded by the table (Figure 1). To achieve this grasp with a parallel gripper, the robot might rotate the object by pushing it against a vertical wall to expose the desired grasp. This task is in contrast with existing grasping tasks, which mostly focus on reaching an unoccluded grasp in free space with a static or near-static scene [4], [5], [6]. Prior work has attempted to design pre-grasp motions for exposing occluded grasp poses using primitives or a special gripper design [7]. In our work, the pre-grasp motion is an emergent behavior arising from a novel reward function that co-optimizes exposing the grasp pose and achieving the grasp pose. In addition, we frame the task as a goal-conditioned RL problem, in which the policy is conditioned on the selected grasp. During training, the policy learns to reach as many grasp poses as possible with an automatic curriculum [8]. During testing, given a set of grasps, the policy can select one of them as a goal to execute.
In summary, we present a system for "Occluded Grasping" as an example of the combination of reinforcement learning and extrinsic dexterity. We provide a comprehensive evaluation of the system both in simulation and on a real Franka Emika Panda robot. We showcase the importance of each component and the generalization of the learned policy across environment variations in simulation and in the real world.
---
${}^{1}$ Robotics Institute, Carnegie Mellon University
---
## II. RELATED WORK
## A. Extrinsic dexterity
"Extrinsic dexterity" refers to manipulation skills that enhance the intrinsic capability of a hand using external resources, including external contacts, gravity, or dynamic motions of the arm [1]. Previous work in extrinsic dexterity has demonstrated complex manipulation tasks with a simple gripper, including in-hand reorientation [1], [9], prehensile pushing [2], [10], and shared grasping [3]. In this work, we study a different task that further demonstrates the benefit of extrinsic dexterity. Extrinsic dexterity usually involves contact-rich behaviors, which pose difficulties in planning and control. Previous work has used hand-crafted trajectories [1], task-specific motion primitives [9], [3], or motion planning over contact mode switches [2], [10], [11], [12]. These approaches impose restrictions on the contact modes between the finger and the object, which limit the motion and the design of the gripper. In this work, we take an alternative approach of using reinforcement learning to learn a closed-loop policy that considers both planning and control.
## B. Reinforcement Learning for Manipulation
Previous work that uses reinforcement learning for manipulation tasks treats the object and the robot in isolation without considering extrinsic dexterity [13], [14], [8]. In our work, we demonstrate that the agent can benefit from extrinsic dexterity when solving the occluded grasping task.
## C. Grasping
Grasping has been an important task in robot manipulation and has been studied from various aspects.
Grasp generation: One area of study in grasping is generating stable grasp configurations [15], [16], [17], [4], [18], [5], [19]. We assume that the grasps generated by any such grasp generation method are given as input to our system.
Grasp execution: To execute a grasp following grasp generation, a motion planner is usually used to generate a collision-free path towards the desired grasp configuration. If there is a set of desired grasps, integrated grasp and motion planning can be considered [20], [21], [6]. [22] uses imitation learning and reinforcement learning to finetune the trajectories from the planner. All of these works aim at achieving unoccluded grasp configurations in static or near-static scenes. Instead, our work focuses on a complementary direction: achieving occluded grasp configurations by interacting with the object of interest.
Pre-grasp manipulation: To deal with occluded grasp configurations, prior work has studied pre-grasps as a preparatory stage [23], [24], [25], [7]. [7] is the most related to our work, but they use a specially designed end-effector to perform the pre-grasp motion and then use a second gripper to grasp the object. We demonstrate that the full grasping task can be solved with a single gripper without special requirements on the end-effector. These previous works typically separate pre-grasp motion and grasp execution into two stages and impose restrictions on the transitions between the stages. In our work, we co-optimize pre-grasp and grasp execution within an episode without explicit separation of the stages. The pre-grasping behavior emerges through learning without restrictions on object or gripper motions.
End-to-end grasping: Another line of work uses an end-to-end pipeline for grasping with reinforcement learning [26] or imitation learning [27]. The policy performs an arbitrary grasp of the object without the possibility of specifying a certain set of grasps. Also, there has not been any emergent behavior of exposing occluded grasp poses in existing work.
## III. TASK DEFINITION: OCCLUDED GRASPING
Our work is designed to be used in a pipeline that follows a grasp pose generation method such as [4], [5], [19]. Given a rigid object, we assume a desired grasp $g$ as input to the system. A grasp configuration $g \in {SE}\left( 3\right)$ is defined to be the desired $6\mathrm{D}$ pose of the end-effector in the object frame $O$. The grasp is fixed with respect to the object, and it will move when the object moves. On the top row of Figure 1, an example of a desired grasp is shown as a transparent gripper attached to the object. The goal of our work is to learn grasp execution, which is to move the end-effector $E$ close to a given $g$ under a pose difference metric $\Delta \left( {g, E}\right)$. In this paper, the task is defined to be successful if the position difference ${\Delta T}\left( {g, E}\right)$ and the orientation difference ${\Delta \theta }\left( {g, E}\right)$ are less than the pre-defined thresholds ${\varepsilon }_{T}$ and ${\varepsilon }_{P}$ respectively at the end of an episode. After successfully reaching the desired grasp pose, the gripper is closed to complete the grasp. We define an "Occluded Grasping" task to be the case where the grasp $g$ is initially occluded (not in free space). When a set of grasps $G = \left\{ {g}_{i}\right\}$ is available, we may select a grasp ${g}_{i}$ from the set $G$ to execute (Appendix VII).
## IV. LEARNING OCCLUDED GRASPING WITH REINFORCEMENT LEARNING
We study the use of reinforcement learning (RL) to train a closed-loop policy for the occluded grasping task defined above. In this section, we first discuss important design choices of the system for a single target grasp, including the extrinsic environment and the design of the RL problem. We then discuss how to improve the generalization of the policy using Automatic Domain Randomization [8]. Training and evaluation procedures that handle a set of grasps can be found in Appendix VII.
## A. Extrinsic Environment
To showcase the benefits of extrinsic dexterity from object-scene interaction in this task, we construct the scene with the object in a bin, instead of leaving the object on the table (Figure 2). In Section V, we show that the emergent policy utilizes the wall of the bin to rotate the object. Without the wall, the policy is not able to find a strategy that successfully performs the task.
### B. RL Problem Design
We discuss the design of the RL problem in this section; more details can be found in Appendix I. We train a goal-conditioned policy $\pi \left( {{a}_{t} \mid {s}_{t}, g}\right)$ for this task, where the goal is a target grasp configuration $g$. The state ${s}_{t}$ includes the pose of the end-effector and the object pose. The action space of the policy is the delta pose of the end-effector ${\Delta E}$, which is sent to a low-level Operational Space Controller (OSC). The choice of OSC allows compliant movement for such a contact-rich task (see Appendix I for more discussion). The reward function is designed to co-optimize the pre-grasp motion as well as grasp execution:
$$
r = {\alpha D}\left( {g, E}\right) + \beta \mathop{\sum }\limits_{i}P\left( {m}_{i}\right) \tag{1}
$$

Fig. 2: $E$ denotes the $6\mathrm{D}$ pose of the end-effector. $g$ denotes the target grasp defined in the object frame. Marker locations ${m}_{i}$ in green on the target grasp are used to calculate the occlusion penalty.
where
$$
D\left( {g, E}\right) = {\alpha }_{1}{\Delta T}\left( {g, E}\right) + {\alpha }_{2}{\Delta \theta }\left( {g, E}\right) \tag{2}
$$
${\alpha }_{1},{\alpha }_{2}$ and $\beta$ are the weights of the reward terms. The first term of Equation 1, $D\left( {g, E}\right)$, is the pose difference between the target grasp and the current end-effector pose. This term is expanded in Equation 2 into the translational and rotational distances described in Section III. The second term of Equation 1 is the target grasp occlusion penalty, which penalizes the target gripper pose when it is occluded by the table. We place several marker points on the target gripper (Figure 2), denoted ${m}_{i}$, and compare the height of each marker with the table top. If a marker is below the table top, the height difference is used as the penalty. The occlusion penalty effectively reduces the local optimum in which the gripper reaches close to the (occluded) target grasp without trying to move the object.
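A minimal sketch of this reward, assuming the scalar pose differences and per-marker heights are already computed (the weight values and the sign convention for $\alpha$ are placeholders, not taken from the paper):

```python
def pose_distance(delta_t, delta_theta, a1=1.0, a2=0.5):
    # Eq. (2): weighted sum of translational and rotational differences;
    # the weights a1, a2 here are illustrative, not the paper's values
    return a1 * delta_t + a2 * delta_theta

def occlusion_penalty(marker_heights, table_height):
    # sum of height deficits of target-grasp markers below the table top;
    # markers above the table contribute zero
    return sum(min(0.0, h - table_height) for h in marker_heights)

def reward(delta_t, delta_theta, marker_heights, table_height,
           alpha=-1.0, beta=1.0):
    # Eq. (1); alpha is taken negative so that a smaller pose error
    # yields a higher reward (a sign-convention assumption)
    return (alpha * pose_distance(delta_t, delta_theta)
            + beta * occlusion_penalty(marker_heights, table_height))
```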
To summarize, the first term of Equation 1 optimizes for successful grasp execution and the second term encourages pre-grasp motions that move the object such that the grasp $g$ becomes unoccluded. An important difference from previous work is that the pre-grasp and grasp execution components are optimized together instead of being separated into two stages. We do not include any reward term explicitly related to extrinsic dexterity. In our system, the use of extrinsic dexterity is an emergent behavior of policy optimization given our objective and environment setup.
## C. Policy Generalization
One benefit of using RL is that it produces a closed-loop policy instead of an open-loop trajectory. A closed-loop policy can ideally generalize to a wider range of state distributions, which implies better performance over variations of environment properties such as object size, density, and friction coefficient. Generalization can be improved further by training with domain randomization over these environment variations, which also benefits sim-to-real transfer. We use Automatic Domain Randomization (ADR) [8] to improve the generalization of the policy. More implementation details can be found in Appendix II.

Fig. 3: Left: Ablations on the reward function and the walls. Right: Evaluation on the generalization of the policies by sampling 100 environments.
## V. EXPERIMENTS
## A. Training Curves and Ablations
Details of the experiment setup can be found in Appendix III. In this section, we train the policies with a single desired grasp in the default environment, without randomization of the physical parameters. From the training curve shown in Figure 3a, the policy trained with the complete system reaches a success rate of 1 before 4000 episodes, which corresponds to 160000 environment steps. We performed an ablation analysis on the design choices to determine which components are most important to the success of the system. First, we experiment with removing the wall of the bin to evaluate the importance of using the wall for extrinsic dexterity. As shown in Figure 3a, the resulting policy has a $0\%$ success rate and pushes the object off the table. Second, we performed an ablation on the reward function. When we remove the grasp pose occlusion penalty (the second term of Equation 1), the policy is more likely to get stuck at a local optimum of only trying to match the position and orientation of the gripper, and thus the average success rate across random seeds becomes lower. An alternative is to use a $\{ - 1,0\}$ sparse reward according to the success criteria defined in Section III instead of the reward defined in Equation 1. With a sparse reward, the policy learns much more slowly because exploration becomes much more difficult. In addition, ablations on the choice of controller can be found in Appendix V. We also include results for multi-grasp training and multi-grasp selection in Appendix VII.
## B. Emergent Behaviors
Figure 1 shows a typical strategy of the successful policies. The strategy involves multiple stages of contact switches. The gripper first moves close to the object and makes contact on the side of the object with the left finger. It then pushes the object against the wall to rotate it. During this stage, the gripper maintains a fixed or rolling contact with the object, while the object is usually in sliding contact with the wall and the floor of the bin at some of its corners. After the gripper has rotated the object a bit further and the right fingertip is below the object, the left finger slides on the object, or simply leaves it, to let the object drop onto the right finger. Once the object rests on the right finger, the gripper tries to match the desired pose more precisely. At this point, the policy has executed the grasp successfully and is ready to close the gripper. We include more visualizations of emergent behaviors in Appendix IV, including another type of successful strategy, local optima behavior, and multi-grasp behaviors. Videos can be found on the project website.
## C. Policy Generalization
In this section, we analyze the performance of the policy across environment variations. Robustness to environment variations may come both from the policy being closed-loop and from the randomization of the physical parameters during training. Thus, we evaluate open-loop trajectories (Open Loop), policies trained on a fixed environment (Fixed Env), and policies trained with ADR (With ADR). The open-loop trajectories are obtained by rolling out the Fixed Env policies in the default environment. We also turn off the randomization of the initial gripper pose for Open Loop; otherwise, the success rate is too low for a meaningful comparison even in the default environment. We sample 100 environments from the training range of the ADR policies (Appendix II) and plot the percentage of environments that are above a certain performance metric (Figure 3b). The closed-loop policies are much better than open-loop trajectories across environment variations. The policy trained on a fixed environment is able to generalize to a wide range of variations; with ADR, generalization improves even further. We also modify the important physical parameters one at a time to understand the sensitivity to these parameters in Appendix VI.
## D. Real-robot experiment
To further evaluate the generalization of the policies and demonstrate the feasibility of the proposed system, we execute the policies on the real robot with zero-shot sim-to-real transfer over the 6 test cases shown in Figure 4. There are four box-shaped objects with different sizes, densities, and surface friction. Box-1 has the same size and density as the default object trained in simulation. Box-2 is larger than the training range in the y-direction. Box-3 is larger than the training range in the z-direction. The surface friction differs considerably across boxes; for example, Box-3 has tape on its surface, which has much higher friction than the others (as can be seen in the videos on the website). However, we do not have access to the true friction coefficients of the objects to compare with the values in simulation. In addition, we evaluate Box-1 with additional weights by putting four or eight erasers inside the box. Note that the erasers move inside the box during execution, which is not modeled in simulation. We evaluate two types of single-grasp policies trained in simulation: one policy is trained with Automatic Domain Randomization as described in Section IV-C; the other is trained on a fixed default environment without domain randomization.

Fig. 4: Test cases for real robot experiments.
TABLE I: Real robot evaluations.
<table><tr><td>Object-ID</td><td>Size (cm)</td><td>Weight (g)</td><td>Success w/ ADR</td><td>Success w/o ADR</td></tr><tr><td>Box-1</td><td>(15.0, 20.0, 5.0)</td><td>128</td><td>10/10</td><td>10/10</td></tr><tr><td>Box-1 + 4 erasers</td><td>(15.0, 20.0, 5.0)</td><td>237</td><td>8/10</td><td>7/10</td></tr><tr><td>Box-1 + 8 erasers</td><td>(15.0, 20.0, 5.0)</td><td>345</td><td>6/10</td><td>4/10</td></tr><tr><td>Box-2</td><td>(15.4, 29.2, 5.8)</td><td>130</td><td>8/10</td><td>8/10</td></tr><tr><td>Box-3</td><td>(15.3, 22.2, 7.4)</td><td>113</td><td>10/10</td><td>4/10</td></tr><tr><td>Box-4</td><td>(15.3, 22.2, 7.4)</td><td>50</td><td>7/10</td><td>0/10</td></tr><tr><td>Average</td><td/><td/><td>0.82</td><td>0.55</td></tr></table>
We evaluate 10 episodes for each test case and summarize the results in Table I. Videos of the real robot experiments can be found on the website. Overall, the policy with ADR achieves a success rate of ${82}\%$ while the policy without ADR achieves ${55}\%$. ADR effectively improves the performance over a wider range of object variations. Note that both policies are evaluated on out-of-distribution objects: Box-1 with 8 erasers, Box-3, and Box-4 are outside the training distribution of ADR (see Appendix II); all of the test cases except the first one (Box-1) are out-of-distribution for the policy without ADR. This demonstrates the robustness of the closed-loop policies of the proposed pipeline on such a dynamic manipulation task.
## VI. CONCLUSION
We study the "Occluded Grasping" task of reaching a desired grasp configuration that is initially occluded. With a parallel gripper, the robot has to use extrinsic dexterity to solve this task. We present a system that learns a closed-loop policy for this task with reinforcement learning. In the experiments, we demonstrate that the wall, the choice of controller, and the design of the reward function are all essential components. The policy generalizes across a wide range of environment variations and can be executed on the real robot. One potential extension of our work is to train the policy with a wide variety of object shapes, which may require image-based policies. The pipeline can also potentially be applied to other extrinsic dexterity tasks.
---
https://sites.google.com/view/grasp-ungraspable
---
## REFERENCES
[1] N. C. Dafle, A. Rodriguez, R. Paolini, B. Tang, S. S. Srinivasa,
M. Erdmann, M. T. Mason, I. Lundberg, H. Staab, and T. Fuhlbrigge, "Extrinsic dexterity: In-hand manipulation with external forces," in 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014, pp. 1578-1585.
[2] N. Chavan-Dafle and A. Rodriguez, "Sampling-based planning of in-hand manipulation with external pushes," 2017.
[3] Y. Hou, Z. Jia, and M. Mason, "Manipulation with shared grasping," in Robotics: Science and Systems, 2020.
[4] A. Mousavian, C. Eppner, and D. Fox, "6-dof graspnet: Variational grasp generation for object manipulation," in International Conference on Computer Vision (ICCV), 2019.
[5] A. Murali, A. Mousavian, C. Eppner, C. Paxton, and D. Fox, "6-dof grasping for target-driven object manipulation in clutter," 2020.
[6] L. Wang, Y. Xiang, and D. Fox, "Manipulation trajectory optimization with online grasp synthesis and selection," in Robotics: Science and Systems (RSS), 2020.
[7] Z. Sun, K. Yuan, W. Hu, C. Yang, and Z. Li, "Learning pregrasp manipulation of objects from ungraspable poses," 2020.
[8] OpenAI, I. Akkaya, M. Andrychowicz, M. Chociej, M. Litwin, B. McGrew, A. Petron, A. Paino, M. Plappert, G. Powell, R. Ribas, J. Schneider, N. Tezak, J. Tworek, P. Welinder, L. Weng, Q. Yuan, W. Zaremba, and L. Zhang, "Solving rubik's cube with a robot hand," 2019.
[9] Y. Hou, Z. Jia, and M. T. Mason, "Fast planning for 3d any-pose-reorienting using pivoting," in 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018, pp. 1631-1638.
[10] N. Chavan-Dafle, R. Holladay, and A. Rodriguez, "In-hand manipulation via motion cones," 2019.
[11] X. Cheng, E. Huang, Y. Hou, and M. T. Mason, "Contact mode guided sampling-based planning for quasistatic dexterous manipulation in 2d," in 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021, pp. 6520-6526.
[12] ——, "Contact mode guided motion planning for quasidynamic dexterous manipulation in 3d," arXiv preprint arXiv:2105.14431, 2021.
[13] S. Levine, C. Finn, T. Darrell, and P. Abbeel, "End-to-end training of deep visuomotor policies," 2016.
[14] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel, "Domain randomization for transferring deep neural networks from simulation to the real world," 2017.
[15] K. B. Shimoga, "Robot grasp synthesis algorithms: A survey," The International Journal of Robotics Research, vol. 15, no. 3, pp. 230- 266, 1996.
[16] V.-D. Nguyen, "Constructing force-closure grasps," The International Journal of Robotics Research, vol. 7, no. 3, pp. 3-16, 1988.
[17] L. Pinto and A. Gupta, "Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours," in 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016, pp. 3406-3413.
[18] J. Bohg, A. Morales, T. Asfour, and D. Kragic, "Data-driven grasp synthesis-a survey," IEEE Transactions on Robotics, vol. 30, no. 2, pp. 289-309, 2013.
[19] A. Murali, W. Liu, K. Marino, S. Chernova, and A. Gupta, "Same object, different grasps: Data and semantic knowledge for task-oriented grasping," 2020.
[20] N. Vahrenkamp, M. Do, T. Asfour, and R. Dillmann, "Integrated grasp and motion planning," in 2010 IEEE International Conference on Robotics and Automation, 2010, pp. 2883-2888.
[21] J. Fontanals, B.-A. Dang-Vu, O. Porges, J. Rosell, and M. A. Roa, "Integrated grasp and motion planning using independent contact regions," in 2014 IEEE-RAS International Conference on Humanoid Robots, 2014, pp. 887-893.
[22] L. Wang, Y. Xiang, W. Yang, A. Mousavian, and D. Fox, "Goal-auxiliary actor-critic for 6d robotic grasping with point clouds," 2021.
[23] L. Y. Chang, S. S. Srinivasa, and N. S. Pollard, "Planning pre-grasp manipulation for transport tasks," in 2010 IEEE International Conference on Robotics and Automation. IEEE, 2010, pp. 2697- 2704.
[24] J. King, M. Klingensmith, C. Dellin, M. Dogar, P. Velagapudi, N. Pollard, and S. Srinivasa, "Pregrasp manipulation as trajectory optimization," in Proceedings of Robotics: Science and Systems, Berlin, Germany, June 2013.
[25] K. Hang, A. S. Morgan, and A. M. Dollar, "Pre-grasp sliding manipulation of thin objects using soft, compliant, or underactuated hands," IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 662-669, 2019.
[26] D. Kalashnikov, A. Irpan, P. Pastor, J. Ibarz, A. Herzog, E. Jang, D. Quillen, E. Holly, M. Kalakrishnan, V. Vanhoucke, and S. Levine, "Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation," 2018.
[27] S. Song, A. Zeng, J. Lee, and T. Funkhouser, "Grasping in the wild: Learning 6dof closed-loop grasping from low-cost demonstrations," Robotics and Automation Letters, 2020.
[28] O. Khatib, "A unified approach for motion and force control of robot manipulators: The operational space formulation," IEEE Journal on Robotics and Automation, vol. 3, no. 1, pp. 43-53, 1987.
[29] R. Martín-Martín, M. A. Lee, R. Gardner, S. Savarese, J. Bohg, and A. Garg, "Variable impedance control in end-effector space: An action space for reinforcement learning in contact-rich tasks," in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019, pp. 1010-1017.
[30] Y. Zhu, J. Wong, A. Mandlekar, and R. Martín-Martín, "robosuite: A modular simulation framework and benchmark for robot learning," arXiv preprint arXiv:2009.12293, 2020.
[31] E. Todorov, T. Erez, and Y. Tassa, "Mujoco: A physics engine for model-based control," in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2012, pp. 5026-5033.
[32] K. Zhang, M. Sharma, J. Liang, and O. Kroemer, "A modular robotic arm control stack for research: Franka-interface and frankapy," arXiv preprint arXiv:2011.02398, 2020.
[33] S. Rusinkiewicz and M. Levoy, "Efficient variants of the icp algorithm," in Proceedings third international conference on 3-D digital imaging and modeling. IEEE, 2001, pp. 145-152.
[34] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor," in International conference on machine learning. PMLR, 2018, pp. 1861-1870.
[35] D. Ghosh, A. Singh, A. Rajeswaran, V. Kumar, and S. Levine, "Divide-and-conquer reinforcement learning," arXiv preprint arXiv:1711.09874, 2017.
[36] T. Yu, S. Kumar, A. Gupta, S. Levine, K. Hausman, and C. Finn, "Gradient surgery for multi-task learning," arXiv preprint arXiv:2001.06782, 2020.
## Appendix I MORE DETAILS OF RL PROBLEM DESIGN
Observations: We train a goal-conditioned policy $\pi \left( {{a}_{t} \mid {s}_{t},\eta }\right)$ for this task where the goal $\eta$ is a target grasp configuration $g$ . Note that the policy only takes one grasp as input, but we discuss how to deal with a set of grasps in Appendix VII. ${s}_{t}$ includes the pose of the end-effector in the world frame ${}^{W}E$ and the object pose in the world frame ${}^{W}O$ . We also include the pose of the end-effector in the object frame ${}^{O}E = {\left( {}^{W}O\right) }^{-1}\left( {{}^{W}E}\right)$ because we found that it sometimes speeds up learning. Each pose is represented as a 3D translation vector and a 4D quaternion representation of the rotation. In summary, the input to the policy includes $\left( {g,{}^{W}E,{}^{W}O,{}^{O}E}\right)$ which has a dimension of 28 in total.
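The observation assembly might look like the following sketch; the quaternion utilities and the (position, quaternion) tuple layout are assumptions for illustration, with quaternions in (w, x, y, z) order:

```python
def q_conj(q):
    # conjugate of a unit quaternion, i.e. its inverse rotation
    w, x, y, z = q
    return (w, -x, -y, -z)

def q_mul(a, b):
    # Hamilton product of two quaternions
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def q_rot(q, v):
    # rotate vector v by unit quaternion q via q * (0, v) * q^-1
    w = q_mul(q_mul(q, (0.0, *v)), q_conj(q))
    return w[1:]

def relative_pose(o_pos, o_quat, e_pos, e_quat):
    # end-effector pose expressed in the object frame: (W_O)^-1 (W_E)
    inv_q = q_conj(o_quat)
    rel_p = q_rot(inv_q, tuple(e - o for e, o in zip(e_pos, o_pos)))
    return rel_p, q_mul(inv_q, e_quat)

def observation(g, we, wo):
    # each pose is a (position, quaternion) pair: 3 + 4 = 7 numbers,
    # and concatenating (g, W_E, W_O, O_E) gives the 28-D input
    oe = relative_pose(*wo, *we)
    flat = []
    for pos, quat in (g, we, wo, oe):
        flat += list(pos) + list(quat)
    return flat
```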
Actions: An outline of the policy execution pipeline is shown in Figure 5. The action space of the policy is the delta pose of the end-effector ${\Delta E}$ in its local frame, represented by a translation vector $p \in {\mathbb{R}}^{3}$ and a 3D rotation vector $q \in {SO}\left( 3\right)$ in axis-angle representation. Thus, the dimension of the action space is 6. ${\Delta E}$ and the current gripper pose $E$ form a desired pose ${E}_{d}$ at timestep $t$, which is sent to a low-level Operational Space Controller, discussed in the next section.
If the joint configuration corresponding to the desired pose would reach the joint limits, we overwrite the policy action and send the desired pose of the previous timestep to the low-level controller. In detail, we use the Jacobian $J$ to estimate the joint configuration of the desired pose:
$$
{\theta }_{\text{joints }}^{t + 1} = {\theta }_{\text{joints }}^{t} + {J}^{-1} \cdot {\Delta E} \tag{3}
$$

Fig. 5: Outline of policy execution: Given the goal and the observation, the policy outputs a delta movement of the end-effector. If the desired pose is within the joint limits of the robot, it is sent to the low-level controller.
where ${\theta }_{\text{joints }}$ are the joint angles. If any joint in ${\theta }_{\text{joints }}^{t + 1}$ is close to its limit, the low-level controller uses the previous desired pose instead.
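A sketch of this joint-limit guard, assuming the Jacobian pseudoinverse and the per-joint limits are supplied externally (the safety margin is a hypothetical parameter, not a value from the paper):

```python
def predict_joints(theta, j_inv, delta_e):
    # Eq. (3): theta^{t+1} = theta^t + J^-1 * delta_E;
    # j_inv has one row per joint and six columns for the pose delta
    return [t + sum(a * d for a, d in zip(row, delta_e))
            for t, row in zip(theta, j_inv)]

def guard_action(theta, j_inv, delta_e, prev_target, limits, margin=0.05):
    # overwrite the policy action with the previous desired pose if any
    # predicted joint angle comes within `margin` of its limit
    theta_next = predict_joints(theta, j_inv, delta_e)
    for t, (lo, hi) in zip(theta_next, limits):
        if t < lo + margin or t > hi - margin:
            return prev_target  # keep the last safe desired pose
    return delta_e
```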
Low-level controller: We use Operational Space Control (OSC) as the low-level controller to achieve the desired pose [28]. Given a desired pose of the end-effector, OSC first calculates the corresponding force and torque at the end-effector to minimize the pose error according to a PD controller with gains ${K}_{p}$ and ${K}_{d}$. The desired force and torque at the end-effector are then converted into desired joint torques according to the model of the robot. OSC operates at a higher frequency (100 Hz) than the policy $\pi$ (2 Hz).
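The task-space PD law and the torque mapping can be sketched as follows (the gains are illustrative placeholders rather than the paper's values, and orientation error handling is folded into the same vector form for brevity):

```python
def osc_wrench(pose_err, vel, kp=150.0, kd=2.0 * 150.0 ** 0.5):
    # task-space PD law: desired end-effector wrench from pose error and
    # end-effector velocity; kp, kd are illustrative gains
    return [kp * e - kd * v for e, v in zip(pose_err, vel)]

def joint_torques(jacobian_t, wrench):
    # map the task-space wrench to joint torques: tau = J^T f,
    # where jacobian_t has one row per joint
    return [sum(jt * f for jt, f in zip(row, wrench)) for row in jacobian_t]
```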
This choice of controller is very important for this task because we expect the agent to use extrinsic dexterity, which involves contacts among the gripper, the object, and the bin. There are two benefits of OSC in contact-rich manipulation. First, being compliant in end-effector space allows safe execution of the motions without smashing the gripper into the objects or the bin. Limiting the delta pose and selecting proper gains ${K}_{p},{K}_{d}$ bounds the force and torque the end-effector can exert. If we instead used a controller that is compliant in joint configuration space, we would not have direct control over the maximum force the end-effector might exert on the object and the bin. Second, as shown in [29], using OSC as the low-level controller can speed up RL training and improve sim-to-real transfer for contact-rich manipulation.
## Appendix II DETAILS OF AUTOMATIC DOMAIN RANDOMIZATION
As discussed in Section IV-C, we use Automatic Domain Randomization [8] to improve policy generalization across environment variations. In ADR, the policy is first trained in an environment with very little randomization, and the range of variations is then gradually expanded based on evaluation performance. For a set of environment parameters ${\lambda }_{i}$, each ${\lambda }_{i}$ is sampled from a uniform distribution ${\lambda }_{i} \sim U\left( {{\phi }_{i}^{L},{\phi }_{i}^{H}}\right)$ at the beginning of each episode. During training, the policy is evaluated at the boundary values ${\lambda }_{i} = {\phi }_{i}^{L}$ or ${\lambda }_{i} = {\phi }_{i}^{H}$. If the performance is higher than a threshold, the boundary value is expanded by an increment $\Delta$. For example, if the performance at ${\lambda }_{i} = {\phi }_{i}^{H}$ is higher than the threshold, the training distribution becomes ${\lambda }_{i} \sim U\left( {{\phi }_{i}^{L},{\phi }_{i}^{H} + \Delta }\right)$ in the next iteration. Compared to directly training the policy with the full range of variations, Automatic Domain Randomization reduces the need to manually tune a suitable range of variations for each environment parameter.
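This boundary-expansion rule can be sketched for a single parameter as follows (the performance threshold value is a placeholder, not the paper's):

```python
import random

class ADRParam:
    """One environment parameter lambda_i with an expanding uniform range."""

    def __init__(self, init, inc, dec, threshold=0.8):
        self.lo = self.hi = init          # range starts as a single value
        self.inc, self.dec = inc, dec     # per-boundary expansion steps
        self.threshold = threshold        # illustrative success threshold

    def sample(self):
        # per-episode uniform draw from the current range [lo, hi]
        return random.uniform(self.lo, self.hi)

    def update(self, perf_at_lo, perf_at_hi):
        # expand a boundary when evaluation performance there is high enough
        if perf_at_hi >= self.threshold:
            self.hi += self.inc
        if perf_at_lo >= self.threshold:
            self.lo -= self.dec
```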
Table II summarizes the simulation parameters in the experiment. They start from a single initial value and gradually expand to a wider range according to the pre-specified increment step $+ \Delta$ on the upper bound and decrement step $- \Delta$ on the lower bound. We include the final range from ADR expansion in the last column. These ranges are used when we sample 100 environments for evaluation in Section V-C. All parameters are uniformly sampled from these ranges at the beginning of each episode.
<table><tr><td/><td>Initial Value</td><td>+Δ</td><td>$- \Delta$</td><td>Final Range</td></tr><tr><td>Object size x (m)</td><td>0.15</td><td>0.01</td><td>-0.01</td><td>$\left\lbrack {{0.14},{0.16}}\right\rbrack$</td></tr><tr><td>Object size $\mathrm{z}\left( \mathrm{m}\right)$</td><td>0.05</td><td>0.01</td><td>-0.01</td><td>$\left\lbrack {{0.04},{0.06}}\right\rbrack$</td></tr><tr><td>Table friction</td><td>0.3</td><td>0.1</td><td>-0.1</td><td>$\left\lbrack {{0.1},{0.5}}\right\rbrack$</td></tr><tr><td>Gripper friction</td><td>3</td><td>/</td><td>-1</td><td>$\left\lbrack {2,3}\right\rbrack$</td></tr><tr><td>Object Density $\left( {g/{m}^{3}}\right)$</td><td>86</td><td>86</td><td>43</td><td>$\left\lbrack {{43},{172}}\right\rbrack$</td></tr><tr><td>Action translation scale (m)</td><td>0.03</td><td>/</td><td>-0.005</td><td>$\left\lbrack {{0.02},{0.03}}\right\rbrack$</td></tr><tr><td>Action rotation scale (rad)</td><td>0.2</td><td>/</td><td>-0.05</td><td>$\left\lbrack {{0.1},{0.2}}\right\rbrack$</td></tr><tr><td>Initial distance to wall ( $\mathrm{m}$ )</td><td>0</td><td>0.01</td><td>/</td><td>$\left\lbrack {0,{0.02}}\right\rbrack$</td></tr><tr><td>Table offset $\mathrm{x}\left( \mathrm{m}\right)$</td><td>0.5</td><td>0.01</td><td>-0.01</td><td>$\left\lbrack {{0.48},{0.52}}\right\rbrack$</td></tr><tr><td>Table offset $\mathrm{z}\left( \mathrm{m}\right)$</td><td>0.07</td><td>0.01</td><td>0.01</td><td>$\left\lbrack {{0.055},{0.075}}\right\rbrack$</td></tr></table>
TABLE II: Simulation parameters in Automatic Domain Randomization
## Appendix III EXPERIMENT SETUP
Simulation: We build the simulation environment with Robosuite [30] in the MuJoCo simulator [31]. We use a box-shaped object in this task with a default grasp location shown in Figure 1. The object is placed in a bin in front of the robot. We use single-grasp training by default; results related to multi-grasp training can be found in Appendix VII. Each episode has a length of 40 timesteps, which corresponds to 20 seconds of real-time execution. The initial joint configuration of the robot is randomized with Gaussian noise with a standard deviation of 0.02 rad.
Real robot experiment: The policy is trained in the simulator and zero-shot transferred to a physical Franka Emika Panda robot. The code for controlling the robot is built on top of FrankaPy [32]. For real robot experiments, we use Iterative Closest Point (ICP) for pose estimation of the object, matching a template point cloud of the object to the current point cloud [33]. An example ICP result is shown in Figure 7.

Fig. 6: Emergent behavior of the policy for the occluded grasping task involves multiple stages of contact mode transitions among the gripper, the object and the bin. The figure shows the corresponding stages in simulation versus the real robot execution of the policy.

Fig. 7: Illustration of object pose estimation with ICP at three different timesteps of an episode. The blue points are the observed point cloud, which includes both the gripper and the object. The red points are the template model of the object.
Evaluation metrics: We compare the policies across 5 random seeds of each method and plot the average performance with standard deviation across seeds. Our main evaluation metric is the success rate at the final step of the episode computed as $\mathbb{1}\left( {{\Delta T} < 3\mathrm{\;{cm}}}\right) \cdot \mathbb{1}\left( {{\Delta \theta } < {10}\mathrm{{deg}}}\right)$ (See Section III for definitions). We use 10 episodes for each evaluation setting.
Implementation details: We use Soft Actor-Critic [34] to train the RL policy with the implementation from rlkit. Both the policy network and the Q-function are parameterized as multi-layer perceptrons (MLPs) with 3 layers of 512 neurons.
## Appendix IV ADDITIONAL RESULTS ON EMERGENT BEHAVIORS
In Section V-B, we discuss a typical emergent strategy of solving this task as a result of the design of the full system. Figure 6 includes a more detailed view of this strategy across multiple stages in simulation and on the real robot.
One of the key decisions in this strategy is to use the left finger to rotate the object instead of the right finger. One might consider an alternative approach: using the right finger to scoop the object against the wall and then directly rolling the finger underneath the object to reach the grasp. However, this strategy is not physically feasible with the parallel gripper due to the limited degrees of freedom of the finger. We observe that policies that follow this strategy during exploration usually get stuck at a local optimum without successfully reaching the grasp (Figure 8a).

(a) Local optimum: The gripper uses the right finger to lift the object and gets stuck at a local optimum.

(b) Standing object: One of the successful strategies is to flip the object until it stands on the side and then reach the grasp.
Fig. 8: More visualizations on the emergent behavior of the policies.
Another type of successful strategy from some of the seeds is to flip the object to stand on its side and then move to the grasp (Figure 8b). This strategy overfits to the box object because it relies on the fact that the object remains stable after the flip. If the agent is trained on a more diverse set of objects without such stable poses, it might learn to avoid this strategy; however, for a box object, this is also a viable approach.


Fig. 9: Ablations on the choice of controller.



Fig. 10: Evaluation of the generalization of the policies by changing one physical parameter at a time.

## Appendix V ABLATIONS ON LOW-LEVEL CONTROLLER

We compare our method against different types of controllers to demonstrate that the choice of Operational Space Controller (OSC) is critical for extrinsic dexterity. From Figure 9, both joint torque and joint position control lead to worse performance, which indicates the importance of using end-effector coordinates for the action space. We also try increasing the gains of the OSC so that it becomes roughly equivalent to position control. The success rate becomes lower, which demonstrates that compliance is important for the success of contact-rich tasks, in addition to its importance for safety.
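A minimal impedance-style task-space PD law illustrates why OSC gains control compliance; this is a simplified sketch under our own conventions, not the controller used in the paper, and the gain values are placeholders.

```python
import numpy as np

def osc_wrench(pose_err, twist, kp=150.0, kd=2.0 * np.sqrt(150.0)):
    """Task-space PD law of a simplified operational space controller.

    pose_err: 6-vector [position error; orientation error] in task space.
    twist:    6-vector end-effector velocity.
    Lower kp gives more compliant behavior; very high kp approaches
    stiff position control, which hurt performance in the ablation above.
    """
    return kp * np.asarray(pose_err) - kd * np.asarray(twist)

def joint_torques(jacobian, wrench):
    # Map the task-space wrench to joint torques via the Jacobian transpose.
    return jacobian.T @ wrench
```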
## Appendix VI MORE RESULTS ON POLICY GENERALIZATION

To further analyze the robustness of the policy across environment variations, we modify the important physical parameters one at a time to understand the sensitivity of the policies to each of them. Following Section V-C, we compare open-loop trajectories (Open Loop), policies trained in a fixed environment (Fixed Env), and policies trained with ADR (With ADR). The closed-loop policies with ADR can handle much wider variations of the physical parameters than the open-loop trajectories.


(b) MultiGrasp-Side: The policy can use another side of the wall to rotate the object and reach the desired grasp.

Fig. 11: Visualizations of the multi-grasp policies.

## Appendix VII MULTIGRASP TRAINING AND SELECTION

In previous sections, we only consider the scenario in which a single grasp is given for each episode. In this section, we consider the scenario in which a set of desired grasp configurations $G = \left\{ {g}_{i}\right\}$ is given. We first discuss the method for multi-grasp training and selection and then provide the experimental results.
MultiGrasp Training with Curriculum: During training, we aim to cover as many grasp configurations from ${G}_{\text{train}}$ as possible. The straightforward approach is to uniformly sample a goal $g \sim {G}_{\text{train}}$ for each episode. However, previous work has shown that learning directly over such a diverse set of goals can make policy learning difficult [35], [36]. Instead, we use an automatic curriculum following [8] to gradually expand the set of grasps trained on. We start the training with just a single fixed grasp; after the policy achieves a success rate above a threshold, it is trained on a slightly larger set with grasps close to the initial grasp location.
MultiGrasp Selection: During testing, a set of grasps ${G}_{\text{test}}$ is provided. Our method selects the grasp within the set that maximizes the learned Q-function for the current observation: ${g}^{*} = \arg\max_{g \in {G}_{\text{test}}} Q\left( {{s}_{t},{a}_{t},g}\right)$. Selecting the best grasp from the set (instead of just using a single grasp) can improve the performance of the grasping task, following previous work on integrated grasp and motion planning [20], [21], [6]. The learned Q-function can select the grasp that is most easily reached with the trained policy; which grasp is selected thus depends both on the environment configuration and on how well the policy has learned to achieve different grasp configurations.
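The argmax-Q selection rule can be sketched as follows. The critic signature and the toy stand-in below are our own assumptions for illustration; the real system scores full observation-action-goal triples with the learned Q-network.

```python
import numpy as np

def select_grasp(q_func, obs, action, grasp_set):
    """Pick g* = argmax_g Q(s, a, g) from a candidate grasp set.

    q_func is a hypothetical stand-in for the learned critic.
    """
    scores = [q_func(obs, action, g) for g in grasp_set]
    return grasp_set[int(np.argmax(scores))]

# Toy critic: prefers grasps close to the current end-effector position.
def toy_q(obs, action, goal):
    return -np.linalg.norm(np.asarray(goal) - np.asarray(obs))

grasps = [np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([0.2, 0.1])]
best = select_grasp(toy_q, np.array([0.25, 0.1]), None, grasps)
```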
MultiGrasp Training Results: In this experiment, we train the policy to reach a range of grasp locations with the curriculum described above. Given the box object, we generate the grasp configurations around the box and parameterize the grasps by a continuous scalar grasp ID in the range $\left\lbrack {0,4}\right\rbrack$ (Figure 12a). Grasp ID 1.5 is the default grasp we use in the single-grasp experiments. The policy is trained with an automatic curriculum: when the success rate of the policy at a boundary of the current training range is above 0.8, the range of grasps is expanded by 0.25. For example, if the policy is currently training on grasps $\left\lbrack {1,2}\right\rbrack$ and the success rate evaluated at grasp ID 1 is above 0.8, the new training range becomes $\left\lbrack {{0.75},2}\right\rbrack$. We train two types of multi-grasp policies starting from two different grasp poses: MultiGrasp-Front, which starts the training from ID 1.5, and MultiGrasp-Side, which starts the training from ID 2.5. As a baseline, we also train a policy by uniformly sampling from the entire set of grasps without using ADR, named All Grasp.
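The curriculum-expansion rule above can be sketched as a single update step; the function name and signature are ours, and only the threshold (0.8), step (0.25), and ID bounds ([0, 4]) come from the text.

```python
def expand_range(lo, hi, success_lo, success_hi,
                 threshold=0.8, step=0.25, bounds=(0.0, 4.0)):
    """One curriculum update: widen the trained grasp-ID range on any
    boundary whose evaluated success rate exceeds the threshold.

    Mirrors the rule described above, e.g. [1, 2] -> [0.75, 2] when the
    success rate at ID 1 is above 0.8.
    """
    if success_lo > threshold:
        lo = max(bounds[0], lo - step)
    if success_hi > threshold:
        hi = min(bounds[1], hi + step)
    return lo, hi
```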


Fig. 12: Multi-grasp training: Left: Visualization of the range of grasp configurations and the grasp IDs used in multi-grasp training. Right: Performance of the multi-grasp policies across grasp configurations.

Figure 11a and Figure 11b include qualitative examples of the behaviors of MultiGrasp-Front and MultiGrasp-Side. The policy rotates the object first and then tries to match the pose more precisely. MultiGrasp-Side uses a different wall of the bin to rotate the object than MultiGrasp-Front. Figure 12b shows the performance of these policies evaluated across grasp configuration IDs. We find that both MultiGrasp-Front and MultiGrasp-Side are able to expand from a single grasp to most of the grasps on one side of the object via the curriculum. The policies have difficulty reaching the other sides, potentially due to exploration issues or limited policy capacity: reaching different grasp configurations may require completely different strategies (Figure 11), which is difficult to learn with a single policy (related to [35]). In contrast, All Grasp has difficulty learning any of the grasp configurations, which shows the importance of using a curriculum for multi-grasp training.
MultiGrasp Selection Results: To compare grasp selection methods, at the beginning of each episode, we sample 50 grasp configurations from the training range of the policy. The grasp selection methods use this sample as the set of desired grasps. We evaluate the following grasp selection options:
- Argmax $Q$: passes all of the candidate grasp configurations into the Q-function and selects the one with the highest Q-value.

- PoseDiff: selects the grasp closest to the current gripper pose according to Equation 2 (with the same weights as the reward function).

TABLE III: Comparison of grasp selection methods in two scenarios: front grasps and side grasps. When grasping from the side, the policy achieves better performance when using the Q-function to select the grasp.

<table><tr><td/><td>MultiGrasp-Front</td><td>MultiGrasp-Side</td></tr><tr><td>ArgmaxQ</td><td>${1.00} \pm {0.00}$</td><td>${1.00} \pm {0.00}$</td></tr><tr><td>ArgmaxQ-${t}_{0}$</td><td>${1.00} \pm {0.00}$</td><td>${1.00} \pm {0.00}$</td></tr><tr><td>PoseDiff</td><td>${1.00} \pm {0.00}$</td><td>${0.96} \pm {0.08}$</td></tr><tr><td>PoseDiff-${t}_{0}$</td><td>${1.00} \pm {0.00}$</td><td>${0.50} \pm {0.43}$</td></tr><tr><td>Uniform</td><td>${0.54} \pm {0.16}$</td><td>${0.90} \pm {0.06}$</td></tr></table>
- Argmax $Q$-${t}_{0}$: selects the grasp according to Argmax $Q$ only at the first timestep of the episode instead of re-selecting it at every timestep.

- PoseDiff-${t}_{0}$: selects the grasp according to PoseDiff only at the first timestep of the episode instead of re-selecting it at every timestep.

- Uniform: samples a grasp uniformly from the set.

The results are summarized in Table III. For MultiGrasp-Front, all of the methods other than Uniform achieve a 100% success rate. In this case, the best grasp according to the Q-function does correspond to the grasp closest to the gripper, at grasp ID 1.5. For MultiGrasp-Side, ArgmaxQ-${t}_{0}$ has a higher success rate than PoseDiff-${t}_{0}$. The policy needs a more complicated maneuver to reach the side grasp, so the Q-function may capture the difficulty of the goal better than the pose difference does. At the beginning of the episode, the Q-function selects $\mathrm{ID} = {2.5}$ while the pose difference selects $\mathrm{ID} = 2$. If this goal is kept throughout the episode, PoseDiff-${t}_{0}$ has a much lower success rate than the other baselines; if the policy can instead re-select the goal throughout the episode (PoseDiff), the performance improves over PoseDiff-${t}_{0}$.
papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/Zrp4wpa9lqh/Initial_manuscript_tex/Initial_manuscript.tex
§ LEARNING TO GRASP THE UNGRASPABLE WITH EMERGENT EXTRINSIC DEXTERITY
Wenxuan Zhou ${}^{1}$ and David Held ${}^{1}$
Fig. 1: We study the task of "Occluded Grasping" with extrinsic dexterity. The goal of this task is to reach an occluded grasp configuration (indicated by a transparent gripper attached to the object in the top row). The figure shows the emergent behavior of the trained policy, which uses the wall of the bin to rotate the object to reach a grasp.

Abstract-A robot can solve more complex manipulation tasks beyond the limitations of its body if it can utilize the external environment, such as pushing the object against the table or a vertical wall. These behaviors are known as "Extrinsic Dexterity." Previous work in extrinsic dexterity usually relies on hand-crafted primitives or careful assumptions about contacts. In this work, we explore the use of reinforcement learning (RL) for extrinsic dexterity with the task of "Occluded Grasping". The goal of the task is to grasp the object in configurations that are initially occluded; the robot must interact with the object and the extrinsic environment to move the object into a configuration from which these grasps can be achieved. To accomplish this task, we train a policy to co-optimize pre-grasp and grasping motions; this results in the emergent behavior of pushing the object against the wall in order to rotate and then grasp it. We demonstrate the generality of the learned policy across environment variations in simulation and evaluate it on a real robot with zero-shot sim2real transfer. Videos can be found at https://sites.google.com/view/grasp-ungraspable.
§ I. INTRODUCTION

Humans have dexterous multi-fingered hands; however, similarly dexterous robot hands are expensive and fragile. Instead, robots can achieve dexterous manipulation with a simple hand by leveraging the environment, known as "Extrinsic Dexterity" [1]. For example, a simple gripper can rotate an object in-hand by pushing it against the table [2], or lift an object by sliding it along a vertical surface [3]. By exploiting external resources such as contact surfaces or gravity, even simple grippers can perform skillful maneuvers that are typically studied with a multi-fingered dexterous hand. Different from the common practice of considering the robot and an object of interest in isolation, extrinsic dexterity takes a holistic view of the interactions among the robot, the object, and the external environment.

Previous work in extrinsic dexterity has demonstrated a variety of tasks such as in-hand reorientation with a simple gripper, prehensile pushing, and shared grasping [1], [2], [3]. However, the underlying approaches come with several limitations, such as relying on hand-designed primitives, making assumptions about contact locations and contact modes, or requiring a specific gripper design. Instead, we use reinforcement learning (RL) to remove these limitations. With reinforcement learning, the agent can learn a closed-loop policy of how the robot should interact with the object and the environment to solve the task. In addition, when trained with domain randomization, the policy can learn to be robust to variations in physics. These properties of RL can enable extrinsic dexterity in a more general setting.

We study "Occluded Grasping" as an example of a task that requires extrinsic dexterity. Occluded Grasping is defined by the goal of grasping an object in poses that are initially occluded. Consider, for example, a robot that needs to grasp a cereal box lying on its side on a table; the desired grasp is not reachable because it is partially occluded by the table (Figure 1). To achieve this grasp with a parallel gripper, the robot might rotate the object by pushing it against a vertical wall to expose the desired grasp. This task is in contrast with existing grasping tasks, which mostly focus on reaching an unoccluded grasp in free space in a static or near-static scene [4], [5], [6]. Prior work has attempted to design pre-grasp motions that expose occluded grasp poses with primitives or a special gripper design [7]. In our work, the pre-grasp motion is an emergent behavior induced by a novel reward function that co-optimizes exposing the grasp pose and achieving the grasp pose. In addition, we frame the task as a goal-conditioned RL problem, in which the policy is conditioned on the selected grasp. During training, the policy learns to reach as many grasp poses as possible with an automatic curriculum [8]. During testing, given a set of grasps, the policy can select one of them as a goal to execute.

In summary, we present a system for "Occluded Grasping" as an example of combining reinforcement learning and extrinsic dexterity. We provide a comprehensive evaluation of the system both in simulation and on a real Franka Emika Panda robot. We showcase the importance of each component and the generalization of the learned policy across environment variations in simulation and on the real robot.
${}^{1}$ Robotics Institute, Carnegie Mellon University

§ II. RELATED WORK
§ A. EXTRINSIC DEXTERITY

"Extrinsic dexterity" refers to manipulation skills that enhance the intrinsic capability of a hand using external resources, including external contacts, gravity, or dynamic motions of the arm [1]. Previous work in extrinsic dexterity has demonstrated complex manipulation tasks with a simple gripper, including in-hand reorientation [1], [9], prehensile pushing [2], [10], shared grasping [3], etc. In this work, we study a different task that can further demonstrate the benefit of extrinsic dexterity. Extrinsic dexterity usually involves contact-rich behaviors, which pose difficulties for planning and control. Previous work has used hand-crafted trajectories [1], task-specific motion primitives [9], [3], or motion planning over contact mode switches [2], [10], [11], [12]. These approaches come with restrictions on the contact modes between the finger and the object, which limit the motion and the design of the gripper. In this work, we take an alternative approach of using reinforcement learning to learn a closed-loop policy that considers both planning and control.
§ B. REINFORCEMENT LEARNING FOR MANIPULATION

Previous work that uses reinforcement learning for manipulation tasks treats the object and the robot in isolation without considering extrinsic dexterity [13], [14], [8]. In our work, we demonstrate that the agent can benefit from extrinsic dexterity when solving the occluded grasping task.
§ C. GRASPING

Grasping is an important task in robot manipulation and has been studied from various angles.

Grasp generation: One area of study in grasping is generating stable grasp configurations [15], [16], [17], [4], [18], [5], [19]. We assume that the grasps produced by any grasp generation method can be used as input to our system.

Grasp execution: To execute a grasp following grasp generation, a motion planner is usually used to generate a collision-free path towards the desired grasp configuration. If there is a set of desired grasps, integrated grasp and motion planning can be considered [20], [21], [6]. [22] uses imitation learning and reinforcement learning to finetune the trajectories from the planner. All of these works aim at achieving unoccluded grasp configurations in static or near-static scenes. Our work instead focuses on a complementary direction: achieving occluded grasp configurations by interacting with the object of interest.

Pre-Grasp manipulation: To deal with occluded grasp configurations, prior work has studied pre-grasps as a preparatory stage [23], [24], [25], [7]. [7] is the most related to our work, but they use a specially designed end-effector to perform the pre-grasp motion and then use a second gripper to grasp the object. We demonstrate that the full grasping task can be solved with a single gripper without special requirements on the end-effector. These previous works typically separate pre-grasp motion and grasp execution into two stages and impose restrictions on the transitions between the stages. In our work, we co-optimize pre-grasp and grasp execution within an episode without explicit separation of the stages. The pre-grasping behavior emerges through learning without restrictions on object or gripper motions.

End-to-end grasping: Another line of work uses an end-to-end pipeline for grasping with reinforcement learning [26] or imitation learning [27]. The policy performs an arbitrary grasp of the object without the possibility of specifying a particular set of grasps. Also, no emergent behavior of exposing occluded grasp poses has been shown in existing work.
§ III. TASK DEFINITION: OCCLUDED GRASPING

Our work is designed to be used in a pipeline that follows a grasp pose generation method such as [4], [5], [19]. Given a rigid object, we assume a desired grasp $g$ as input to the system. A grasp configuration $g \in {SE}\left( 3\right)$ is defined to be the desired 6D pose of the end-effector in the object frame $O$. The grasp is fixed with respect to the object, and it moves when the object moves. In the top row of Figure 1, an example of a desired grasp is shown as a transparent gripper attached to the object. The goal of our work is to learn grasp execution, i.e., to move the end-effector $E$ close to a given $g$ under a pose difference metric $\Delta \left( {g,E}\right)$. In this paper, the task is defined to be successful if the position difference ${\Delta T}\left( {g,E}\right)$ and the orientation difference ${\Delta \theta }\left( {g,E}\right)$ are less than the pre-defined thresholds ${\varepsilon }_{T}$ and ${\varepsilon }_{P}$, respectively, at the end of an episode. After successfully reaching the desired grasp pose, the gripper is closed to complete the grasp. We define an "Occluded Grasping" task to be the case where the grasp $g$ is initially occluded (not in free space). When a set of grasps $G = \left\{ {g}_{i}\right\}$ is available, we may select a grasp ${g}_{i}$ from the set $G$ to execute (Appendix VII).
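The success criterion above can be sketched as a threshold check on the two pose differences. The threshold values below are illustrative placeholders, not the paper's values, and the quaternion convention is our own assumption.

```python
import numpy as np

def grasp_success(g_pos, g_quat, ee_pos, ee_quat, eps_t=0.02, eps_p=0.1):
    """Check the success criterion: position and orientation differences
    both below their thresholds (epsilon_T and epsilon_P).

    Thresholds (2 cm, ~5.7 deg) are illustrative only.
    Quaternions are unit [x, y, z, w] arrays.
    """
    delta_t = np.linalg.norm(np.asarray(g_pos) - np.asarray(ee_pos))
    # Angle between orientations from the absolute quaternion dot product.
    dot = abs(float(np.dot(g_quat, ee_quat)))
    delta_theta = 2.0 * np.arccos(np.clip(dot, -1.0, 1.0))
    return delta_t < eps_t and delta_theta < eps_p
```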
§ IV. LEARNING OCCLUDED GRASPING WITH REINFORCEMENT LEARNING

We study the use of reinforcement learning (RL) to train a closed-loop policy for the occluded grasping task defined above. In this section, we first discuss the important design choices of the system for a single target grasp, including the extrinsic environment and the design of the RL problem. Then, we discuss how to improve the generalization of the policy using Automatic Domain Randomization [8]. Training and evaluation procedures that process a set of grasps can be found in Appendix VII.
§ A. EXTRINSIC ENVIRONMENT

To showcase the benefits of extrinsic dexterity from object-scene interaction in this task, we construct the scene with the object in a bin, instead of leaving the object on a table (Figure 2). In Section V, we show that the emergent policy utilizes the wall of the bin to rotate the object. Without the wall, the policy is not able to find a strategy that successfully performs the task.
§ B. RL PROBLEM DESIGN

We discuss the design of the RL problem in this section. More details can be found in Appendix I. We train a goal-conditioned policy $\pi \left( {{a}_{t} \mid {s}_{t},g}\right)$ for this task, where the goal is a target grasp configuration $g$. The state ${s}_{t}$ includes the pose of the end-effector and the object pose. The action space of the policy is the delta pose of the end-effector ${\Delta E}$, which is sent to a low-level Operational Space Controller (OSC). The choice of OSC allows compliant movement for such a contact-rich task (see Appendix I for more discussion). The reward function is designed to co-optimize the pre-grasp motion as well as grasp execution:

$$
r = {\alpha D}\left( {g,E}\right) + \beta \mathop{\sum }\limits_{i}P\left( {m}_{i}\right) \tag{1}
$$
Fig. 2: $E$ denotes the 6D pose of the end-effector. $g$ denotes the target grasp defined in the object frame. Marker locations ${m}_{i}$ in green on the target grasp are used to calculate the occlusion penalty.

where

$$
D\left( {g,E}\right) = {\alpha }_{1}{\Delta T}\left( {g,E}\right) + {\alpha }_{2}{\Delta \theta }\left( {g,E}\right) \tag{2}
$$
${\alpha }_{1},{\alpha }_{2}$ and $\beta$ are the weights of the reward terms. The first term of Equation 1, $D\left( {g,E}\right)$, is the pose difference between the target grasp and the current end-effector pose. This term is expanded in Equation 2 to include the translational and rotational distances, as described in Section III. The second term of Equation 1 is the target grasp occlusion penalty, which penalizes the gripper if the target grasp is occluded by the table. We set several marker points on the target gripper (Figure 2), denoted ${m}_{i}$, and compare the height of the markers with the table top. If a marker is below the table top, the height difference is used as the penalty. The occlusion penalty effectively reduces the local optimum in which the gripper reaches close to the (occluded) target grasp without trying to move the object.

To summarize, the first term of Equation 1 optimizes for successful grasp execution and the second term encourages pre-grasp motions that move the object such that the grasp $g$ becomes unoccluded. An important difference from previous work is that the pre-grasp and grasp execution components are optimized together instead of being separated into two stages. We do not have any reward terms that are explicitly related to extrinsic dexterity. In our system, the use of extrinsic dexterity is an emergent behavior of policy optimization given our objective and environmental setup.
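The reward in Equations 1-2 can be sketched as follows. The weights, the sign convention (distances entering negatively so that smaller differences give higher reward), and the function signature are our assumptions; only the structure of the two terms comes from the text.

```python
def reward(delta_t, delta_theta, marker_heights, table_z,
           a1=1.0, a2=0.5, beta=1.0):
    """Sketch of Equations 1-2: a pose-difference term D(g, E) plus an
    occlusion penalty summed over marker points m_i on the target gripper.

    Weights a1, a2, beta are placeholders; the paper's values may differ.
    """
    pose_term = -(a1 * delta_t + a2 * delta_theta)          # D(g, E)
    # Penalize each marker of the target grasp that lies below the table.
    occlusion = -sum(max(0.0, table_z - h) for h in marker_heights)
    return pose_term + beta * occlusion
```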
§ C. POLICY GENERALIZATION

One benefit of using RL is that it produces a closed-loop policy instead of an open-loop trajectory. A closed-loop policy can ideally generalize to a wider range of state distributions, which implies better performance under variations of the environment properties such as object size, density, and friction coefficient. The generalization can be improved further by training with domain randomization over the environment variations, which also benefits sim-to-real transfer. We use Automatic Domain Randomization (ADR) [8] to improve the generalization of the policy. More implementation details can be found in Appendix I.

Fig. 3: Left: Ablations on the reward function and the walls. Right: Evaluation of the generalization of the policies by sampling 100 environments.
§ V. EXPERIMENTS

§ A. TRAINING CURVES AND ABLATIONS

Details of the experiment setup can be found in Appendix III. In this section, we train the policies with a single desired grasp in the default environment without randomization of the physical parameters. From the training curve shown in Figure 3a, the policy trained with the complete system reaches a success rate of 1 before 4000 episodes, which corresponds to 160000 environment steps. We performed an ablation analysis on the design choices to determine which components were the most important to the success of the system. First, we experiment with removing the wall of the bin to evaluate the importance of using the wall for extrinsic dexterity. As shown in Figure 3a, the resulting policy has a $0\%$ success rate and pushes the object off the table. Second, we performed an ablation on the reward function. When we remove the grasp pose occlusion penalty (the second term of Equation 1), the policy is more likely to get stuck at a local optimum of only trying to match the position and orientation of the gripper, and thus the average success rate across random seeds becomes lower. An alternative is to use a $\{ - 1,0\}$ sparse reward according to the success criteria defined in Section III instead of the reward defined in Equation 1. With a sparse reward, the policy learns much more slowly: the sparse reward makes exploration much more difficult. In addition, ablations on the choice of controller can be found in Appendix V. We also include results for multi-grasp training and multi-grasp selection in Appendix VII.
§ B. EMERGENT BEHAVIORS

Figure 1 shows a typical strategy of the successful policies. The strategy involves multiple stages of contact switches. The gripper first moves close to the object and makes contact on the side of the object with the left finger. It then pushes the object against the wall to rotate it. During this stage, the gripper maintains a fixed or rolling contact with the object. The object is usually in sliding contact with the wall and the ground of the bin at some of its corners. After rotating a bit further, when the right fingertip is below the object, the left finger slides on the object or simply leaves it to let the object drop onto the right finger. After the object lies on the right finger, the gripper tries to match the desired pose more precisely. At this point, the policy has executed the grasp successfully and is ready to close the gripper. We include more visualizations of emergent behaviors in Appendix IV, including another type of successful strategy, local-optimum behavior, and multi-grasp behaviors. Videos can be found on the project website.
§ C. POLICY GENERALIZATION

In this section, we analyze the performance of the policy across environment variations. Robustness to environment variations may come both from the policy being closed-loop and from the randomization of the physical parameters during training. Thus, we evaluate open-loop trajectories (Open Loop), policies trained in a fixed environment (Fixed Env), and policies trained with ADR (With ADR). The open-loop trajectories are obtained by rolling out the Fixed Env policies in the default environment. We also turn off the randomization of the initial gripper pose for Open Loop; otherwise, its success rate is too low to compare against, even in the default environment. We sample 100 environments from the training range of the ADR policies (Appendix II) and plot the percentage of environments that are above a certain performance metric (Figure 3b). The closed-loop policies are much better than the open-loop trajectories across environment variations. The policy trained in a fixed environment is able to generalize to a wide range of variations; with ADR, the generalization improves even further. We also modify the important physical parameters one at a time to understand the sensitivity to each parameter in Appendix VI.
§ D. REAL-ROBOT EXPERIMENT

To further evaluate the generalization of the policies and demonstrate the feasibility of the proposed system, we execute the policies on the real robot with zero-shot sim2real transfer over the 6 test cases shown in Figure 4. There are four box-shaped objects with different sizes, densities, and surface frictions. Box-1 has the same size and density as the default object trained in simulation. Box-2 is larger than the training range in the y-direction. Box-3 is larger than the training range in the z-direction. The surface frictions are very different across the boxes; for example, Box-3 has tape on its surface, which has much higher friction than the others (as shown in the videos on the website). However, we do not have access to the true friction coefficients of the objects to compare with the values in simulation. In addition, we evaluate Box-1 with additional weight by putting four or eight erasers inside the box. Note that the erasers move inside the box during execution, which is not modeled in simulation. We evaluate two types of single-grasp policies trained in simulation: one policy is trained with Automatic Domain Randomization as described in Section IV-C; the other is trained in a fixed default environment without domain randomization.
Fig. 4: Test cases for real robot experiments.

TABLE I: Real robot evaluations.

Object-ID | Size (cm) | Weight (g) | Success w/ ADR | Success w/o ADR
Box-1 | (15.0, 20.0, 5.0) | 128 | 10/10 | 10/10
Box-1 + 4 erasers | (15.0, 20.0, 5.0) | 237 | 8/10 | 7/10
Box-1 + 8 erasers | (15.0, 20.0, 5.0) | 345 | 6/10 | 4/10
Box-2 | (15.4, 29.2, 5.8) | 130 | 8/10 | 8/10
Box-3 | (15.3, 22.2, 7.4) | 113 | 10/10 | 4/10
Box-4 | (15.3, 22.2, 7.4) | 50 | 7/10 | 0/10
Average | | | 0.82 | 0.55
We evaluate 10 episodes for each test case and summarize the results in Table I. Videos of the real robot experiments can be found on the website. Overall, the policy with ADR achieves a success rate of ${82}\%$ while the policy without ADR achieves ${55}\%$. ADR effectively improves performance over a wider range of object variations. Note that both policies are evaluated on out-of-distribution objects: Box-1 with 8 erasers, Box-3 and Box-4 are outside the training distribution of ADR (see Appendix II), and all of the test cases except the first one (Box-1) are out of distribution for the policy without ADR. This demonstrates the robustness of the closed-loop policies of the proposed pipeline on such a dynamic manipulation task.
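
As a quick arithmetic check (not part of the paper itself), the overall success rates reported above follow directly from the per-object counts in Table I, with 10 trials per test case:

```python
# Success counts out of 10 trials per test case, read off Table I.
with_adr = [10, 8, 6, 8, 10, 7]       # policy trained with ADR
without_adr = [10, 7, 4, 8, 4, 0]     # policy trained without ADR

avg_with = sum(with_adr) / (10 * len(with_adr))           # 49/60
avg_without = sum(without_adr) / (10 * len(without_adr))  # 33/60

print(round(avg_with, 2), round(avg_without, 2))  # 0.82 0.55
```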

§ VI. CONCLUSION

We study the "Occluded Grasping" task of reaching a desired grasp configuration that is initially occluded. With a parallel gripper, the robot has to use extrinsic dexterity to solve this task. We present a system that learns a closed-loop policy for this task with reinforcement learning. In the experiments, we demonstrate that the wall, the choice of controller, and the design of the reward function are all essential components. The policy can generalize across a wide range of environment variations and can be executed on the real robot. One potential extension of our work is to train the policy with a wide variety of object shapes which may require image-based policies. Also, the pipeline can potentially be applied to other extrinsic dexterity tasks.
https://sites.google.com/view/grasp-ungraspable
|
papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/_4tcqR3nQII/Initial_manuscript_md/Initial_manuscript.md
ADDED

# Learning active tactile perception through belief-space control

Jean-François Tremblay, Johanna Hansen, David Meger, Francois Hogan, Gregory Dudek

Abstract- Robots operating in an open world can encounter novel objects with unknown physical properties, such as mass, friction, or size. It is desirable to sense these properties through contact-rich interaction before performing downstream tasks with the objects. We propose a method for autonomously learning active tactile perception policies by learning a generative world model that leverages a differentiable Bayesian filtering algorithm, and by designing an information-gathering model predictive controller. We test the method on two simulated tasks: mass estimation and height estimation. Our method discovers policies that gather information about the desired property in an intuitive manner.

## I. INTRODUCTION

Robots operating in an open world can encounter arbitrary, unseen objects and are expected to manipulate them effectively. To achieve this, robots must be able to infer the physical properties of unknown objects through physical interaction. Online measurement of these properties is key for robots to operate robustly in the real world with open-ended object categories.

Psychology literature refers to the procedures humans use to measure these properties as exploratory procedures [1]; examples include pressing to test for object hardness and lifting to estimate object mass. Exploratory procedures are challenging to hand-engineer and vary with the object class. This work focuses on learning such exploratory procedures to estimate object properties through belief-space control. Using a combination of 1) learning-based state estimation to infer the property from a sequence of observations and actions, and 2) information-gathering model-predictive control (MPC), we demonstrate that it is possible to learn to execute actions that are informative about the property of interest and to discover exploratory procedures without any human priors.

## II. RELATED WORKS

## A. Learning for state-estimation

Several works propose fusing Bayesian filtering methods with deep learning, where the dynamics and observation models are learned neural networks.

Lee et al. [2] provide a good overview of learned Bayesian filtering models for robotics applications, and release torchfilter, a library of algorithms for this purpose on which we build our belief-space control algorithm.

In [3], the authors present the Backprop Kalman filter, described as a discriminative approach to filtering. Discriminative filtering does away with learning an observation model (a mapping from state to observation) and instead learns a mapping from observation to state. Here, we argue that learning a generative observation model, while more computationally challenging, is key to predicting future state uncertainty and planning for informative actions.

Burkhart et al. [4] present the discriminative Kalman filter, concurrently with [3]. This approach assumes linear dynamics, models the prior over observations as Gaussian, and can only handle stationary observation processes.

## B. Active perception

Active perception consists of acting in a way that assists perception, and can incorporate learning, including the learning methods above. Denil et al. [5] use reinforcement learning in "Which is Heavier" and "Tower" environments. In the former, the goal is to push blocks and, after an interaction period, take a "labelling action" to guess which block is heavier; a reward is given if the label is correct. They train a recurrent deep reinforcement learning policy on this environment. The action space is constrained and designed so that the blocks are pushed with a fixed force towards their center of mass. While this method enables the robot to effectively retrieve mass using human priors and intuition, our work differs in that the robot is tasked with discovering such behaviors autonomously with an unconstrained action space.

More specific to robotics, Wang et al. [6] introduce SwingBot, a robotic system that swings up an object with changing physical properties (moments, center of mass). Before the swing-up phase, the system follows a hand-engineered exploratory procedure that shakes and tilts the object in the hand to extract the information necessary for a successful swing-up. Rather than engineering the exploration phase, we propose a generic framework for extracting such information before accomplishing a given task.

## III. METHODS

We are in a controlled hidden Markov model (HMM) setting (a partially observable Markov decision process (POMDP) without a reward function), where each observation ${o}_{t}$ gives us partial information about the state of the robot and object of interest. More formally, a controlled HMM is a tuple $\left( \mathcal{S}, \mathcal{A}, p\left( s_{t+1} \mid s_t, a_t \right), \Omega, p\left( o_t \mid s_t \right) \right)$, where the state, action and observation spaces ($\mathcal{S}$, $\mathcal{A}$ and $\Omega$ respectively) are ${\mathbb{R}}^{n}$, ${\mathbb{R}}^{m}$ and ${\mathbb{R}}^{d}$ respectively. It is important to note that in this context, the state can contain the robot pose and velocity, the object pose and velocity, object properties, and any property that describes the environment and is subject to change during or between episodes. The state representation is learned in a self-supervised fashion, as described in § III-A, in such a way that the first element of the state represents the object property of interest:

$$
s_t = \left( m_t, z_t \right), \quad m_t \in \mathbb{R}, \; z_t \in {\mathbb{R}}^{n-1}. \tag{1}
$$

We are in an episodic setting with final timestep $T$, where the object is randomized at each episode. Taking mass estimation as an example: at each episode, an object with a different mass is presented and the goal is to infer the mass of this new object.

In § III-A we describe how to infer the belief state (containing an estimate of the object property of interest) ${b}_{t} \approx p\left( s_t \mid a_0, \ldots, a_{t-1}, o_1, \ldots, o_t \right)$ and its one-step prediction ${\bar{b}}_{t} \approx p\left( s_t \mid a_0, \ldots, a_{t-1}, o_1, \ldots, o_{t-1} \right)$. In § III-B we use that estimate to design an information-gathering controller. Finally, in § III-C we present how to integrate these two components in a data-collection/training and control loop.

## A. Learning-based Kalman filter

Here the goal is to learn a dynamics and observation model while performing belief-state inference. The dynamics model representing $p\left( s_t \mid s_{t-1}, a_{t-1} \right)$ is

$$
s_t = f_{\theta}\left( s_{t-1}, a_{t-1} \right) + \Sigma_{\theta}\left( s_{t-1}, a_{t-1} \right) w_t \tag{2}
$$

where the ${w}_{t}$ are independent and identically distributed (IID) standard Gaussian random variables in ${\mathbb{R}}^{n}$.

Generative filtering (as opposed to discriminative filtering [2, 3]) implies learning a generative world model, able to fully simulate the system and generate observations via the equation

$$
o_t = h_{\theta}\left( s_t \right) + \Gamma_{\theta}\left( s_t \right) v_t, \tag{3}
$$

where the ${v}_{t}$ are IID standard Gaussian random variables in ${\mathbb{R}}^{d}$. While learning this model can be more challenging in the face of high-dimensional and complex observation spaces (e.g. images), it opens up new avenues for forward belief-space planning.
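
As a concrete toy illustration of the generative model in Eqs. (2)-(3), the sketch below replaces the learned networks $f_{\theta}, \Sigma_{\theta}, h_{\theta}, \Gamma_{\theta}$ with fixed linear maps; the dimensions and all numerical values are made up for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 4, 3, 6  # state, action, observation dimensions (arbitrary)

# Toy stand-ins for the learned networks f_theta, Sigma_theta, h_theta,
# Gamma_theta. In the paper these are neural networks trained jointly
# with the filter; here they are fixed linear/constant maps.
A, B = 0.9 * np.eye(n), 0.1 * rng.normal(size=(n, m))
C = rng.normal(size=(d, n))

def f(s, a):       # dynamics mean
    return A @ s + B @ a

def Sigma(s, a):   # dynamics noise scale (state-dependent in general)
    return 0.05 * np.eye(n)

def h(s):          # observation mean
    return C @ s

def Gamma(s):      # observation noise scale
    return 0.1 * np.eye(d)

def step(s, a):
    """One step of the generative model: sample s_t (Eq. 2), then o_t (Eq. 3)."""
    w = rng.standard_normal(n)
    s_next = f(s, a) + Sigma(s, a) @ w
    v = rng.standard_normal(d)
    o = h(s_next) + Gamma(s_next) @ v
    return s_next, o

s, a = np.zeros(n), np.ones(m)
s1, o1 = step(s, a)
print(s1.shape, o1.shape)  # (4,) (6,)
```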

Using an explicit-likelihood (Gaussian state-space model) setting, we train the model in a self-predictive manner. In (4)-(8), we present the derivation of the loss for the generative observation model. This derivation is adapted from [7], Chapter 12, with action variables integrated:

$$
p\left( o_1, \ldots, o_T \mid \theta, a_0, \ldots, a_{T-1} \right) \tag{4}
$$

$$
= \mathop{\prod }\limits_{{t = 1}}^{T} p\left( o_t \mid \theta, o_1, \ldots, o_{t-1}, a_0, \ldots, a_{t-1} \right) \tag{5}
$$

$$
= \mathop{\prod }\limits_{{t = 1}}^{T} {\int }_{{\mathbb{R}}^{n}} p\left( o_t \mid \theta, s_t \right) p\left( s_t \mid \theta, o_1, \ldots, o_{t-1}, a_0, \ldots, a_{t-1} \right) d s_t \tag{6}
$$

$$
\approx \mathop{\prod }\limits_{{t = 1}}^{T} {\int }_{{\mathbb{R}}^{n}} p\left( o_t \mid \theta, s_t \right) {\bar{b}}_{t}\left( s_t \mid \theta \right) d s_t \tag{7}
$$

$$
= \mathop{\prod }\limits_{{t = 1}}^{T} {\mathbf{E}}_{s_t \sim {\bar{b}}_{t}\left( s_t \mid \theta \right)}\, p\left( o_t \mid \theta, s_t \right) \tag{8}
$$

Here ${\bar{b}}_{t}$ is the output of the predict step of our filter with inputs ${b}_{t-1}$ and ${a}_{t-1}$; it is only an approximation of $p\left( s_t \mid \theta, o_1, \ldots, o_{t-1}, a_0, \ldots, a_{t-1} \right)$. Taking the log, applying Jensen's inequality to obtain a lower bound, and approximating the expectation with an empirical mean, we get:

$$
\log p\left( o_1, \ldots, o_T \mid \theta, a_0, \ldots, a_{T-1} \right) \tag{9}
$$

$$
\gtrapprox \mathop{\sum }\limits_{{t = 1}}^{T} \frac{1}{N} \mathop{\sum }\limits_{{i = 1}}^{N} \log p\left( o_t \mid \theta, s_t^i \right), \quad s_t^i \sim {\bar{b}}_{t}\left( s_t \mid \theta \right) \tag{10}
$$

$$
=: \mathrm{ELBO} \tag{11}
$$

Equations (9)-(11) give us a lower bound on the log-likelihood (similar to the ELBO loss in VAEs [8]) with which to train our model, leveraging the differentiable approximate inference used to compute ${\bar{b}}_{t}$. Because ${\bar{b}}_{t} = \mathcal{N}\left( s_t \mid {\bar{\mu}}_{t}, {\bar{\Sigma}}_{t} \right)$, we can use the reparametrization trick to sample $s_t^i$ by sampling ${\xi}^{i}$ from an $n$-dimensional standard Gaussian, and then letting

$$
s_t^i = {\bar{\mu}}_{t} + {\bar{\Sigma}}_{t}^{1/2} {\xi}^{i}, \tag{12}
$$

where ${\bar{\Sigma}}_{t}^{1/2}$ is a matrix square root (e.g. the Cholesky factor) of ${\bar{\Sigma}}_{t}$.

$\theta$ represents the parameters of $f, \Sigma, h, \Gamma$, which are neural networks. We jointly perform state estimation and parameter optimization by estimating ${b}_{t} = \left( \mu_t, \Sigma_t \right)$ using an extended Kalman filter (EKF), the operations of which are all differentiable (as shown, for example, by Lee et al. [2]), and maximizing the likelihood of the ground-truth object property of interest. For example, if mass is of interest, the loss for an episode where the ground-truth mass is $m$ would be:

$$
{\mathcal{L}}_{m} = - \mathop{\sum }\limits_{{t = 1}}^{T} \log \mathcal{N}\left( m \mid \mu_t^1, \Sigma_t^{11} \right) \tag{13}
$$

where $\mathcal{N}\left( \cdot \mid \mu, \sigma \right)$ is a Gaussian pdf with mean $\mu$ and variance $\sigma$. The first element of the state represents the mass, and we maximize its log-likelihood.
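
A minimal NumPy sketch of Eqs. (12)-(13), using the Cholesky factor as the matrix square root; the belief values below are made up, and in the actual method these operations run on differentiable tensors inside the EKF:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 4, 8  # state dimension, number of reparametrized samples

mu_bar = np.zeros(n)                       # predicted belief mean (made up)
Sigma_bar = np.diag([0.5, 0.2, 0.2, 0.2])  # predicted belief covariance

# Eq. (12): reparametrized samples s^i = mu + L xi, with L the Cholesky
# factor of the covariance so that Cov(s^i) = L L^T = Sigma_bar.
L = np.linalg.cholesky(Sigma_bar)
xi = rng.standard_normal((N, n))
samples = mu_bar + xi @ L.T

# Eq. (13), one term: negative log-likelihood of the ground-truth mass m
# under the first marginal of the belief, N(mu^1, Sigma^11).
def mass_nll(m, mu, Sigma):
    var = Sigma[0, 0]
    return 0.5 * (np.log(2 * np.pi * var) + (m - mu[0]) ** 2 / var)

print(samples.shape, mass_nll(1.5, mu_bar, Sigma_bar))
```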

The loss we minimize is a combination of the self-predictive loss for the observations (the negated lower bound) and the negative log-likelihood of the mass in the state representation:

$$
\mathcal{L} = -\mathrm{ELBO} + {\mathcal{L}}_{m} \tag{14}
$$

In practice, we sample sequences of length less than $T$ and initialize the filter using beliefs stored in the dataset, in a truncated backpropagation-through-time fashion.

## B. Information-gathering model-predictive controller

The goal is to control the belief-space process in a way that collects information about the property we are trying to perceive. The belief space for continuous systems is generally infinite-dimensional (the space of probability distributions over the state space) and thus intractable with traditional control tools. However, by approximating the belief with a parametric family (a Gaussian in our case), the problem can be formulated as a standard finite-dimensional continuous control problem. This is what we tackle here.

a) Belief dynamics: We can use the learned world model to simulate the belief-space dynamics, as illustrated in Figure 1. The key is to use the learned observation model to predict the future uncertainty about the state, rather than merely predicting future states.



Fig. 1. Illustration of the sampling process for belief-space planning using a generative model. First, states are sampled from the current belief. We can then use our dynamics model and candidate actions to sample future states. These future states are given to our generative observation model to generate observations. We can then feed the generated observations and candidate actions to the state estimator to simulate the belief-space dynamics.

b) Cost function: We want our controller to minimize the entropy $H$ of the belief over the property:

$$
J = \mathop{\sum }\limits_{{t = 1}}^{T} H\left( b_t^1 \right) \tag{15}
$$

so as to minimize the uncertainty about the object property as early as possible in the episode (compared to a final-cost formulation). Since the entropy of a one-dimensional Gaussian with variance ${\sigma}^{2}$ is $\frac{1}{2}\log \left( 2\pi e {\sigma}^{2} \right)$, minimizing this cost for a Gaussian belief ${b}_{t} = \left( \mu_t, \Sigma_t \right)$ is equivalent to minimizing

$$
J = \mathop{\sum }\limits_{{t = 1}}^{T} \log \Sigma_t^{11} \tag{16}
$$

c) Optimizer: In this work, we use a sampling-based optimizer that selects, among randomly generated action sequences, the one minimizing the cost. The sequences are generated as a Gaussian random walk in three dimensions with a standard deviation of ${10}\mathrm{\;{cm}}$. Following the model-predictive control framework, we execute only the first action of the sequence and then re-optimize.
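
The optimizer can be sketched as random shooting over the simulated belief dynamics. In the sketch below, `simulate_log_variance` is a hypothetical stand-in for rolling out the learned generative model and the filter (Fig. 1): it simply assumes larger motions are more informative and shrink the property variance, which is not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
H, K = 10, 64        # planning horizon, number of candidate sequences
sigma_walk = 0.10    # 10 cm standard deviation for the random-walk steps

def simulate_log_variance(actions, var0=1.0):
    """Toy stand-in for the simulated belief dynamics: each action shrinks
    the variance of the property estimate in proportion to its magnitude.
    Returns the accumulated cost of Eq. (16)."""
    var, cost = var0, 0.0
    for a in actions:
        var *= 1.0 / (1.0 + np.linalg.norm(a))
        cost += np.log(var)
    return cost

# Candidate action sequences: Gaussian random-walk steps in 3-D.
action_seqs = rng.normal(scale=sigma_walk, size=(K, H, 3))
costs = np.array([simulate_log_variance(seq) for seq in action_seqs])
best = action_seqs[np.argmin(costs)]

# MPC: execute only the first action of the best sequence, then re-optimize.
first_action = best[0]
print(first_action.shape)  # (3,)
```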

## C. Full training and control loop

During training, we follow this procedure:

1) Collect data using the current controller for one epoch (randomizing the object property of interest), saving the observations, actions and estimated beliefs, as well as the ground-truth object property for the epoch

2) Train the state estimator on the dataset

3) Update the stored beliefs in the dataset (by replaying the actions and observations)

Step 3) does not have to be done every epoch and can be costly as the dataset grows, but it is important for performing truncated backpropagation through time and initializing our state estimate during training.
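
The three steps above can be sketched as the following loop; `collect`, `train_filter`, and `replay` are hypothetical stubs standing in for the rollout, filter-training, and belief-refresh code, which are not specified here:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Episode:
    actions: List[float]
    observations: List[float]
    ground_truth: float
    beliefs: List[float] = field(default_factory=list)

def collect(epoch):
    # 1) Rollout with the current controller, randomized object property.
    return Episode(actions=[0.1] * 3, observations=[0.0] * 3,
                   ground_truth=1.0 + epoch)

def train_filter(dataset):
    # 2) One pass of state-estimator training (Eq. 14) -- stubbed out.
    return len(dataset)

def replay(episode):
    # 3) Recompute stored beliefs by replaying actions/observations through
    #    the updated filter (needed to initialize truncated BPTT).
    return [episode.ground_truth] * len(episode.actions)

dataset: List[Episode] = []
for epoch in range(4):
    dataset.append(collect(epoch))  # step 1
    train_filter(dataset)           # step 2
    if epoch % 2 == 0:              # step 3, done only periodically
        for ep in dataset:
            ep.beliefs = replay(ep)

print(len(dataset))  # 4
```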



Fig. 2. MAE for the property-estimation tasks at the end of the episode, averaged over 5 runs, as learning progresses. The hand-engineered policy gives an upper bound on what can be achieved when the behavior need not be discovered and we simply have to extract the mass from a sequence of sensor readings.

## IV. EXPERIMENTS

We set up a custom robosuite [9] environment for our experiments. The robot is a Franka Emika arm with a palm-shaped end-effector (as shown in Figure 3) and a force-torque sensor at the wrist. At each episode, a cube of the same size and visual appearance is laid down at the same location, with only its mass changing. We use position control with translation only. The observations are low-level for now: joint pose and velocity, object pose, and force and torque at the wrist.

## A. Mass estimation

The first task is to learn to estimate the mass of a cube. The cube has constant size and friction coefficient, but its mass changes randomly between $1\mathrm{\;{kg}}$ and $2\mathrm{\;{kg}}$ between episodes. Because the robot has no gripper, just a palm, it cannot pick up the object, but it can push it and extract the mass from the force and torque readings generated by the push.

## B. Height estimation

The second task is to learn to estimate the height of a block, randomized between $1\mathrm{\;{cm}}$ and ${15}\mathrm{\;{cm}}$. The force-torque sensor, in this scenario, also acts as a contact detector. The expected behavior is to come down until contact is made, at which point the height can be extracted from forward kinematics (keeping in mind that our method has no concept of forward kinematics embedded in it). One subtlety is that the arm must position itself above the box before moving down, as it can otherwise make contact with the table instead.
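
This expected procedure can be illustrated with a 1-D toy sketch (the block height, step size, and force model below are all made up; the learned policy discovers this behavior rather than having it built in):

```python
true_height = 0.12      # hypothetical block height (m)
z, step = 0.30, 0.005   # start above the block; descend in 5 mm steps

def force_sensor(z_ee):
    # Zero in free space, large once the palm presses on the block.
    return 0.0 if z_ee > true_height else 50.0

while force_sensor(z) < 1.0:  # move straight down until contact is felt
    z -= step

# "Forward kinematics": the end-effector height at first contact is the
# height estimate, accurate to within the descent step size.
estimated_height = z
print(estimated_height)
```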



Fig. 3. Demonstration of the learned controller for mass estimation. We can see that it learns to stably push the object to extract the mass from force-torque readings. Notice how the uncertainty goes down as the arm starts pushing the block.

## V. RESULTS

Every 5000 environment steps, we run the evaluation procedure. It consists of running 5 episodes with a randomized object property and computing the MAE, where the absolute error is computed using the estimate at the last timestep of the episode. The training curves, showing the evolution of the MAE for the different tasks, are shown in Figure 2. The graph also shows a line for which an information-gathering policy was hand-coded by a human and only the state estimator was trained: straight pushing for mass, and coming down to touch the block for height. It is meant as an approximate upper bound for the information-gathering controller.

We can see that as learning progresses, two things happen concurrently:

1) the agent learns to perform informative actions. In the case of mass estimation, the policy pushes the block stably as shown in Figure 3. In the case of height estimation, the policy goes down in a straight line until it touches the block.



Fig. 4. Demonstration of the learned controller for height estimation. We can see that it learns to come down and adjust its estimate as it moves through free space, until touching the block.

2) the state estimator learns to extract the mass from the raw observations generated by the informative actions. For example, during height estimation, the uncertainty remains high until the end-effector touches the block, at which point the estimate jumps to the correct height.

It is important to note that the pushing strategy is in no way encoded in the agent; initial trajectories are simply random walks in the workspace.

## VI. CONCLUSION

With the goal of discovering active tactile perception behaviors to measure object properties, we designed a learning-based state estimator and an information-gathering controller. Together, these two pieces allow a simulated robot to discover a pushing strategy for mass estimation and a top-down patting strategy for height estimation, without any prior on what the trajectory should be. This opens the door to learning more complex information-gathering policies, such as those for estimating the center of mass, hardness, friction coefficient, and more.

## REFERENCES

[1] S. J. Lederman and R. L. Klatzky. "Hand movements: A window into haptic object recognition". In: Cognitive Psychology 19.3 (1987), pp. 342-368.

[2] M. A. Lee, B. Yi, R. Martín-Martín, S. Savarese, and J. Bohg. "Multimodal Sensor Fusion with Differentiable Filters". In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2020, pp. 10444-10451.

[3] T. Haarnoja, A. Ajay, S. Levine, and P. Abbeel. "Backprop KF: Learning Discriminative Deterministic State Estimators". In: Advances in Neural Information Processing Systems. Ed. by D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett. Vol. 29. Curran Associates, Inc., 2016.

[4] M. C. Burkhart, D. M. Brandman, B. Franco, L. R. Hochberg, and M. T. Harrison. "The Discriminative Kalman Filter for Bayesian Filtering with Nonlinear and Nongaussian Observation Models". In: Neural Computation 32.5 (2020), pp. 969-1017.

[5] M. Denil, P. Agrawal, T. D. Kulkarni, T. Erez, P. Battaglia, and N. De Freitas. "Learning to perform physics experiments via deep reinforcement learning". In: ICLR (2017).

[6] C. Wang, S. Wang, B. Romero, F. Veiga, and E. Adelson. "SwingBot: Learning Physical Features from In-hand Tactile Exploration for Dynamic Swing-up Manipulation". In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2020, pp. 5633-5640.

[7] S. Särkkä. Bayesian Filtering and Smoothing. Cambridge University Press, 2013.

[8] D. P. Kingma and M. Welling. "Auto-Encoding Variational Bayes". In: International Conference on Learning Representations (ICLR) (2013).

[9] Y. Zhu, J. Wong, A. Mandlekar, and R. Martín-Martín. "robosuite: A Modular Simulation Framework and Benchmark for Robot Learning". In: arXiv preprint arXiv:2009.12293 (2020).
papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/_4tcqR3nQII/Initial_manuscript_tex/Initial_manuscript.tex
ADDED

§ LEARNING ACTIVE TACTILE PERCEPTION THROUGH BELIEF-SPACE CONTROL

Jean-François Tremblay, Johanna Hansen, David Meger, Francois Hogan, Gregory Dudek

Abstract- Robots operating in an open world can encounter novel objects with unknown physical properties, such as mass, friction, or size. It is desirable to sense these properties through contact-rich interaction before performing downstream tasks with the objects. We propose a method for autonomously learning active tactile perception policies by learning a generative world model that leverages a differentiable Bayesian filtering algorithm, and by designing an information-gathering model predictive controller. We test the method on two simulated tasks: mass estimation and height estimation. Our method discovers policies that gather information about the desired property in an intuitive manner.

§ I. INTRODUCTION

Robots operating in an open world can encounter arbitrary, unseen objects and are expected to manipulate them effectively. To achieve this, robots must be able to infer the physical properties of unknown objects through physical interaction. Online measurement of these properties is key for robots to operate robustly in the real world with open-ended object categories.

Psychology literature refers to the procedures humans use to measure these properties as exploratory procedures [1]; examples include pressing to test for object hardness and lifting to estimate object mass. Exploratory procedures are challenging to hand-engineer and vary with the object class. This work focuses on learning such exploratory procedures to estimate object properties through belief-space control. Using a combination of 1) learning-based state estimation to infer the property from a sequence of observations and actions, and 2) information-gathering model-predictive control (MPC), we demonstrate that it is possible to learn to execute actions that are informative about the property of interest and to discover exploratory procedures without any human priors.

§ II. RELATED WORKS

§ A. LEARNING FOR STATE-ESTIMATION

Several works propose fusing Bayesian filtering methods with deep learning, where the dynamics and observation models are learned neural networks.

Lee et al. [2] provide a good overview of learned Bayesian filtering models for robotics applications, and release torchfilter, a library of algorithms for this purpose on which we build our belief-space control algorithm.

In [3], the authors present the Backprop Kalman filter, described as a discriminative approach to filtering. Discriminative filtering does away with learning an observation model (a mapping from state to observation) and instead learns a mapping from observation to state. Here, we argue that learning a generative observation model, while more computationally challenging, is key to predicting future state uncertainty and planning for informative actions.

Burkhart et al. [4] present the discriminative Kalman filter, concurrently with [3]. This approach assumes linear dynamics, models the prior over observations as Gaussian, and can only handle stationary observation processes.

§ B. ACTIVE PERCEPTION

Active perception consists of acting in a way that assists perception, and can incorporate learning, including the learning methods above. Denil et al. [5] use reinforcement learning in "Which is Heavier" and "Tower" environments. In the former, the goal is to push blocks and, after an interaction period, take a "labelling action" to guess which block is heavier; a reward is given if the label is correct. They train a recurrent deep reinforcement learning policy on this environment. The action space is constrained and designed so that the blocks are pushed with a fixed force towards their center of mass. While this method enables the robot to effectively retrieve mass using human priors and intuition, our work differs in that the robot is tasked with discovering such behaviors autonomously with an unconstrained action space.

More specific to robotics, Wang et al. [6] introduce SwingBot, a robotic system that swings up an object with changing physical properties (moments, center of mass). Before the swing-up phase, the system follows a hand-engineered exploratory procedure that shakes and tilts the object in the hand to extract the information necessary for a successful swing-up. Rather than engineering the exploration phase, we propose a generic framework for extracting such information before accomplishing a given task.

§ III. METHODS

We are in a controlled hidden Markov model (HMM) setting (a partially observable Markov decision process (POMDP) without a reward function), where each observation ${o}_{t}$ gives partial information about the state of the robot and object we are interested in. More formally, a controlled HMM is a tuple $\left( \mathcal{S}, \mathcal{A}, p\left( {s}_{t+1} \mid {s}_{t}, {a}_{t} \right), \Omega, p\left( {o}_{t} \mid {s}_{t} \right) \right)$, where the state, action and observation spaces ($\mathcal{S}$, $\mathcal{A}$ and $\Omega$) are ${\mathbb{R}}^{n}$, ${\mathbb{R}}^{m}$ and ${\mathbb{R}}^{d}$ respectively. It is important to note that in this context, the state can contain the robot pose and velocity, the object pose and velocity, object properties, and any property that describes the environment and is subject to change either during or in between episodes. The state representation is learned in a self-supervised fashion, as described in § III-A, in such a way that the first element of the state represents the object property of interest:

$$
{s}_{t} = \left( {{m}_{t},{z}_{t}}\right), \quad {m}_{t} \in \mathbb{R},\; {z}_{t} \in {\mathbb{R}}^{n - 1}. \tag{1}
$$

We are in an episodic setting with final timestep $T$, where the object is randomized at each episode. Taking mass estimation as an example: at each episode, an object with a different mass is presented, and the goal is to infer the mass of this new object.

In § III-A we describe how to infer the belief state (containing an estimate of the object property of interest) ${b}_{t} \approx p\left( {s}_{t} \mid {a}_{0}, \ldots, {a}_{t-1}, {o}_{1}, \ldots, {o}_{t} \right)$ and its predicted counterpart ${\bar{b}}_{t} \approx p\left( {s}_{t} \mid {a}_{0}, \ldots, {a}_{t-1}, {o}_{1}, \ldots, {o}_{t-1} \right)$. In § III-B we use that estimate to design an information-gathering controller. Finally, in § III-C we present how to integrate these two pieces in a data-collection/training and control loop.

§ A. LEARNING-BASED KALMAN FILTER

Here the goal is to learn a dynamics and observation model while performing belief-state inference. The dynamics model representing $p\left( {{s}_{t} \mid {s}_{t - 1},{a}_{t - 1}}\right)$ is

$$
{s}_{t} = {f}_{\theta }\left( {{s}_{t - 1},{a}_{t - 1}}\right) + {\Sigma }_{\theta }\left( {{s}_{t - 1},{a}_{t - 1}}\right) {w}_{t} \tag{2}
$$

where ${w}_{t}$ are independent and identically distributed (IID) standard Gaussian random variables in ${\mathbb{R}}^{n}$.
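As a concrete sketch, one stochastic transition of (2) can be simulated as follows; the linear maps `f` and `Sigma` are illustrative stand-ins for the learned networks ${f}_{\theta}$ and ${\Sigma}_{\theta}$, not the paper's models.

```python
import numpy as np

def dynamics_step(s, a, f, Sigma, rng):
    """One stochastic transition s_t = f(s_{t-1}, a_{t-1}) + Sigma(s_{t-1}, a_{t-1}) w_t."""
    w = rng.standard_normal(s.shape[0])   # w_t ~ N(0, I_n)
    return f(s, a) + Sigma(s, a) @ w

n, m = 4, 2
rng = np.random.default_rng(0)
A = 0.9 * np.eye(n)
B = rng.standard_normal((n, m))
f = lambda s, a: A @ s + B @ a            # stand-in mean dynamics
Sigma = lambda s, a: 0.1 * np.eye(n)      # stand-in state-dependent noise scale
s_next = dynamics_step(np.zeros(n), np.ones(m), f, Sigma, rng)
```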

Generative filtering (as opposed to discriminative filtering [2, 3]) implies learning a generative world model, able to fully simulate the system and generate observations via

$$
{o}_{t} = {h}_{\theta }\left( {s}_{t}\right) + {\Gamma }_{\theta }\left( {s}_{t}\right) {v}_{t}, \tag{3}
$$

where ${v}_{t}$ are IID standard Gaussian random variables in ${\mathbb{R}}^{d}$. While learning this model can be more challenging in the face of high-dimensional and complex observation spaces (e.g. images), it opens up new avenues for forward belief-space planning.

Using an explicit-likelihood (Gaussian state-space model) setting, we train the model in a self-predictive manner. In (4)-(8), we present the derivation of the loss for the generative observation model. This derivation is adapted from [7], Chapter 12, with action variables integrated in.

$$
p\left( {{o}_{1},\ldots ,{o}_{T} \mid \theta ,{a}_{0},\ldots ,{a}_{T - 1}}\right) \tag{4}
$$

$$
= \mathop{\prod }\limits_{{t = 1}}^{T}p\left( {{o}_{t} \mid \theta ,{o}_{1},\ldots ,{o}_{t - 1},{a}_{0},\ldots ,{a}_{t - 1}}\right) \tag{5}
$$

$$
= \mathop{\prod }\limits_{{t = 1}}^{T}{\int }_{{\mathbb{R}}^{n}}p\left( {{o}_{t} \mid \theta ,{s}_{t}}\right) p\left( {{s}_{t} \mid \theta ,{o}_{1},\ldots ,{o}_{t - 1},{a}_{0},\ldots ,{a}_{t - 1}}\right) d{s}_{t} \tag{6}
$$

$$
\approx \mathop{\prod }\limits_{{t = 1}}^{T}{\int }_{{\mathbb{R}}^{n}}p\left( {{o}_{t} \mid \theta ,{s}_{t}}\right) {\bar{b}}_{t}\left( {{s}_{t} \mid \theta }\right) d{s}_{t} \tag{7}
$$

$$
= \mathop{\prod }\limits_{{t = 1}}^{T}{\mathbf{E}}_{{s}_{t} \sim {\bar{b}}_{t}\left( {{s}_{t} \mid \theta }\right) }p\left( {{o}_{t} \mid \theta ,{s}_{t}}\right) \tag{8}
$$

Here ${\bar{b}}_{t}$ is the output of the predict step of our filter with inputs ${b}_{t - 1}$ and ${a}_{t - 1}$; it is only an approximation of $p\left( {s}_{t} \mid \theta, {o}_{1}, \ldots, {o}_{t-1}, {a}_{0}, \ldots, {a}_{t-1} \right)$. Taking the logarithm, applying Jensen's inequality to obtain a lower bound, and computing the empirical mean, we get:

$$
\log p\left( {{o}_{1},\ldots ,{o}_{T} \mid \theta ,{a}_{0},\ldots ,{a}_{T - 1}}\right) \tag{9}
$$

$$
\gtrapprox \mathop{\sum }\limits_{{t = 1}}^{T}\frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}\log p\left( {{o}_{t} \mid \theta ,{s}_{t}^{i}}\right) \;{s}_{t}^{i} \sim {\bar{b}}_{t}\left( {{s}_{t} \mid \theta }\right) \tag{10}
$$

$$
=: \text{ELBO} \tag{11}
$$

Equations (9)-(11) give us a lower bound on the log-likelihood (similar to the ELBO loss in VAEs [8]) with which to train our model, leveraging the differentiable approximate inference used to compute ${\bar{b}}_{t}$. Because ${\bar{b}}_{t} = \mathcal{N}\left( {s}_{t} \mid {\bar{\mu }}_{t}, {\bar{\Sigma }}_{t} \right)$, we can use the reparametrization trick to sample ${s}_{t}^{i}$: sample ${\xi }^{i}$ from an $n$-dimensional standard Gaussian, then let

$$
{s}_{t}^{i} = {\bar{\mu }}_{t} + {\bar{\Sigma }}_{t}{\xi }^{i} \tag{12}
$$
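A minimal numpy sketch of this sampling step; note that when $\bar{\Sigma}_{t}$ denotes a full covariance matrix, the factor applied to ${\xi}^{i}$ is in practice a square root of it (here a Cholesky factor), an assumption made explicit below.

```python
import numpy as np

def reparam_sample(mu_bar, Sigma_bar, num_samples, rng):
    """Sample s_t^i ~ N(mu_bar, Sigma_bar) via the reparametrization trick."""
    L = np.linalg.cholesky(Sigma_bar)                         # L @ L.T = Sigma_bar
    xi = rng.standard_normal((num_samples, mu_bar.shape[0]))  # xi ~ N(0, I_n)
    return mu_bar + xi @ L.T

rng = np.random.default_rng(1)
mu = np.array([2.0, -1.0])
Sigma = np.array([[0.5, 0.1], [0.1, 0.3]])
samples = reparam_sample(mu, Sigma, 50_000, rng)
emp_mean = samples.mean(axis=0)   # should approach mu
emp_cov = np.cov(samples.T)       # should approach Sigma
```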

$\theta$ represents the parameters of $f, \Sigma, h, \Gamma$, which are neural networks. We jointly perform state estimation and parameter optimization by estimating ${b}_{t} = \left( {\mu }_{t}, {\Sigma }_{t} \right)$ using an extended Kalman filter (EKF), the operations of which are all differentiable (as shown for example by Lee et al. [2]), and maximizing the likelihood of the ground-truth object property of interest. For example, if mass is of interest, the loss for an episode where the ground-truth mass is $m$ would be:

$$
{\mathcal{L}}_{m} = - \mathop{\sum }\limits_{{t = 1}}^{T}\log \mathcal{N}\left( {m \mid {\mu }_{t}^{1},{\Sigma }_{t}^{11}}\right) \tag{13}
$$

where $\mathcal{N}\left( \cdot \mid \mu, \sigma \right)$ is a Gaussian pdf with mean $\mu$ and variance $\sigma$. The first element of the state represents the mass, and we maximize its log-likelihood.
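A minimal sketch of the per-episode loss (13); `gaussian_logpdf` and `mass_loss` are hypothetical helpers for illustration, not the paper's implementation.

```python
import math

def gaussian_logpdf(x, mean, var):
    """Log-density of N(mean, var) evaluated at x."""
    return -0.5 * (math.log(2.0 * math.pi * var) + (x - mean) ** 2 / var)

def mass_loss(m, mu_first, var_first):
    """Negative log-likelihood of the true mass m under the per-step beliefs.

    mu_first[t] and var_first[t] are the first component of the belief mean
    and the (1, 1) entry of its covariance at timestep t.
    """
    return -sum(gaussian_logpdf(m, mu, var)
                for mu, var in zip(mu_first, var_first))

# A confident, correct belief incurs lower loss than a confident, wrong one.
good = mass_loss(1.5, [1.5], [0.01])
bad = mass_loss(1.5, [1.0], [0.01])
```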

The loss we minimize is a combination of the self-predictive loss for the observation, and the likelihood of the mass in the state representation:

$$
\mathcal{L} = -\mathrm{ELBO} + {\mathcal{L}}_{m} \tag{14}
$$

In practice, we sample sequences of length less than $T$ and initialize the filter using stored beliefs in the dataset, in a truncated backpropagation-through-time fashion.

§ B. INFORMATION-GATHERING MODEL-PREDICTIVE CONTROLLER

The goal is to control the belief-space process in a way that collects information about the property we are trying to perceive. The belief space of a continuous system is generally infinite-dimensional (the space of probability distributions over the state space) and thus intractable with traditional control tools. However, by approximating the belief with a parametric family (a Gaussian in our case), the problem can be formulated as a standard finite-dimensional continuous control problem. This is what we tackle here.

a) Belief dynamics: We can use the learned world model to simulate the belief space dynamics, as illustrated in Figure 1. The key is to be able to use the learned observation model to predict the future uncertainty about the state, rather than merely predict future states.

Fig. 1. Illustration of the sampling process for belief-space planning using a generative model. First, states are sampled from the current belief. We can then use our dynamics model and candidate actions to sample future states. These future states are given to our generative observation model to generate observations. We can then feed the generated observations and candidate actions to the state estimator to simulate the belief-space dynamics.
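The sampling process of Fig. 1 can be sketched on a linear-Gaussian toy model, where the state estimator reduces to an exact Kalman filter; all matrices here are illustrative assumptions (the paper uses learned, nonlinear models with an EKF).

```python
import numpy as np

def kf_step(mu, P, a, A, B, Q, C, R, obs):
    """One Kalman predict + update step for the belief (mu, P)."""
    mu_bar = A @ mu + B @ a                      # predict
    P_bar = A @ P @ A.T + Q
    S = C @ P_bar @ C.T + R                      # innovation covariance
    K = P_bar @ C.T @ np.linalg.inv(S)
    mu_new = mu_bar + K @ (obs - C @ mu_bar)     # update with the observation
    P_new = (np.eye(len(mu)) - K @ C) @ P_bar
    return mu_new, P_new

def simulate_belief(mu, P, actions, A, B, Q, C, R, rng):
    """Roll the belief forward by sampling states and generated observations."""
    s = rng.multivariate_normal(mu, P)                  # sample a state from the belief
    for a in actions:
        s = rng.multivariate_normal(A @ s + B @ a, Q)   # sample the next state
        obs = rng.multivariate_normal(C @ s, R)         # generate an observation
        mu, P = kf_step(mu, P, a, A, B, Q, C, R, obs)   # simulate the filter
    return mu, P

n, m = 2, 1
rng = np.random.default_rng(0)
A, B, C = np.eye(n), np.ones((n, m)), np.eye(n)
Q, R = 0.01 * np.eye(n), 0.1 * np.eye(n)
mu0, P0 = np.zeros(n), np.eye(n)
mu_T, P_T = simulate_belief(mu0, P0, [np.zeros(m)] * 5, A, B, Q, C, R, rng)
```

With informative observations, the simulated covariance shrinks over the rollout, which is exactly the quantity the controller in § III-B plans over.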

b) Cost function: We want our controller to minimize the entropy $H$ of the belief over the object property:

$$
J = \mathop{\sum }\limits_{{t = 1}}^{T}H\left( {b}_{t}^{1}\right) \tag{15}
$$

so as to minimize the uncertainty about the object property as early as possible in the episode (compared to a final-cost formulation). For a Gaussian belief ${b}_{t} = \left( {\mu }_{t}, {\Sigma }_{t} \right)$, minimizing this cost is equivalent to minimizing

$$
J = \mathop{\sum }\limits_{{t = 1}}^{T}\log {\Sigma }_{t}^{11} \tag{16}
$$
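The equivalence between (15) and (16) follows from the closed-form entropy of a scalar Gaussian,

$$
H\left( \mathcal{N}\left( \mu, {\sigma }^{2} \right) \right) = \frac{1}{2}\log \left( {2\pi e{\sigma }^{2}}\right) = \frac{1}{2}\log {\sigma }^{2} + \frac{1}{2}\log \left( {2\pi e}\right),
$$

so with ${\sigma }^{2} = {\Sigma }_{t}^{11}$ the additive constant and the positive factor $\frac{1}{2}$ do not change the minimizer.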

c) Optimizer: In this work, we use a sampling-based optimizer that selects, among randomly generated action sequences, the one minimizing the cost. Candidate sequences are generated by a Gaussian random walk in three dimensions with a standard deviation of ${10}\mathrm{\;{cm}}$. Following the model-predictive control framework, we execute only the first action of the selected sequence and then re-optimize.
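A minimal sketch of this random-shooting MPC step; `cost_fn` stands in for the simulated belief-entropy cost of (16), and all numeric parameters are illustrative assumptions.

```python
import random

def random_walk_sequences(K, horizon, dim, step_std, rng):
    """K candidate action sequences, each a Gaussian random walk."""
    seqs = []
    for _ in range(K):
        a, seq = [0.0] * dim, []
        for _ in range(horizon):
            a = [ai + rng.gauss(0.0, step_std) for ai in a]
            seq.append(list(a))
        seqs.append(seq)
    return seqs

def mpc_first_action(cost_fn, K=64, horizon=10, dim=3, step_std=0.10, seed=0):
    """Score the candidates and return only the first action of the best one."""
    rng = random.Random(seed)
    best = min(random_walk_sequences(K, horizon, dim, step_std, rng), key=cost_fn)
    return best[0]  # execute this action, then re-optimize at the next step

# Toy cost preferring motion in +x, standing in for expected information gain.
a0 = mpc_first_action(lambda seq: -sum(a[0] for a in seq))
```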

§ C. FULL TRAINING AND CONTROL LOOP

During training, we follow this procedure:

1) Collect data using the current controller for one epoch (randomizing the object property of interest), saving the observations, actions and estimated beliefs, as well as the ground-truth object property for this epoch

2) Train the state estimator using the dataset

3) Update stored beliefs in the dataset (by replaying the actions and observations)

Step 3) does not have to be done every epoch and can be costly as the dataset grows, but it is needed to initialize the state estimate when performing truncated backpropagation through time during training.
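The three steps above can be sketched as the following loop; every callable (`collect_episode`, `train_estimator`, `replay_beliefs`) is a hypothetical stand-in, not the paper's API.

```python
def training_loop(num_epochs, collect_episode, train_estimator, replay_beliefs,
                  refresh_every=5):
    """Data-collection / training loop with periodic belief refresh (step 3)."""
    dataset = []
    for epoch in range(num_epochs):
        dataset.append(collect_episode(epoch))  # 1) collect with current controller
        train_estimator(dataset)                # 2) train the state estimator
        if epoch % refresh_every == 0:          # 3) refresh stored beliefs
            replay_beliefs(dataset)
    return dataset

calls = []
data = training_loop(
    3,
    collect_episode=lambda e: {"epoch": e},
    train_estimator=lambda d: calls.append(("train", len(d))),
    replay_beliefs=lambda d: calls.append(("replay", len(d))),
)
```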

Fig. 2. MAE for the property estimation tasks at the end of the episode, averaged over 5 runs, as learning progresses. The hand-engineered policy gives an upper bound on what can be achieved when the behavior does not need to be discovered, and we simply have to extract the mass from a sequence of sensor readings.

§ IV. EXPERIMENTS

We set up a custom robosuite [9] environment for our experiments. The robot is a Franka Emika arm with a palm-shaped end-effector (as shown in Figure 3) and a force-torque sensor at the wrist. At each episode, a cube of the same size and visual appearance is laid down at the same location, with only its mass changing. We use position control, with translation only. The observations are low-level for now: joint pose and velocity, object pose, and force and torque at the wrist.

§ A. MASS ESTIMATION

The first task is to learn to estimate the mass of a cube. The cube has constant size and friction coefficient, but its mass changes randomly between $1\mathrm{\;{kg}}$ and $2\mathrm{\;{kg}}$ between episodes. Because the robot has no gripper, just a palm, it cannot pick up the object, but it should be able to push it and extract the mass from the force and torque readings generated by the push.

§ B. HEIGHT ESTIMATION

The second task is to learn to estimate the height of a block, randomized between $1\mathrm{\;{cm}}$ and ${15}\mathrm{\;{cm}}$. The force-torque sensor, in this scenario, also acts as a contact detector. The expected behavior is to come down until contact is made, at which point the height can be extracted from forward kinematics (keep in mind that our method has no concept of forward kinematics embedded in it). One subtlety is that the arm must position itself above the box before moving down, as it can otherwise make contact with the table instead.

Fig. 3. Demonstration of the learned controller for mass estimation. We can see that it learns to stably push the object to extract the mass from force-torque readings. Notice how the uncertainty goes down as the arm starts pushing the block.

§ V. RESULTS

Every 5000 environment steps, we run the evaluation procedure. It consists of running 5 episodes with a randomized object property and computing the MAE, where the absolute error is computed using the estimate at the last timestep of the episode. The training curves, showing the evolution of the MAE for the different tasks, are shown in Figure 2. The graph also shows a line where an information-gathering policy was hand-coded by a human and only the state estimator was trained: straight pushing for mass, and coming down to touch the block for height. It is meant as an approximate upper bound for the information-gathering controller.

We can see that as learning progresses, two things happen concurrently:

1) the agent learns to perform informative actions. In the case of mass estimation, the policy pushes the block stably, as shown in Figure 3. In the case of height estimation, the policy goes down in a straight line until it touches the block.

Fig. 4. Demonstration of the learned controller for height estimation. We can see that it learns to come down and adjust its estimate as it moves through free space, until touching the block.

2) the state estimator learns to extract the property of interest from the raw observations generated by the informative actions. For example, during height estimation, the uncertainty remains high until the end-effector touches the block, at which point the estimate converges to the correct height.

It is important to note that the pushing strategy is in no way encoded in the agent; initial trajectories are simply random walks in the workspace.

§ VI. CONCLUSION

With the goal of discovering active tactile perception behaviors to measure object properties, we designed a learning-based state estimator and an information-gathering controller. Together, these two pieces allowed a simulated robot to discover a pushing strategy for mass estimation and a top-down patting strategy for height estimation, without any prior on what the trajectory should be. This opens the door to learning more complex information-gathering policies, such as those for estimating the center of mass, hardness, friction coefficient and more.

papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/kMB2WAfisY/Initial_manuscript_md/Initial_manuscript.md

# Pathologies and Challenges of Using Differentiable Simulators in Policy Optimization for Contact-Rich Manipulation

H.J. Terry Suh, Max Simchowitz, Kaiqing Zhang, Tao Pang, Russ Tedrake

Abstract: Policy search methods in Reinforcement Learning (RL) have shown impressive results in contact-rich tasks such as dexterous manipulation. However, the high variance of zeroth-order Monte-Carlo gradient estimates results in slow convergence and a requirement for a high number of samples. By replacing these zeroth-order gradient estimates with first-order ones, differentiable simulators promise faster computation for policy gradient methods when the model is known. Contrary to this belief, we highlight some pathologies of using first-order gradients and show that in many physical scenarios involving rich contact, using zeroth-order gradients results in better performance. Building on these pathologies and lessons, we propose guidelines for designing differentiable simulators, as well as policy optimization algorithms that use these simulators. By doing so, we hope to reap the benefits of first-order gradients while avoiding the potential pitfalls.

## I. INTRODUCTION

Reinforcement Learning (RL) is fundamentally concerned with the problem of minimizing a stochastic objective,

$$
\mathop{\min }\limits_{\mathbf{\theta }}F\left( \mathbf{\theta }\right) = \mathop{\min }\limits_{\mathbf{\theta }}{\mathbb{E}}_{\mathbf{w}}f\left( {\mathbf{\theta },\mathbf{w}}\right) .
$$

Many algorithms in RL rely heavily on zeroth-order Monte-Carlo estimation of the gradient $\nabla F$ [27, 22]. Yet, in contact-rich robotic manipulation, where we have model knowledge and structure of the dynamics, it is possible to differentiate through the physics and obtain exact gradients of $f$, which can also be used to construct a first-order estimate of $\nabla F$. The availability of both options begs the question: given access to gradients of $f$, which estimator should we prefer?

In stochastic optimization, the theoretical benefits of using first-order estimates of $\nabla F$ over zeroth-order ones have mainly been understood through the lens of variance and convergence rates [10, 16]: the first-order estimator often (though not always) has much lower variance than the zeroth-order one, which leads to faster convergence to local minima of nonconvex smooth objective functions. However, the landscape of RL objectives that involve long-horizon sequential decision making (e.g. policy optimization) is challenging to analyze, and convergence properties in these landscapes are relatively poorly understood. In particular, contact-rich systems can display complex characteristics including nonlinearities, non-smoothness, and discontinuities (Figure 1) [29, 17, 25].

Nevertheless, lessons from convergence-rate analysis tell us that there may be benefits to using exact gradients even for these complex physical systems. Such ideas have been championed under the term "differentiable simulation", where forward simulation of physics is programmed in a manner consistent with automatic differentiation [8, 12, 28, 30, 9], or with computation of analytic derivatives [3]. These methods have shown promising results in decreasing computation time compared to zeroth-order methods [13, 8, 11, 6, 5, 19].

Fig. 1. Examples of simple optimization problems on physical systems. The goal is to: A. maximize the $y$ position of the ball after dropping; B. maximize the distance thrown, with a wall that results in inelastic impact; C. maximize the angular momentum transferred to the pivoting bar through collision. Second row: the original objective and the stochastic objective after randomized smoothing.

However, due to the complex characteristics of contact dynamics, we show that the belief that first-order gradients improve performance over zeroth-order ones does not always hold for contact-rich manipulation. We illustrate this phenomenon through a few pathologies: first, even under sufficient regularity conditions of continuity, the choice of contact model can cause the first-order gradient estimate to have higher variance than the zeroth-order one. In particular, this may occur in approaches that utilize the penalty method [14], which requires stiff dynamics to realistically simulate contact [9].

In addition, we show that many contact-rich systems display nearly or strictly discontinuous behavior in the underlying landscape. The presence of such discontinuities causes the first-order gradient estimator to be biased, while the zeroth-order one remains unbiased. Furthermore, we show that even when continuous approximations are made, such approximations are often stiff and highly Lipschitz. In these settings, the first-order estimator still suffers from what we call empirical bias in finite-sample settings. The compromise of the first-order estimator in the face of more accurate descriptions of contact dynamics hints at a fundamental tension between the realism of the dynamics and the performance of first-order gradients.

From these pathologies, we suggest simulation methods, as well as algorithms, that may improve the efficacy of first-order gradient estimates obtained using differentiable simulation. We advocate for implicit contact models that are less stiff and thus yield lower variance for the first-order gradient. In addition, we show they can be analytically smoothed to mitigate discontinuities. Finally, we introduce a method to interpolate gradients that escapes these identified pitfalls.

## II. Preliminaries

## A. Policy Optimization Setting

We study a discrete-time, finite-horizon, continuous-state control problem with states $\mathbf{x} \in {\mathbb{R}}^{n}$ , inputs $\mathbf{u} \in {\mathbb{R}}^{m}$ , transition function $\phi : {\mathbb{R}}^{n} \times {\mathbb{R}}^{m} \rightarrow {\mathbb{R}}^{n}$ , and horizon $H \in \mathbb{N}$ . Given a sequence of costs ${c}_{h} : {\mathbb{R}}^{n} \times {\mathbb{R}}^{m} \rightarrow \mathbb{R}$ , a family of policies ${\pi }_{h}\left( {\cdot , \cdot }\right) : {\mathbb{R}}^{n} \times {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{m}$ parameterized by $\mathbf{\theta } \in {\mathbb{R}}^{d}$ , and a sequence of injected noise terms ${\mathbf{w}}_{1 : H} \in {\left( {\mathbb{R}}^{m}\right) }^{H}$ , we define the cost-to-go functions

$$
{V}_{h}\left( {{\mathbf{x}}_{h},{\mathbf{w}}_{h : H},\mathbf{\theta }}\right) = \mathop{\sum }\limits_{{{h}^{\prime } = h}}^{H}{c}_{{h}^{\prime }}\left( {{\mathbf{x}}_{{h}^{\prime }},{\mathbf{u}}_{{h}^{\prime }}}\right) ,
$$

$$
\text{s.t.}{\mathbf{x}}_{{h}^{\prime } + 1} = \phi \left( {{\mathbf{x}}_{{h}^{\prime }},{\mathbf{u}}_{{h}^{\prime }}}\right) ,{\mathbf{u}}_{{h}^{\prime }} = \pi \left( {{\mathbf{x}}_{{h}^{\prime }},\mathbf{\theta }}\right) + {\mathbf{w}}_{{h}^{\prime }},{h}^{\prime } \geq h\text{.}
$$

Our aim is to minimize the policy optimization objective

$$
F\left( \mathbf{\theta }\right) \mathrel{\text{:=}} {\mathbb{E}}_{{\mathbf{x}}_{1} \sim \rho }{\mathbb{E}}_{{\mathbf{w}}_{h}\overset{\text{ i.i.d. }}{ \sim }p}{V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{w}}_{1 : H},\mathbf{\theta }}\right) , \tag{1}
$$

where $\rho$ is a distribution over initial states ${\mathbf{x}}_{1}$, and ${\mathbf{w}}_{1},\ldots ,{\mathbf{w}}_{H}$ are i.i.d. according to $p$, which we assume to be a zero-mean Gaussian with covariance ${\sigma }^{2}{I}_{m}$.

## B. Zeroth-Order Estimator

The policy gradient can be estimated using only samples of the function values [31].

Definition II.1. Given a single zeroth-order estimate of the policy gradient ${\widehat{\nabla }}^{\left\lbrack 0\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right)$ , we define the zeroth-order batched gradient (ZoBG) ${\bar{\nabla }}^{\left\lbrack 0\right\rbrack }F\left( \mathbf{\theta }\right)$ as the sample mean,

$$
{\widehat{\nabla }}^{\left\lbrack 0\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) \mathrel{\text{:=}} \frac{1}{{\sigma }^{2}}{V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{w}}_{1 : H}^{i},\mathbf{\theta }}\right) \left\lbrack {\mathop{\sum }\limits_{{h = 1}}^{H}{\mathrm{D}}_{\mathbf{\theta }}\pi {\left( {\mathbf{x}}_{h}^{i},\mathbf{\theta }\right) }^{\top }{\mathbf{w}}_{h}^{i}}\right\rbrack
$$

$$
{\bar{\nabla }}^{\left\lbrack 0\right\rbrack }F\left( \mathbf{\theta }\right) \mathrel{\text{:=}} \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\widehat{\nabla }}^{\left\lbrack 0\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) ,
$$

where ${\mathbf{x}}_{h}^{i}$ is the state at time $h$ of the trajectory induced by the noise ${\mathbf{w}}_{1 : H}^{i}$, $i$ is the index of the sample trajectory, and ${\mathrm{D}}_{\mathbf{\theta }}\pi$ is the Jacobian matrix $\partial \pi /\partial \mathbf{\theta } \in {\mathbb{R}}^{m \times d}$.

The hat notation denotes a per-sample Monte-Carlo estimate, and the bar notation a sample mean. The ZoBG is also referred to as the REINFORCE [31], score-function, or likelihood-ratio gradient. In practice, a baseline term $b$ is subtracted from ${V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{w}}_{1 : H}^{i},\mathbf{\theta }}\right)$ for variance reduction; one example is the zero-noise rollout $b = {V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{0}}_{1 : H},\mathbf{\theta }}\right)$.
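As a self-contained toy instance (not from the paper), take $H = 1$, a scalar policy $u = \mathbf{\theta} + \mathbf{w}$, and cost $V(u) = u^{2}$, so that $F(\mathbf{\theta}) = \mathbf{\theta}^{2} + \sigma^{2}$ and the true gradient is $2\mathbf{\theta}$; the ZoBG with the zero-noise-rollout baseline can then be sketched as:

```python
import random

def zobg(theta, sigma, N, seed=0, use_baseline=True):
    """Score-function (ZoBG) estimate of dF/dtheta for V(u) = u^2, u = theta + w."""
    rng = random.Random(seed)
    b = theta ** 2 if use_baseline else 0.0  # zero-noise rollout V_1(x_1, 0, theta)
    total = 0.0
    for _ in range(N):
        w = rng.gauss(0.0, sigma)
        V = (theta + w) ** 2
        total += (V - b) * w / sigma ** 2    # D_theta pi = 1 for this policy
    return total / N

est = zobg(theta=1.5, sigma=0.3, N=200_000)  # true gradient is 2 * 1.5 = 3
```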

## C. First-Order Estimator

In differentiable simulators, the gradients of the dynamics $\phi$ and costs ${c}_{h}$ are available almost surely (i.e., with probability one). Hence, one may compute the exact gradients ${\nabla }_{\mathbf{\theta }}{V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{w}}_{1 : H},\mathbf{\theta }}\right)$ by automatic differentiation and average them to estimate $\nabla F\left( \mathbf{\theta }\right)$.

Definition II.2. Given a single first-order gradient estimate ${\widehat{\nabla }}^{\left\lbrack 1\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right)$ , we define the first-order batched gradient (FoBG) as the sample mean:

$$
{\widehat{\nabla }}^{\left\lbrack 1\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) \mathrel{\text{:=}} {\nabla }_{\mathbf{\theta }}{V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{w}}_{1 : H}^{i},\mathbf{\theta }}\right)
$$

$$
{\bar{\nabla }}^{\left\lbrack 1\right\rbrack }F\left( \mathbf{\theta }\right) \mathrel{\text{:=}} \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\widehat{\nabla }}^{\left\lbrack 1\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) .
$$

The FoBG is also referred to as the reparametrization gradient [15], the pathwise derivative [21], or Backpropagation Through Time (BPTT).
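On a smooth toy problem the FoBG behaves as advertised; for the one-step objective $V = (\mathbf{\theta} + \mathbf{w})^{2}$ (an illustrative example, not from the paper), each per-sample exact derivative is $2(\mathbf{\theta} + \mathbf{w})$, and their average concentrates quickly on the true gradient $2\mathbf{\theta}$:

```python
import random

def fobg(theta, sigma, N, seed=0):
    """First-order batched gradient for V = (theta + w)^2: mean of 2*(theta + w)."""
    rng = random.Random(seed)
    return sum(2.0 * (theta + rng.gauss(0.0, sigma)) for _ in range(N)) / N

est = fobg(1.5, 0.3, 10_000)  # true gradient is 3; variance is only 4 sigma^2 / N
```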

## III. PITFALLS OF FIRST-ORDER GRADIENTS

In this section, we show pathologies in contact-rich systems for which the FoBG can perform worse than the ZoBG.

## A. Bias under discontinuities

Under standard regularity conditions, both estimators are well known to be unbiased estimators of the true gradient $\nabla F\left( \mathbf{\theta }\right)$. However, care must be taken to define these conditions precisely, as they are violated for contact-rich systems. Fortunately, the ZoBG is still unbiased under mild assumptions,

$$
\mathbb{E}\left\lbrack {{\bar{\nabla }}^{\left\lbrack 0\right\rbrack }F\left( \mathbf{\theta }\right) }\right\rbrack = \nabla F\left( \mathbf{\theta }\right) .
$$

In contrast, the FoBG requires strong continuity conditions for unbiasedness; under Lipschitz continuity, however, it is indeed unbiased.

Lemma III.1. If $\phi \left( {\cdot , \cdot }\right)$ is locally Lipschitz and ${c}_{h}\left( {\cdot , \cdot }\right) \in {C}^{\infty }$ , then ${\bar{\nabla }}^{\left\lbrack 1\right\rbrack }F\left( \mathbf{\theta }\right)$ is defined almost surely, and

$$
\mathbb{E}\left\lbrack {{\bar{\nabla }}^{\left\lbrack 1\right\rbrack }F\left( \mathbf{\theta }\right) }\right\rbrack = \nabla F\left( \mathbf{\theta }\right) .
$$

Lemma III.1 tells us that the FoBG can fail when applied to discontinuous landscapes. We illustrate a simple case of bias through a counterexample.

Example III.2 (Heaviside) [2, 25]. Consider the Heaviside function,

$$
f\left( {\mathbf{\theta },\mathbf{w}}\right) = H\left( {\mathbf{\theta } + \mathbf{w}}\right) ,\;H\left( t\right) = {\mathbb{1}}_{t \geq 0}
$$

whose stochastic objective becomes the error function

$$
F\left( \mathbf{\theta }\right) = {\mathbb{E}}_{\mathbf{w}}\left\lbrack {H\left( {\mathbf{\theta } + \mathbf{w}}\right) }\right\rbrack = \operatorname{erf}\left( {-\mathbf{\theta };{\sigma }^{2}}\right) .
$$

However, since ${\nabla }_{\mathbf{\theta }}H\left( {\mathbf{\theta } + \mathbf{w}}\right) = 0$ for all $\mathbf{w} \neq -\mathbf{\theta }$, every sampled gradient is zero almost surely, and the FoBG vanishes identically. The true derivative is a Dirac delta at $\mathbf{\theta } = -\mathbf{w}$, which the samples never see; hence the FoBG is biased, as the gradient of the stochastic objective, a Gaussian density, is non-zero at every $\mathbf{\theta }$. We further note that the empirical variance of the FoBG estimator in this example is zero. The ZoBG, on the other hand, escapes this problem and provides an unbiased estimate, since its finite differences integrate over intervals that contain the mass of the delta.
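A small simulation of this example (illustrative code, not the paper's): the per-sample first-order gradient is zero for every draw, so the FoBG is identically zero, while the ZoBG recovers the Gaussian-shaped gradient of the smoothed objective (about $0.399$ at $\mathbf{\theta} = 0$, $\sigma = 1$):

```python
import random

def heaviside(t):
    return 1.0 if t >= 0 else 0.0

def fobg(theta, sigma, N, seed=0):
    """d/dtheta H(theta + w) = 0 almost surely, so every sample contributes 0."""
    return sum(0.0 for _ in range(N)) / N

def zobg(theta, sigma, N, seed=0):
    """Score-function estimate: sample mean of H(theta + w) * w / sigma^2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(N):
        w = rng.gauss(0.0, sigma)
        total += heaviside(theta + w) * w / sigma ** 2
    return total / N

f_est = zobg.__defaults__ and fobg(0.0, 1.0, 200_000)  # identically zero
z_est = zobg(0.0, 1.0, 200_000)                        # near 1/sqrt(2*pi)
```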
|
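The two estimators can be compared directly on this example. The following is a minimal NumPy sketch of our own (not the paper's code); the zero-noise baseline and all variable names are our choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def heaviside(t):
    return np.where(t >= 0, 1.0, 0.0)

def fobg(theta, sigma, n):
    # First-order estimate: average of pathwise derivatives
    # d/dtheta H(theta + w), which vanish almost surely since H is
    # flat everywhere except at the single point theta = -w.
    w = rng.normal(0.0, sigma, size=n)
    return np.mean(np.zeros_like(w))

def zobg(theta, sigma, n):
    # Zeroth-order estimate with a zero-noise baseline:
    # (1 / sigma^2) * (f(theta, w) - f(theta, 0)) * w, averaged over samples.
    w = rng.normal(0.0, sigma, size=n)
    baseline = heaviside(theta)
    return np.mean((heaviside(theta + w) - baseline) * w) / sigma**2

sigma, n = 0.5, 200_000
true_grad = 1.0 / (np.sqrt(2.0 * np.pi) * sigma)  # Gaussian density at theta = 0
print(fobg(0.0, sigma, n))  # always 0: biased, and with zero empirical variance
print(zobg(0.0, sigma, n))  # close to true_grad: unbiased
```

The FoBG returns exactly zero for any sample size, while the ZoBG concentrates around the true gradient of the smoothed objective.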
| 122 |
+
|
| 123 |
+

|
| 124 |
+
|
| 125 |
+
Fig. 2. From left: Heaviside objective $f\left( {\mathbf{\theta },\mathbf{w}}\right)$ and stochastic objective $F\left( \mathbf{\theta }\right)$ , empirical values of the gradient estimates, and their empirical variance.
|
| 126 |
+
|
| 127 |
+
## B. The "Empirical Bias" Phenomenon
|
| 128 |
+
|
| 129 |
+
One might argue that strict discontinuity is simply an artifact of modeling choices in simulators; indeed, many simulators approximate discontinuous dynamics as a limit of continuous ones with a growing Lipschitz constant $\left\lbrack {9,7}\right\rbrack$ . In this section, we explain how this can lead to a phenomenon we call empirical bias, where the FoBG appears to have low empirical variance but is still highly inaccurate; i.e., it "looks" biased when a finite number of samples is used. Through this phenomenon, we claim that the performance degradation of first-order gradient estimates does not require strict discontinuity, but is also present in continuous yet stiff approximations of discontinuities.
|
| 130 |
+
|
| 131 |
+
Definition III.3 (Empirical bias). Let $\mathbf{z}$ be a vector-valued random variable with $\mathbb{E}\left\lbrack {\parallel \mathbf{z}\parallel }\right\rbrack < \infty$ . We say $\mathbf{z}$ has $\left( {\beta ,\Delta , S}\right)$ -empirical bias if there is a random event $\mathcal{E}$ such that $\Pr \left\lbrack \mathcal{E}\right\rbrack \geq 1 - \beta$ , and $\parallel \mathbb{E}\left\lbrack {\mathbf{z} \mid \mathcal{E}}\right\rbrack - \mathbb{E}\left\lbrack \mathbf{z}\right\rbrack \parallel \geq \Delta$ , but $\parallel \mathbf{z} - \mathbb{E}\left\lbrack {\mathbf{z} \mid \mathcal{E}}\right\rbrack \parallel \leq S$ almost surely on $\mathcal{E}$ .
|
| 132 |
+
|
| 133 |
+
A paradigmatic example of empirical bias is a random scalar $\mathbf{z}$ which takes the value 0 with probability $1 - \beta$ , and $\frac{1}{\beta }$ with probability $\beta$ . Setting $\mathcal{E} = \{ \mathbf{z} = 0\}$ , we see $\mathbb{E}\left\lbrack \mathbf{z}\right\rbrack = 1$ , $\mathbb{E}\left\lbrack {\mathbf{z} \mid \mathcal{E}}\right\rbrack = 0$ , and so $\mathbf{z}$ satisfies $\left( {\beta ,1,0}\right)$ -empirical bias. Note that $\operatorname{Var}\left\lbrack \mathbf{z}\right\rbrack = 1/\beta - 1$ ; in fact, small- $\beta$ empirical bias implies large variance more generally.
|
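The moments of this scalar example can be checked directly. A small sketch of our own, computing the exact mean and variance of $\mathbf{z}$ (all names are ours):

```python
# Paradigmatic example: z takes the value 1/beta w.p. beta, and 0 otherwise.
beta = 1e-3
values = [0.0, 1.0 / beta]
probs = [1.0 - beta, beta]

mean = sum(v * p for v, p in zip(values, probs))               # E[z] = 1
var = sum((v - mean) ** 2 * p for v, p in zip(values, probs))  # Var[z] = 1/beta - 1

print(mean, var)  # approximately 1.0 and 999.0
```

With $\beta = 10^{-3}$ , a sample of a few hundred draws will almost surely contain only zeros, so the empirical mean sits at $\mathbb{E}[\mathbf{z} \mid \mathcal{E}] = 0$ while the true mean is 1.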
| 134 |
+
|
| 135 |
+
Lemma III.4. Suppose $\mathbf{z}$ has $\left( {\beta ,\Delta , S}\right)$ -empirical bias. Then $\operatorname{Var}\left\lbrack \mathbf{z}\right\rbrack \geq \frac{{\Delta }_{0}^{2}}{\beta }$ , where ${\Delta }_{0} \mathrel{\text{:=}} \max \{ 0,\left( {1 - \beta }\right) \Delta - \beta \parallel \mathbb{E}\left\lbrack \mathbf{z}\right\rbrack \parallel \}$ .
|
| 136 |
+
|
| 137 |
+
Empirical bias naturally arises for discontinuities or stiff continuous approximations.
|
| 138 |
+
|
| 139 |
+
Example III.5 (Coulomb friction). The Coulomb model of friction is discontinuous in the relative tangential velocity between two bodies. In many simulators $\left\lbrack {9,4}\right\rbrack$ , it is common to consider a continuous approximation instead. We idealize such approximations through a piecewise linear relaxation of the Heaviside that is continuous, parametrized by the width of the middle linear region $\nu$ (which corresponds to slip tolerance).
|
| 140 |
+
|
| 141 |
+
$$
|
| 142 |
+
{\bar{H}}_{\nu }\left( t\right) = \left\{ {\begin{array}{ll} {2t}/\nu & \text{ if }\left| t\right| \leq \nu /2 \\ {2H}\left( t\right) - 1 & \text{ else } \end{array}.}\right.
|
| 143 |
+
$$
|
| 144 |
+
|
| 145 |
+
In practice, lower values of $\nu$ lead to more realistic behavior in simulation [28], but this has adverse effects for empirical bias. Considering ${f}_{\nu }\left( {\mathbf{\theta },\mathbf{w}}\right) = {\bar{H}}_{\nu }\left( {\mathbf{\theta } + \mathbf{w}}\right)$ , we have ${F}_{\nu }\left( \mathbf{\theta }\right) = {\mathbb{E}}_{\mathbf{w}}\left\lbrack {{\bar{H}}_{\nu }\left( {\mathbf{\theta } + \mathbf{w}}\right) }\right\rbrack = \operatorname{erf}\left( {\nu /2 - \theta ;{\sigma }^{2}}\right)$ . In particular, setting ${c}_{\sigma } \mathrel{\text{:=}} \frac{1}{\sqrt{2\pi }\sigma }$ , at $\mathbf{\theta } = \nu /2$ we have $\nabla {F}_{\nu }\left( \mathbf{\theta }\right) = {c}_{\sigma }$ , whereas $\nabla {f}_{\nu }\left( {\mathbf{\theta },\mathbf{w}}\right) = 0$ with probability at least $1 - {c}_{\sigma }\nu$ . Hence, the FoBG has $\left( {{c}_{\sigma }\nu ,{c}_{\sigma },0}\right)$ -empirical bias, and its variance scales with $1/\nu$ as $\nu \rightarrow 0$ . The limiting $\nu = 0$ case, corresponding to the Coulomb model, recovers the Heaviside from Example III.2: in this limit the estimator becomes biased in expectation, yet, surprisingly, has zero variance. We empirically illustrate this effect in Figure 3. We also note that more complicated models of friction (e.g., those incorporating the Stribeck effect [24]) suffer from similar problems.
|
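The $1/\nu$ variance scaling can be checked numerically. The sketch below is our own illustrative code, sampling the first-order gradient of the relaxed Heaviside at $\mathbf{\theta } = \nu /2$ with $\sigma = 1$ :

```python
import numpy as np

rng = np.random.default_rng(0)

def relaxed_heaviside_grad(t, nu):
    # Derivative of the piecewise-linear relaxation: slope 2/nu inside
    # the band |t| <= nu/2, zero outside.
    return np.where(np.abs(t) <= nu / 2, 2.0 / nu, 0.0)

sigma, n = 1.0, 1_000_000
for nu in (1.0, 0.1, 0.01):
    theta = nu / 2
    w = rng.normal(0.0, sigma, n)
    g = relaxed_heaviside_grad(theta + w, nu)
    # As nu shrinks, nonzero samples become rarer but larger,
    # so the sample variance grows roughly like 1/nu.
    print(nu, g.mean(), g.var())
```

Tightening the slip tolerance makes the simulation more realistic while making the first-order samples rarer and spikier, exactly the empirical-bias regime of Definition III.3.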
| 146 |
+
|
| 147 |
+
Example III.6 (Discontinuity in geometry). Another source of discontinuity in simulators is the discontinuity of surface normals. We show this in Figure 4, where balls that collide with a rectangular geometry create discontinuities. It is possible to make a continuous relaxation [7] by considering a smoother geometry, depicted by the addition of the dome in Figure 4. While this makes the FoBG asymptotically unbiased, the stiffness of the relaxation still results in high empirical bias.
|
| 148 |
+
|
| 149 |
+

|
| 150 |
+
|
| 151 |
+
Fig. 3. Top row: illustration of the physical system and the relaxation of Coulomb friction. Bottom row: the values of the estimators and their empirical variances as functions of the number of samples and the slip tolerance. Values of the FoBG are zero in low-sample regimes due to empirical bias. As $\nu \rightarrow 0$ , the empirical variance of the FoBG goes to zero, which appears empty in the log scale. The expected variance, however, blows up as it scales with $1/\nu$ .
|
| 152 |
+
|
| 153 |
+

|
| 154 |
+
|
| 155 |
+
Fig. 4. Left: example of ball hitting the wall. The green trajectories hit a rectangular wall, displaying discontinuities. Right: the pink trajectories collide with the dome on top, and show continuous but stiff behavior.
|
| 156 |
+
|
| 157 |
+
## C. High Variance from Stiffness
|
| 158 |
+
|
| 159 |
+
Even without the phenomenon of empirical bias, we show that certain choices of contact models can cause the FoBG to suffer from high variance. In particular, approximating rigid contact with high-stiffness spring models (i.e., the penalty method) can cause the gradient to have a high norm.
|
| 160 |
+
|
| 161 |
+
Example III.7. (Pushing with stiff contact). We demonstrate this phenomenon through a simple 1D pushing example in Figure 5, where the ZoBG has lower variance than the FoBG as stiffness increases, until numerical semi-implicit integration becomes unstable under a fixed timestep.
|
| 162 |
+
|
| 163 |
+

|
| 164 |
+
|
| 165 |
+
Fig. 5. The variance of the gradient of ${V}_{1}$ , with running cost ${c}_{h} = \parallel {\mathbf{x}}_{h} - {\mathbf{x}}^{g}{\parallel }^{2}$ , with respect to the input trajectory as the spring constant $k$ increases. Mass $m$ and damping coefficient $c$ are fixed.
|
| 166 |
+
|
| 167 |
+
## IV. TACKLING THE PATHOLOGIES: A PATH FORWARD
|
| 168 |
+
|
| 169 |
+
In this section, we comment on methods that can alleviate the pathologies identified in the previous section.
|
| 170 |
+
|
| 171 |
+
## A. Less Stiff Formulations of Contact Dynamics
|
| 172 |
+
|
| 173 |
+
In order to avoid high variance of the FoBG, we must ensure that the norm of the gradient stays low. Yet, as illustrated by Example III.7, approximating contact using stiff springs, as done in works that model contact with the penalty method, forces an inevitable trade-off between gradient stiffness and physical realism.
|
| 174 |
+
|
| 175 |
+
Therefore, we advocate less stiff contact models based on implicit time-stepping [23], whose per-time-step computation relies on solving optimization problems such as the Linear Complementarity Problem (LCP), which can be further relaxed into solving convex Quadratic Programs (QPs) [1]. The derivatives of such systems can be obtained via the implicit function theorem, by differentiating through the optimality conditions of these problems. We give one example of such a convex QP below. Correctly using gradients from implicit time-stepping can vastly improve the efficacy of the FoBG by ensuring that its norm stays reasonably bounded.
|
| 176 |
+
|
| 177 |
+
Example IV.1 (Implicit Time-Stepping for Pushing). We illustrate implicit time-stepping with a 1-dimensional example consisting of a point mass and a wall. The state of the system is $\left( {x, v}\right) \in {\mathbb{R}}^{2}$ , where $x$ is the position and $v$ the velocity of the point mass. The impenetrable wall occupies $x \leq 0$ .
|
| 178 |
+
|
| 179 |
+
The equations of motion of the system are
|
| 180 |
+
|
| 181 |
+
$$
|
| 182 |
+
m\left( {{v}_{ + } - v}\right) = u + \lambda \tag{2a}
|
| 183 |
+
$$
|
| 184 |
+
|
| 185 |
+
$$
|
| 186 |
+
{x}_{ + } = x + h{v}_{ + }, \tag{2b}
|
| 187 |
+
$$
|
| 188 |
+
|
| 189 |
+
$$
|
| 190 |
+
0 \leq {x}_{ + } \bot \lambda \geq 0, \tag{2c}
|
| 191 |
+
$$
|
| 192 |
+
|
| 193 |
+
where $\left( {{x}_{ + },{v}_{ + }}\right)$ represents the system state at the next time step; $h$ is the step size; $m$ is the mass; $u$ is the impulse applied to the point mass by actuation; and $\lambda$ is the impulse due to contact with the wall. Constraint (2a) is the momentum balance of the point mass. Constraint (2c) is the complementarity constraint that ensures the wall can only push on the point mass when they are in contact. We can indeed see that the equations of motion (2) are the KKT conditions of the following QP:
|
| 194 |
+
|
| 195 |
+
$$
|
| 196 |
+
\mathop{\operatorname{minimize}}\limits_{{v}_{ + }}\;\frac{1}{2}m{\left( {v}_{ + } - v\right) }^{2} - u{v}_{ + } \tag{3a}
|
| 197 |
+
$$
|
| 198 |
+
|
| 199 |
+
$$
|
| 200 |
+
\text{subject to}\;\frac{x}{h} + {v}_{ + } \geq 0 \tag{3b}
|
| 201 |
+
$$
|
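For this one-dimensional system the QP (3) has a closed-form solution: the unconstrained minimizer $v + u/m$ is projected onto the half-space (3b). A minimal sketch of our own (the values of $m$ and $h$ are arbitrary illustrative defaults):

```python
def implicit_step(x, v, u, m=1.0, h=0.01):
    # One step of the QP dynamics (3): the unconstrained minimizer of (3a)
    # is v + u/m; projecting onto the half-space (3b) gives the update.
    v_next = max(v + u / m, -x / h)
    lam = m * (v_next - v) - u       # contact impulse, from momentum balance (2a)
    x_next = x + h * v_next
    return x_next, v_next, lam

# Pushing into the wall from contact: the wall resists (lambda > 0, x stays 0).
print(implicit_step(0.0, 0.0, -1.0))
# Pulling away from the wall: free motion (lambda = 0).
print(implicit_step(0.0, 0.0, +1.0))
```

Note that complementarity (2c) holds by construction: the impulse $\lambda$ is nonzero only when the projection is active, i.e., when ${x}_{+} = 0$ .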
| 202 |
+
|
| 203 |
+
## B. Smooth Analytic Approximations of Dynamics
|
| 204 |
+
|
| 205 |
+
Although we have shown that strict discontinuity is not required for the performance of the FoBG to degrade, soft relaxations of discontinuities still behave much better. To this end, we also advocate analytically providing smooth surrogates of the discontinuous dynamics in simulation, and progressively tightening the relaxation during policy optimization. To overcome the pathologies of using FoBGs, we believe that providing such a feature should be a requirement for differentiable simulators to be useful in policy optimization.
|
| 206 |
+
|
| 207 |
+

|
| 208 |
+
|
| 209 |
+
Fig. 6. Left: visualization of the wall and block examples in Example IV.1 and Example IV.2. Note that neither scheme requires a spring constant $k$ , whereas the penalty method does; this alleviates problems associated with stiff gradients. Right: results of simulating the methods of Example IV.1 and Example IV.2 at $\left( {x, v}\right) = 0$ . The resulting positions ${x}^{ + }$ are plotted as functions of the input impulse $u$ .
|
| 210 |
+
|
| 211 |
+
Previous works have provided smooth surrogates to the penalty method of contact $\left\lbrack {9,{13},{32}}\right\rbrack$ , which reasonably addresses discontinuities yet still suffers from stiffness. Instead, we show that a smooth approximation can be made to implicit time-stepping methods by using common constraint-relaxation techniques such as the log-barrier function used in interior-point methods.
|
| 212 |
+
|
| 213 |
+
Example IV.2 (Smooth Relaxation for Pushing). The optimization-based dynamics of Example IV.1 can be smoothed by replacing the non-penetration constraint (3b) with an additional log-barrier term in the objective (3a):
|
| 214 |
+
|
| 215 |
+
$$
|
| 216 |
+
\mathop{\operatorname{minimize}}\limits_{{v}_{ + }}\frac{1}{2}m{\left( {v}_{ + } - v\right) }^{2} - u{v}_{ + } - \frac{1}{\kappa }\log \left( {\frac{x}{h} + {v}_{ + }}\right) , \tag{4}
|
| 217 |
+
$$
|
| 218 |
+
|
| 219 |
+
which is an unconstrained convex optimization program, whose optimality condition can be obtained by setting the derivative of the objective (4) to 0:
|
| 220 |
+
|
| 221 |
+
$$
|
| 222 |
+
m\left( {{v}_{ + } - v}\right) = u + {\left\lbrack \kappa \left( x/h + {v}_{ + }\right) \right\rbrack }^{-1}. \tag{5}
|
| 223 |
+
$$
|
| 224 |
+
|
| 225 |
+
The optimality condition (5) can be interpreted as the momentum balance of the point mass, but the wall now acts as a force field, exerting on the object a force whose magnitude is inversely proportional to the distance to the wall. The strength of the force field is controlled by the log-barrier weight $\kappa$ . As $\kappa \rightarrow \infty$ , the solution of (4) converges to that of (3).
|
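Because the stationarity condition (5) is a quadratic in $s = x/h + {v}_{+}$ , the smoothed step can be computed in closed form. A sketch of our own (taking the positive root keeps the log-barrier argument positive; $m$ and $h$ defaults are ours):

```python
import math

def smooth_step(x, v, u, kappa, m=1.0, h=0.01):
    # Stationarity of (4): m (v+ - v) = u + 1 / (kappa (x/h + v+)).
    # Substituting s = x/h + v+ gives the quadratic
    #   m s^2 - (m (x/h + v) + u) s - 1/kappa = 0,
    # whose positive root keeps the log-barrier argument positive.
    b = m * (x / h + v) + u
    s = (b + math.sqrt(b * b + 4.0 * m / kappa)) / (2.0 * m)
    v_next = s - x / h
    return x + h * v_next, v_next

# As kappa grows, the smoothed step approaches the hard QP solution of (3).
for kappa in (1e0, 1e3, 1e6):
    print(kappa, smooth_step(0.0, 0.0, -1.0, kappa))
```

At small $\kappa$ the force field gently repels the mass even when it is pulled toward the wall; as $\kappa \rightarrow \infty$ the behavior converges to the hard complementarity solution.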
| 226 |
+
|
| 227 |
+
## C. Gradient Interpolation
|
| 228 |
+
|
| 229 |
+
Finally, we mention some recent algorithmic advances. If we can compute both the FoBG and the ZoBG using uncorrelated samples, we can consider an interpolated gradient,
|
| 230 |
+
|
| 231 |
+
$$
|
| 232 |
+
{\widehat{\nabla }}^{\left\lbrack \alpha \right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) \mathrel{\text{:=}} \alpha {\widehat{\nabla }}^{\left\lbrack 0\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) + \left( {1 - \alpha }\right) {\widehat{\nabla }}^{\left\lbrack 1\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) \tag{6}
|
| 233 |
+
$$
|
| 234 |
+
|
| 235 |
+
where $\alpha \in \left\lbrack {0,1}\right\rbrack$ . Previous works on gradient interpolation $\left\lbrack {{20},{18}}\right\rbrack$ show that we can optimally interpolate the two gradients based on their empirical variance. However, as Example III.2 shows, the empirical variance can be an unreliable estimate when the FoBG is biased under discontinuities.
|
| 236 |
+
|
| 237 |
+
To mitigate this problem, we can test the correctness of the FoBG against the unbiased ZoBG by constructing a confidence interval based on samples of the ZoBG, and choosing an optimal value of $\alpha$ subject to a chance constraint on the allowable value of the interpolated gradient [26].
|
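For intuition, the variance-optimal $\alpha$ for Eq. (6) in the unbiased, uncorrelated case can be sketched as follows. This is our own illustrative NumPy code, not the algorithm of [26]; the comment records why empirical variance alone is unreliable:

```python
import numpy as np

def interpolated_gradient(zobg_samples, fobg_samples):
    # Variance-optimal alpha for Eq. (6) when both estimators are unbiased
    # and uncorrelated: alpha* = Var[FoBG] / (Var[ZoBG] + Var[FoBG]).
    # Caveat (Example III.2): a biased FoBG can report zero empirical
    # variance, driving alpha to 0 and silently discarding the unbiased
    # ZoBG; the confidence-interval test of [26] guards against this.
    g0 = zobg_samples.mean(axis=0)
    g1 = fobg_samples.mean(axis=0)
    v0 = zobg_samples.var(axis=0, ddof=1).sum() / len(zobg_samples)
    v1 = fobg_samples.var(axis=0, ddof=1).sum() / len(fobg_samples)
    alpha = 1.0 if v0 + v1 == 0 else v1 / (v0 + v1)
    return alpha * g0 + (1.0 - alpha) * g1, alpha

# Synthetic check: both estimators centered at the true gradient 1.0,
# with the FoBG much less noisy, so alpha leans toward the FoBG.
rng = np.random.default_rng(0)
g, alpha = interpolated_gradient(rng.normal(1.0, 1.0, (2000, 1)),
                                 rng.normal(1.0, 0.1, (2000, 1)))
print(g, alpha)
```
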
| 238 |
+
|
| 239 |
+
## REFERENCES
|
| 240 |
+
|
| 241 |
+
[1] Mihai Anitescu. Optimization-based simulation of nonsmooth rigid multibody dynamics. Mathematical Programming, 105(1):113-143, 2006.
|
| 242 |
+
|
| 243 |
+
[2] Sai Praveen Bangaru, Jesse Michel, Kevin Mu, Gilbert Bernstein, Tzu-Mao Li, and Jonathan Ragan-Kelley. Systematically differentiating parametric discontinuities. ACM Trans. Graph., 40(4), July 2021. ISSN 0730-0301. doi: 10.1145/3450626. 3459775.
|
| 244 |
+
|
| 245 |
+
[3] Justin Carpentier, Guilhem Saurel, Gabriele Buondonno, Joseph Mirabel, Florent Lamiraux, Olivier Stasse, and Nicolas Mansard. The pinocchio c++ library : A fast and flexible implementation of rigid body dynamics algorithms and their analytical derivatives. In 2019 IEEE/SICE International Symposium on System Integration (SII), pages 614-619, 2019. doi: 10.1109/SII.2019.8700380.
|
| 246 |
+
|
| 247 |
+
[4] Alejandro M. Castro, Ante Qu, Naveen Kuppuswamy, Alex Alspach, and Michael Sherman. A transition-aware method for the simulation of compliant contact with regularized friction. IEEE Robotics and Automation Letters, 5(2):1859-1866, Apr 2020. ISSN 2377-3774. doi: 10.1109/lra.2020.2969933. URL http://dx.doi.org/10.1109/LRA.2020.2969933.
|
| 248 |
+
|
| 249 |
+
[5] Filipe de Avila Belbute-Peres, Kevin Smith, Kelsey Allen, Josh Tenenbaum, and J. Zico Kolter. End-to-end differentiable physics for learning and control. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/ 842424a1d0595b76ec4fa03c46e8d755-Paper.pdf.
|
| 250 |
+
|
| 251 |
+
[6] Tao Du, Yunfei Li, Jie Xu, Andrew Spielberg, Kui Wu, Daniela Rus, and Wojciech Matusik. D3\{pg\}: Deep differentiable deterministic policy gradients, 2020. URL https://openreview.net/forum?id=rkxZCJrtwS.
|
| 252 |
+
|
| 253 |
+
[7] Ryan Elandt, Evan Drumwright, Michael Sherman, and A. Ruina. A pressure field model for fast, robust approximation of net contact force and moment between nominally rigid objects. IROS, pages 8238-8245, 2019.
|
| 254 |
+
|
| 255 |
+
[8] C. Daniel Freeman, Erik Frey, Anton Raichuk, Sertan Girgin, Igor Mordatch, and Olivier Bachem. Brax - a differentiable physics engine for large scale rigid body simulation. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1), 2021. URL https://openreview.net/ forum?id=VdvDlnnjzIN.
|
| 256 |
+
|
| 257 |
+
[9] Moritz Geilinger, David Hahn, Jonas Zehnder, Moritz Bächer, Bernhard Thomaszewski, and Stelian Coros. Add: Analytically differentiable dynamics for multi-body systems with frictional contact, 2020.
|
| 258 |
+
|
| 259 |
+
[10] Saeed Ghadimi and Guanghui Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341- 2368, 2013. doi: 10.1137/120880811. URL https://doi.org/10.1137/120880811.
|
| 260 |
+
|
| 261 |
+
[11] Paula Gradu, John Hallman, Daniel Suo, Alex Yu, Naman Agarwal, Udaya Ghai, Karan Singh, Cyril Zhang, Anirudha Majumdar, and Elad Hazan. Deluca - a differentiable control library: Environments, methods, and benchmarking, 2021.
|
| 262 |
+
|
| 263 |
+
[12] Yuanming Hu, Luke Anderson, Tzu-Mao Li, Qi Sun, Nathan Carr, Jonathan Ragan-Kelley, and Frédo Durand. Difftaichi: Differentiable programming for physical simulation. ICLR, 2020.
|
| 264 |
+
|
| 265 |
+
[13] Zhiao Huang, Yuanming Hu, Tao Du, Siyuan Zhou, Hao Su, Joshua B. Tenenbaum, and Chuang Gan. Plasticinelab: A soft-body manipulation benchmark with differentiable physics. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=xCcdBRQEDW.
|
| 266 |
+
|
| 267 |
+
[14] K. H. Hunt and F. R. E. Crossley. Coefficient of Restitution Interpreted as Damping in Vibroimpact. Journal of Applied Mechanics, 42(2):440-445, 06 1975. ISSN 0021-8936. doi: 10.1115/1.3423596. URL https://doi.org/10.1115/1.3423596.
|
| 268 |
+
|
| 269 |
+
[15] Durk P. Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015.

[23] David Stewart and J.C. (Jeff) Trinkle. An implicit time-stepping scheme for rigid body dynamics with Coulomb friction. volume 1, pages 162-169, 2000. doi: 10.1109/ROBOT.2000.844054.
|
| 276 |
+
|
| 277 |
+
[16] Shakir Mohamed, Mihaela Rosca, Michael Figurnov, and Andriy Mnih. Monte Carlo gradient estimation in machine learning. Journal of Machine Learning Research, 21:1-63, 2020.
|
| 278 |
+
|
| 279 |
+
[17] Matthew T. Mason. Mechanics of Robotic Manipulation. The MIT Press, 06 2001. ISBN 9780262256629. doi: 10.7551/mitpress/4527.001.0001. URL https: //doi.org/10.7551/mitpress/4527.001.0001.
|
| 280 |
+
|
| 281 |
+
[18] Luke Metz, C. Daniel Freeman, Samuel S. Schoenholz, and Tal Kachman. Gradients are not all you need, 2021.
|
| 282 |
+
|
| 283 |
+
[19] Miguel Angel Zamora Mora, Momchil Peychev, Sehoon Ha, Martin Vechev, and Stelian Coros. Pods: Policy optimization via differentiable simulation. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 7805-7817. PMLR, 18-24 Jul 2021. URL https://proceedings.mlr.press/v139/ mora21a.html.
|
| 284 |
+
|
| 285 |
+
[20] Paavo Parmas, Carl Edward Rasmussen, Jan Peters, and Kenji Doya. PIPPS: Flexible model-based policy search robust to the curse of chaos. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 4065-4074. PMLR, 10-15 Jul 2018.
|
| 286 |
+
|
| 287 |
+
[21] John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation using stochastic computation graphs. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015.
|
| 288 |
+
|
| 289 |
+
[22] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms, 2017.
|
| 290 |
+
|
| 291 |
+
|
| 292 |
+
|
| 293 |
+
[24] R. Stribeck. Die wesentlichen Eigenschaften der Gleit- und Rollenlager. Mitteilungen über Forschungsarbeiten auf dem Gebiete des Ingenieurwesens, insbesondere aus den Laboratorien der technischen Hochschulen. Julius Springer, 1903.
|
| 294 |
+
|
| 295 |
+
[25] H. J. Terry Suh, Tao Pang, and Russ Tedrake. Bundled gradients through contact via randomized smoothing. arXiv pre-print, 2021.
|
| 296 |
+
|
| 297 |
+
[26] H. J. Terry Suh, Max Simchowitz, Kaiqing Zhang, and Russ Tedrake. Do differentiable simulators give better policy gradients?, 2022. URL https://arxiv.org/ abs/2202.00817.
|
| 298 |
+
|
| 299 |
+
[27] Richard Sutton, David Mcallester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. Adv. Neural Inf. Process. Syst, 12, 02 2000.
|
| 300 |
+
|
| 301 |
+
[28] Russ Tedrake. Drake: A planning, control, and analysis toolbox for nonlinear dynamical systems, 2022. URL http://drake.mit.edu.
|
| 302 |
+
|
| 303 |
+
[29] Arjan van der Schaft and Hans Schumacher. An Introduction to Hybrid Dynamical Systems. Springer Publishing Company, Incorporated, 1st edition, 2000. ISBN 978-1-4471-3916-4.
|
| 304 |
+
|
| 305 |
+
[30] Keenon Werling, Dalton Omens, Jeongseok Lee, Ioannis Exarchos, and C. Karen Liu. Fast and feature-complete differentiable physics for articulated rigid bodies with contact, 2021.
|
| 306 |
+
|
| 307 |
+
[31] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 3, 05 1992.
|
| 308 |
+
|
| 309 |
+
[32] Jie Xu, Viktor Makoviychuk, Yashraj Narang, Fabio Ramos, Wojciech Matusik, Animesh Garg, and Miles Macklin. Accelerated policy learning with parallel differentiable simulation, 2022. URL https://arxiv.org/abs/2204.07137.
|
papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/kMB2WAfisY/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,237 @@
| 1 |
+
§ PATHOLOGIES AND CHALLENGES OF USING DIFFERENTIABLE SIMULATORS IN POLICY OPTIMIZATION FOR CONTACT-RICH MANIPULATION
|
| 2 |
+
|
| 3 |
+
H.J. Terry Suh, Max Simchowitz, Kaiqing Zhang, Tao Pang, Russ Tedrake
|
| 4 |
+
|
| 5 |
+
Abstract-Policy search methods in Reinforcement Learning (RL) have shown impressive results in contact-rich tasks such as dexterous manipulation. However, the high variance of zero-order Monte-Carlo gradient estimates results in slow convergence and a requirement for a high number of samples. By replacing these zero-order gradient estimates with first-order ones, differentiable simulators promise faster computation time for policy gradient methods when the model is known. Contrary to this belief, we highlight some of the pathologies of using first-order gradients and show that in many physical scenarios involving rich contact, using zero-order gradients results in better performance. Building on these pathologies and lessons, we propose guidelines for designing differentiable simulators, as well as policy optimization algorithms that use these simulators. By doing so, we hope to reap the benefits of first-order gradients while avoiding the potential pitfalls.
|
| 6 |
+
|
| 7 |
+
§ I. INTRODUCTION
|
| 8 |
+
|
| 9 |
+
Reinforcement Learning (RL) is fundamentally concerned with the problem of minimizing a stochastic objective,
|
| 10 |
+
|
| 11 |
+
$$
|
| 12 |
+
\mathop{\min }\limits_{\mathbf{\theta }}F\left( \mathbf{\theta }\right) = \mathop{\min }\limits_{\mathbf{\theta }}{\mathbb{E}}_{\mathbf{w}}f\left( {\mathbf{\theta },\mathbf{w}}\right) .
|
| 13 |
+
$$
|
| 14 |
+
|
| 15 |
+
Many algorithms in RL heavily rely on zeroth-order Monte-Carlo estimation of the gradient $\nabla F\left\lbrack {{27},{22}}\right\rbrack$ . Yet, in contact-rich robotic manipulation where we have model knowledge and structure of the dynamics, it is possible to differentiate through the physics and obtain exact gradients of $f$ , which can also be used to construct a first-order estimate of $\nabla F$ . The availability of both options begs the question: given access to gradients of $f$ , which estimator should we prefer?
|
| 16 |
+
|
| 17 |
+
In stochastic optimization, the theoretical benefits of using first-order estimates of $\nabla F$ over zeroth-order ones have mainly been understood through the lens of variance and convergence rates $\left\lbrack {{10},{16}}\right\rbrack$ : the first-order estimator often (though not always) has much lower variance than the zeroth-order one, which leads to faster convergence to local minima of smooth nonconvex objective functions. However, the landscape of RL objectives that involve long-horizon sequential decision making (e.g., policy optimization) is challenging to analyze, and convergence properties in these landscapes are relatively poorly understood. In particular, contact-rich systems can display complex characteristics including nonlinearities, non-smoothness, and discontinuities (Figure 1) [29, 17, 25].
|
| 18 |
+
|
| 19 |
+
Nevertheless, lessons from convergence rate analysis tell us that there may be benefits to using the exact gradients even for these complex physical systems. Such ideas have been championed through the term "differentiable simulation", where forward simulation of physics is programmed in a manner that is consistent with automatic differentiation $\left\lbrack {8,{12},{28},{30},9}\right\rbrack$ , or computation of analytic derivatives [3]. These methods have shown promising results in decreasing computation time compared to zeroth-order methods [13, 8, 11, 6, 5, 19].
|
| 20 |
+
|
| 21 |
+
|
| 22 |
+
|
| 23 |
+
Fig. 1. Examples of simple optimization problems on physical systems. The goal is to: A. maximize the $y$ position of the ball after dropping; B. maximize the distance thrown, with a wall that results in inelastic impact; C. maximize the angular momentum transferred to the pivoting bar through collision. Second row: the original objective and the stochastic objective after randomized smoothing.
|
| 24 |
+
|
| 25 |
+
However, due to the complex characteristics of contact dynamics, we show that the belief that first-order gradients improve performance over zero-order ones is not always true for contact-rich manipulation. We illustrate this phenomenon through a couple of pathologies: first, even under sufficient regularity conditions of continuity, the choice of contact modeling can cause the first-order gradient estimate to have higher variance than the zeroth-order one. In particular, this may occur in approaches that utilize the penalty method [14], which requires stiff dynamics to realistically simulate contact [9].
|
| 26 |
+
|
| 27 |
+
In addition, we show that many contact-rich systems display nearly or strictly discontinuous behavior in the underlying landscape. The presence of such discontinuities causes the first-order gradient estimator to be biased, while the zeroth-order one remains unbiased. Furthermore, we show that even when continuous approximations are made, such approximations are often stiff and highly Lipschitz. In these settings, the first-order estimator still suffers from what we call empirical bias in finite-sample settings. The compromise of the first-order estimator in the face of more accurate descriptions of contact dynamics hints at a fundamental tension between the realism of the dynamics and the performance of first-order gradients.
|
| 28 |
+
|
| 29 |
+
From these pathologies, we suggest methods in simulation, as well as algorithms, that may improve the efficacy of first-order gradient estimates obtained using differentiable simulation. We advocate for the use of implicit contact models that are less stiff, and thus have low variance of the first-order gradient. In addition, we show they can be analytically smoothed out to mitigate discontinuities. Finally, we introduce a method to interpolate gradients that escapes these identified pitfalls.
|
| 30 |
+
|
| 31 |
+
§ II. PRELIMINARIES
|
| 32 |
+
|
| 33 |
+
§ A. POLICY OPTIMIZATION SETTING
|
| 34 |
+
|
| 35 |
+
We study a discrete-time, finite-horizon, continuous-state control problem with states $\mathbf{x} \in {\mathbb{R}}^{n}$ , inputs $\mathbf{u} \in {\mathbb{R}}^{m}$ , transition function $\phi : {\mathbb{R}}^{n} \times {\mathbb{R}}^{m} \rightarrow {\mathbb{R}}^{n}$ , and horizon $H \in \mathbb{N}$ . Given a sequence of costs ${c}_{h} : {\mathbb{R}}^{n} \times {\mathbb{R}}^{m} \rightarrow \mathbb{R}$ , a family of policies ${\pi }_{h}\left( {\cdot , \cdot }\right) : {\mathbb{R}}^{n} \times {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{m}$ parameterized by $\mathbf{\theta } \in {\mathbb{R}}^{d}$ , and a sequence of injected noise terms ${\mathbf{w}}_{1 : H} \in {\left( {\mathbb{R}}^{m}\right) }^{H}$ , we define the cost-to-go functions
|
| 36 |
+
|
| 37 |
+
$$
|
| 38 |
+
{V}_{h}\left( {{\mathbf{x}}_{h},{\mathbf{w}}_{h : H},\mathbf{\theta }}\right) = \mathop{\sum }\limits_{{{h}^{\prime } = h}}^{H}{c}_{{h}^{\prime }}\left( {{\mathbf{x}}_{{h}^{\prime }},{\mathbf{u}}_{{h}^{\prime }}}\right) ,
|
| 39 |
+
$$
|
| 40 |
+
|
| 41 |
+
$$
|
| 42 |
+
\text{ s.t. }{\mathbf{x}}_{{h}^{\prime } + 1} = \phi \left( {{\mathbf{x}}_{{h}^{\prime }},{\mathbf{u}}_{{h}^{\prime }}}\right) ,{\mathbf{u}}_{{h}^{\prime }} = \pi \left( {{\mathbf{x}}_{{h}^{\prime }},\mathbf{\theta }}\right) + {\mathbf{w}}_{{h}^{\prime }},{h}^{\prime } \geq h\text{ . }
|
| 43 |
+
$$
|
| 44 |
+
|
| 45 |
+
Our aim is to minimize the policy optimization objective
|
| 46 |
+
|
| 47 |
+
$$
|
| 48 |
+
F\left( \mathbf{\theta }\right) \mathrel{\text{ := }} {\mathbb{E}}_{{\mathbf{x}}_{1} \sim \rho }{\mathbb{E}}_{{\mathbf{w}}_{h}\overset{\text{ i.i.d. }}{ \sim }p}{V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{w}}_{1 : H},\mathbf{\theta }}\right) , \tag{1}
|
| 49 |
+
$$
|
| 50 |
+
|
| 51 |
+
where $\rho$ is a distribution over initial states ${\mathbf{x}}_{1}$ , and ${\mathbf{w}}_{1},\ldots ,{\mathbf{w}}_{H}$ are i.i.d. according to $p$ which we assume to be a zero-mean Gaussian with covariance ${\sigma }^{2}{I}_{n}$ .
|
| 52 |
+
|
| 53 |
+
§ B. ZEROTH-ORDER ESTIMATOR:
|
| 54 |
+
|
| 55 |
+
The policy gradient can be estimated using only samples of the function values [31].

Definition II.1. Given a single zeroth-order estimate of the policy gradient ${\widehat{\nabla }}^{\left\lbrack 0\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right)$ , we define the zeroth-order batched gradient (ZoBG) ${\bar{\nabla }}^{\left\lbrack 0\right\rbrack }F\left( \mathbf{\theta }\right)$ as the sample mean,

$$
{\widehat{\nabla }}^{\left\lbrack 0\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) \mathrel{\text{ := }} \frac{1}{{\sigma }^{2}}{V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{w}}_{1 : H}^{i},\mathbf{\theta }}\right) \left\lbrack {\mathop{\sum }\limits_{{h = 1}}^{H}{\mathrm{D}}_{\mathbf{\theta }}\pi {\left( {\mathbf{x}}_{h}^{i},\mathbf{\theta }\right) }^{\top }{\mathbf{w}}_{h}^{i}}\right\rbrack
$$

$$
{\bar{\nabla }}^{\left\lbrack 0\right\rbrack }F\left( \mathbf{\theta }\right) \mathrel{\text{ := }} \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\widehat{\nabla }}^{\left\lbrack 0\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) ,
$$

where ${\mathbf{x}}_{h}^{i}$ is the state at time $h$ of the trajectory induced by the noise ${\mathbf{w}}_{1 : H}^{i}$ ; $i$ is the index of the sample trajectory; and ${\mathrm{D}}_{\mathbf{\theta }}\pi$ is the Jacobian matrix $\partial \pi /\partial \mathbf{\theta } \in {\mathbb{R}}^{m \times d}$ .

The hat notation denotes a per-sample Monte-Carlo estimate, and the bar notation a sample mean. The ZoBG is also referred to as the REINFORCE [31], score-function, or likelihood-ratio gradient. In practice, a baseline term $b$ is subtracted from ${V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{w}}_{1 : H}^{i},\mathbf{\theta }}\right)$ for variance reduction; one example is the zero-noise rollout $b = {V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{0}}_{1 : H},\mathbf{\theta }}\right)$ .
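
To make Definition II.1 concrete, the following is a minimal numerical sketch of the ZoBG with the zero-noise-rollout baseline. The callables `phi`, `cost`, `pi_theta`, and `d_theta_pi` are illustrative stand-ins for a simulator's dynamics, cost, policy, and policy Jacobian; they are not part of any particular library.

```python
import numpy as np

def zobg(phi, cost, pi_theta, d_theta_pi, x1, theta, H, m, sigma, N, rng):
    """Zeroth-order batched gradient (Definition II.1) with the
    zero-noise-rollout baseline b = V_1(x_1, 0_{1:H}, theta)."""
    def rollout(noise):
        x, V = x1, 0.0
        score = np.zeros_like(theta)            # sum_h D_theta pi(x_h)^T w_h
        for h in range(H):
            u = pi_theta(x, theta) + noise[h]
            V += cost(x, u)
            score += d_theta_pi(x, theta).T @ noise[h]
            x = phi(x, u)
        return V, score

    b, _ = rollout(np.zeros((H, m)))            # baseline: zero-noise rollout
    grads = []
    for _ in range(N):
        noise = sigma * rng.standard_normal((H, m))
        V, score = rollout(noise)
        grads.append((V - b) / sigma**2 * score)
    return np.mean(grads, axis=0)
```

On a toy problem with $H = 1$ , $c(\mathbf{x},\mathbf{u}) = \mathbf{u}$ , and $\pi(\mathbf{x},\mathbf{\theta}) = \mathbf{\theta}$ , the estimate concentrates around the true gradient $\nabla F(\mathbf{\theta}) = 1$ .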

§ C. FIRST-ORDER ESTIMATOR

In differentiable simulators, the gradients of the dynamics $\phi$ and costs ${c}_{h}$ are available almost surely (i.e., with probability one). Hence, one may compute the exact gradient ${\nabla }_{\mathbf{\theta }}{V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{w}}_{1 : H},\mathbf{\theta }}\right)$ by automatic differentiation and average them to estimate $\nabla F\left( \mathbf{\theta }\right)$ .
Definition II.2. Given a single first-order gradient estimate ${\widehat{\nabla }}^{\left\lbrack 1\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right)$ , we define the first-order batched gradient (FoBG) as the sample mean:

$$
{\widehat{\nabla }}^{\left\lbrack 1\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) \mathrel{\text{ := }} {\nabla }_{\mathbf{\theta }}{V}_{1}\left( {{\mathbf{x}}_{1},{\mathbf{w}}_{1 : H}^{i},\mathbf{\theta }}\right)
$$

$$
{\bar{\nabla }}^{\left\lbrack 1\right\rbrack }F\left( \mathbf{\theta }\right) \mathrel{\text{ := }} \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\widehat{\nabla }}^{\left\lbrack 1\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) .
$$

The FoBG is also referred to as the reparametrization gradient [15], the pathwise derivative [21], or Backpropagation Through Time (BPTT).

§ III. PITFALLS OF FIRST-ORDER GRADIENTS

In this section, we show pathologies in contact-rich systems for which the FoBG can perform worse than the ZoBG.

§ A. BIAS UNDER DISCONTINUITIES

Under standard regularity conditions, it is well-known that both estimators are unbiased estimators of the true gradient $\nabla F\left( \mathbf{\theta }\right)$ . However, care must be taken to define these conditions precisely, as such conditions are broken for contact-rich systems. Fortunately, the ZoBG is still unbiased under mild assumptions,

$$
\mathbb{E}\left\lbrack {{\bar{\nabla }}^{\left\lbrack 0\right\rbrack }F\left( \mathbf{\theta }\right) }\right\rbrack = \nabla F\left( \mathbf{\theta }\right) .
$$

In contrast, the FoBG requires stronger continuity conditions for unbiasedness; under Lipschitz continuity, however, it is indeed unbiased.
Lemma III.1. If $\phi \left( {\cdot , \cdot }\right)$ is locally Lipschitz and ${c}_{h}\left( {\cdot , \cdot }\right) \in {C}^{\infty }$ , then ${\bar{\nabla }}^{\left\lbrack 1\right\rbrack }F\left( \mathbf{\theta }\right)$ is defined almost surely, and

$$
\mathbb{E}\left\lbrack {{\bar{\nabla }}^{\left\lbrack 1\right\rbrack }F\left( \mathbf{\theta }\right) }\right\rbrack = \nabla F\left( \mathbf{\theta }\right) .
$$

Lemma III.1 tells us that FoBG can fail when applied to discontinuous landscapes. We illustrate a simple case of biasedness through a counterexample.
Example III.2 (Heaviside). $\left\lbrack {2,{25}}\right\rbrack$ Consider the Heaviside function,

$$
f\left( {\mathbf{\theta },\mathbf{w}}\right) = H\left( {\mathbf{\theta } + \mathbf{w}}\right) ,\;H\left( t\right) = {\mathbb{1}}_{t \geq 0} ,
$$

whose stochastic objective becomes the error function

$$
F\left( \mathbf{\theta }\right) = {\mathbb{E}}_{\mathbf{w}}\left\lbrack {H\left( {\mathbf{\theta } + \mathbf{w}}\right) }\right\rbrack = \operatorname{erf}\left( {-\mathbf{\theta };{\sigma }^{2}}\right) .
$$

However, since ${\nabla }_{\mathbf{\theta }}H\left( {\mathbf{\theta } + \mathbf{w}}\right) = 0$ for all $\mathbf{\theta } \neq - \mathbf{w}$ , we have ${\mathbb{E}}_{{\mathbf{w}}_{i}}\delta \left( {\mathbf{\theta } + {\mathbf{w}}_{i}}\right) = 0$ . Hence, the Law of Large Numbers does not hold, and the FoBG is biased: the gradient of the stochastic objective, a Gaussian, is non-zero at every $\mathbf{\theta }$ . We further note that the empirical variance of the FoBG estimator in this example is zero. The ZoBG, on the other hand, escapes this problem and provides an unbiased estimate, since it integrates over finite intervals that capture the mass of the delta.
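
A short numerical check of this counterexample, assuming a scalar $\mathbf{\theta}$ and unit-variance Gaussian noise: every per-sample first-order gradient is exactly zero, while the zeroth-order estimate recovers the Gaussian-shaped true gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, N = 0.0, 1.0, 100_000
w = sigma * rng.standard_normal(N)

fobg = 0.0                                   # d/dtheta H(theta + w) = 0 almost surely
zobg = np.mean((theta + w >= 0) * w) / sigma**2
true_grad = np.exp(-theta**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

print(fobg, zobg, true_grad)                 # FoBG is 0; ZoBG tracks the true gradient
```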

Fig. 2. From left: Heaviside objective $f\left( {\mathbf{\theta },\mathbf{w}}\right)$ and stochastic objective $F\left( \mathbf{\theta }\right)$ , empirical values of the gradient estimates, and their empirical variance.

§ B. THE "EMPIRICAL BIAS" PHENOMENON

One might argue that strict discontinuity is simply an artifact of modeling choices in simulators; indeed, many simulators approximate discontinuous dynamics as a limit of continuous ones with growing Lipschitz constant $\left\lbrack {9,7}\right\rbrack$ . In this section, we explain how this can lead to a phenomenon we call empirical bias, where the FoBG appears to have low empirical variance but is still highly inaccurate; i.e., it "looks" biased when a finite number of samples is used. Through this phenomenon, we claim that the performance degradation of first-order gradient estimates does not require strict discontinuity, but is also present in continuous, yet stiff, approximations of discontinuities.

Definition III.3 (Empirical bias). Let $\mathbf{z}$ be a vector-valued random variable with $\mathbb{E}\left\lbrack {\parallel \mathbf{z}\parallel }\right\rbrack < \infty$ . We say $\mathbf{z}$ has $\left( {\beta ,\Delta ,S}\right)$ -empirical bias if there is a random event $\mathcal{E}$ such that $\Pr \left\lbrack \mathcal{E}\right\rbrack \geq 1 - \beta$ , and $\parallel \mathbb{E}\left\lbrack {\mathbf{z} \mid \mathcal{E}}\right\rbrack - \mathbb{E}\left\lbrack \mathbf{z}\right\rbrack \parallel \geq \Delta$ , but $\parallel \mathbf{z} - \mathbb{E}\left\lbrack {\mathbf{z} \mid \mathcal{E}}\right\rbrack \parallel \leq S$ almost surely on $\mathcal{E}$ .
A paradigmatic example of empirical bias is a random scalar $\mathbf{z}$ which takes the value 0 with probability $1 - \beta$ , and $\frac{1}{\beta }$ with probability $\beta$ . Setting $\mathcal{E} = \{ \mathbf{z} = 0\}$ , we see $\mathbb{E}\left\lbrack \mathbf{z}\right\rbrack = 1$ , $\mathbb{E}\left\lbrack {\mathbf{z} \mid \mathcal{E}}\right\rbrack = 0$ , and so $\mathbf{z}$ satisfies $\left( {\beta ,1,0}\right)$ -empirical bias. Note that $\operatorname{Var}\left\lbrack \mathbf{z}\right\rbrack = 1/\beta - 1$ ; in fact, small- $\beta$ empirical bias implies large variance more generally.
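
The paradigmatic example above is easy to check by simulation; the constants below are arbitrary.

```python
import numpy as np

beta = 0.01
rng = np.random.default_rng(0)
# z = 0 with probability 1 - beta, z = 1/beta with probability beta.
z = np.where(rng.random(1_000_000) < beta, 1.0 / beta, 0.0)

print(z.mean())   # close to E[z] = 1
print(z.var())    # close to Var[z] = 1/beta - 1 = 99
```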
Lemma III.4. Suppose $\mathbf{z}$ has $\left( {\beta ,\Delta ,S}\right)$ -empirical bias. Then $\operatorname{Var}\left\lbrack \mathbf{z}\right\rbrack \geq \frac{{\Delta }_{0}^{2}}{\beta }$ , where ${\Delta }_{0} \mathrel{\text{ := }} \max \{ 0,\left( {1 - \beta }\right) \Delta - \beta \parallel \mathbb{E}\left\lbrack \mathbf{z}\right\rbrack \parallel \}$ .

Empirical bias naturally arises from discontinuities, or from stiff continuous approximations of them.

Example III.5 (Coulomb friction). The Coulomb model of friction is discontinuous in the relative tangential velocity between two bodies. In many simulators $\left\lbrack {9,4}\right\rbrack$ , it is common to consider a continuous approximation instead. We idealize such approximations through a continuous, piecewise-linear relaxation of the Heaviside, parametrized by the width $\nu$ of the middle linear region (which corresponds to the slip tolerance):

$$
{\bar{H}}_{\nu }\left( t\right) = \left\{ {\begin{array}{ll} {2t}/\nu & \text{ if }\left| t\right| \leq \nu /2 \\ {2H}\left( t\right) - 1 & \text{ else } \end{array}.}\right.
$$

In practice, lower values of $\nu$ lead to more realistic behavior in simulation [28], but this has adverse effects for empirical bias. Considering ${f}_{\nu }\left( {\mathbf{\theta },\mathbf{w}}\right) = {\bar{H}}_{\nu }\left( {\mathbf{\theta } + \mathbf{w}}\right)$ , we have ${F}_{\nu }\left( \mathbf{\theta }\right) = {\mathbb{E}}_{\mathbf{w}}\left\lbrack {{\bar{H}}_{\nu }\left( {\mathbf{\theta } + \mathbf{w}}\right) }\right\rbrack \mathrel{\text{ := }} \operatorname{erf}\left( {\nu /2 - \theta ;{\sigma }^{2}}\right)$ . In particular, setting ${c}_{\sigma } \mathrel{\text{ := }} \frac{1}{\sqrt{2\pi }\sigma }$ , at $\mathbf{\theta } = \nu /2$ we have $\nabla {F}_{\nu }\left( \mathbf{\theta }\right) = {c}_{\sigma }$ , whereas, with probability at least $1 - {c}_{\sigma }\nu$ , $\nabla {f}_{\nu }\left( {\mathbf{\theta },\mathbf{w}}\right) = 0$ . Hence, the FoBG has $\left( {{c}_{\sigma }\nu ,{c}_{\sigma },0}\right)$ -empirical bias, and its variance scales with $1/\nu$ as $\nu \rightarrow 0$ . The limiting case $\nu = 0$ , corresponding to the Coulomb model, is the Heaviside of Example III.2, where the limit of high empirical bias and variance becomes biased in expectation (but, surprisingly, has zero variance!). We empirically illustrate this effect in Figure 3. We also note that more complicated models of friction (e.g., those incorporating the Stribeck effect [24]) would suffer from similar problems.
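
The scaling in this example can be reproduced directly; the sketch below samples $\nabla {f}_{\nu }\left( {\mathbf{\theta },\mathbf{w}}\right)$ at $\mathbf{\theta } = \nu /2$ and reports the fraction of exactly-zero gradients and the empirical variance.

```python
import numpy as np

def grad_relaxed_heaviside(t, nu):
    # d/dt of the piecewise-linear relaxation: 2/nu inside |t| <= nu/2, 0 outside.
    return np.where(np.abs(t) <= nu / 2, 2.0 / nu, 0.0)

rng = np.random.default_rng(0)
sigma, N = 1.0, 1_000_000
for nu in (1.0, 0.1, 0.01):
    g = grad_relaxed_heaviside(nu / 2 + sigma * rng.standard_normal(N), nu)
    print(nu, (g == 0).mean(), g.var())   # zero fraction -> 1, variance grows ~ 1/nu
```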

Example III.6 (Discontinuity in geometry). Another source of discontinuity in simulators is the discontinuity of surface normals. We show this in Figure 4, where balls that collide with a rectangular geometry create discontinuities. It is possible to make a continuous relaxation [7] by considering a smoother geometry, depicted by the addition of the dome in Figure 4. While this makes the FoBG no longer biased asymptotically, the stiffness of the relaxation still results in high empirical bias.

Fig. 3. Top: illustration of the physical system and the relaxation of Coulomb friction. Bottom: the values of the estimators and their empirical variances as functions of the number of samples and the slip tolerance. Values of the FoBG are zero in low-sample regimes due to empirical bias. As $\nu \rightarrow 0$ , the empirical variance of the FoBG goes to zero, which appears empty in the log scale. The expected variance, however, blows up, as it scales with $1/\nu$ .
Fig. 4. Left: example of ball hitting the wall. The green trajectories hit a rectangular wall, displaying discontinuities. Right: the pink trajectories collide with the dome on top, and show continuous but stiff behavior.

§ C. HIGH VARIANCE FROM STIFFNESS

Even without the phenomenon of empirical bias, we show that certain choices of contact models can cause the FoBG to suffer from high variance. In particular, approximating rigid contact with high-stiffness spring models (i.e., the penalty method) can cause the gradient to have a high norm.

Example III.7 (Pushing with stiff contact). We demonstrate this phenomenon through a simple 1D pushing example in Figure 5, where the ZoBG has lower variance than the FoBG as stiffness increases, until numerical semi-implicit integration becomes unstable under a fixed timestep.
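
The mechanism can be seen in isolation: with a penalty force $f(x) = k\max(0, -x)$ , per-sample gradients of the contact force near the boundary are either $0$ or of magnitude $k$ , so their variance scales like ${k}^{2}$ . A minimal sketch (the penalty force and the distribution of states hovering near the wall are illustrative, not the paper's exact setup):

```python
import numpy as np

def penalty_force_grad(x, k):
    # d/dx of f(x) = k * max(0, -x): -k in contact (x < 0), 0 out of contact.
    return np.where(x < 0, -k, 0.0)

rng = np.random.default_rng(0)
x = 0.01 * rng.standard_normal(100_000)       # states hovering near the wall at x = 0
for k in (1e2, 1e3, 1e4):
    print(k, penalty_force_grad(x, k).var())  # variance scales like k**2
```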

Fig. 5. The variance of the gradient of ${V}_{1}$ , with running cost ${c}_{h} = \parallel {\mathbf{x}}_{h} - {\mathbf{x}}^{g}{\parallel }^{2}$ , with respect to the input trajectory as the spring constant $k$ increases. Mass $m$ and damping coefficient $c$ are fixed.

§ IV. TACKLING THE PATHOLOGIES: A PATH FORWARD

In this section, we comment on methods that can alleviate the pathologies that were found in the previous section.

§ A. LESS STIFF FORMULATIONS OF CONTACT DYNAMICS

To avoid high variance of the FoBG, we must ensure that the norm of the gradient stays low. Yet, as illustrated by Example III.7, approximating contact using stiff springs, as done in works that model contact with the penalty method, inevitably forces a trade-off between gradient stiffness and physical realism.

Therefore, we advocate less stiff contact models based on implicit time-stepping [23], whose per-time-step computation relies on solving optimization problems such as the Linear Complementarity Problem (LCP), which can be further relaxed into convex Quadratic Programs (QPs) [1]. The derivatives of such systems can be obtained via the implicit function theorem by differentiating through the optimality conditions of these problems. We give one example of such a convex QP below. Correctly using gradients from implicit time-stepping can vastly improve the efficacy of the FoBG by ensuring that its norm stays reasonably bounded.

Example IV.1 (Implicit time-stepping for pushing). We illustrate implicit time-stepping with a one-dimensional example consisting of a point mass and a wall. The state of the system is $\left( {x,v}\right) \in {\mathbb{R}}^{2}$ , where $x$ is the position and $v$ the velocity of the point mass. The non-penetrable wall occupies $x \leq 0$ .

The equations of motion of the system are

$$
m\left( {{v}_{ + } - v}\right) = u + \lambda \tag{2a}
$$

$$
{x}_{ + } = x + h{v}_{ + }, \tag{2b}
$$

$$
0 \leq {x}_{ + } \bot \lambda \geq 0, \tag{2c}
$$

where $\left( {{x}_{ + },{v}_{ + }}\right)$ is the system state at the next time step; $h$ is the step size; $m$ is the mass; $u$ is the impulse applied to the point mass by actuation; and $\lambda$ is the impulse due to contact with the wall. Equation (2a) is the momentum balance of the point mass. Constraint (2c) is the complementarity constraint ensuring that the wall can only push on the point mass when the two are in contact. We can indeed see that the equations of motion (2) are the KKT conditions of the following QP:

$$
\mathop{\operatorname{minimize}}\limits_{{v}_{ + }}\;\frac{1}{2}m{\left( {v}_{ + } - v\right) }^{2} - u{v}_{ + } \tag{3a}
$$

$$
\text{ subject to }\frac{x}{h} + {v}_{ + } \geq 0 \tag{3b}
$$
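
For this one-dimensional QP, the KKT conditions can be solved in closed form: the unconstrained minimizer of (3a) is $v + u/m$ , which is then clipped by the non-penetration constraint (3b). A small sketch (parameter values are illustrative):

```python
def implicit_step(x, v, u, m=1.0, h=0.01):
    """One implicit time step of the point-mass-vs.-wall QP (3).
    Closed form: project the unconstrained velocity onto constraint (3b)."""
    v_free = v + u / m                 # unconstrained minimizer of (3a)
    v_next = max(v_free, -x / h)       # enforce x/h + v_next >= 0
    lam = m * (v_next - v) - u         # contact impulse; positive only in contact
    x_next = x + h * v_next
    return x_next, v_next, lam

# Free flight: the wall is inactive, lambda = 0.
print(implicit_step(x=1.0, v=0.0, u=0.5))
# Pushing into the wall from contact: x_next stays at 0, lambda > 0.
print(implicit_step(x=0.0, v=0.0, u=-0.5))
```

Note that the derivative $\partial {v}_{ + }/\partial u$ is $1/m$ out of contact and $0$ in contact, bounded independently of any stiffness constant, in contrast to the penalty method.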
§ B. SMOOTH ANALYTIC APPROXIMATIONS OF DYNAMICS

Although we have shown that strict discontinuity is not required for the performance of the FoBG to degrade, soft relaxations of discontinuities still behave much better. To this end, we also advocate analytically providing smooth surrogates of the discontinuous dynamics in simulation, and gradually tightening the relaxation during policy optimization. To overcome the pathologies of using FoBGs, we believe that providing such a feature should be a requirement for differentiable simulators to be useful in policy optimization.

Fig. 6. Left: visualization of the wall and block examples in Example IV.1 and Example IV.2. Note that neither scheme requires the spring constant $k$ , whereas the penalty method does; this alleviates problems associated with stiffness of the gradients. Right: results of simulating the methods of Example IV.1 and Example IV.2 at $\left( {x,v}\right) = 0$ . The resulting positions ${x}_{ + }$ are plotted as functions of the input impulse $u$ .

Previous works have provided smooth surrogates to the penalty method of contact $\left\lbrack {9,{13},{32}}\right\rbrack$ , which reasonably addresses discontinuities, yet still suffers from stiffness. Instead, we show that a smooth approximation can be made to implicit time-stepping methods by using common constraint-relaxation techniques, such as the log-barrier function used in interior-point methods.

Example IV.2 (Smooth relaxation for pushing). The optimization-based dynamics of Example IV.1 can be smoothed by replacing the non-penetration constraint (3b) with an additional log-barrier term in the objective (3a):

$$
\mathop{\operatorname{minimize}}\limits_{{v}_{ + }}\frac{1}{2}m{\left( {v}_{ + } - v\right) }^{2} - u{v}_{ + } - \frac{1}{\kappa }\log \left( {\frac{x}{h} + {v}_{ + }}\right) , \tag{4}
$$

which is an unconstrained convex optimization problem, whose optimality condition can be obtained by setting the derivative of the objective (4) to zero:

$$
m\left( {{v}_{ + } - v}\right) = u + {\left\lbrack \kappa \left( x/h + {v}_{ + }\right) \right\rbrack }^{-1}. \tag{5}
$$

The optimality condition (5) can be interpreted as the momentum balance of the point mass, but the wall now acts as a force field, exerting on the object a force whose magnitude is inversely proportional to the distance to the wall. The strength of the force field is controlled by the log-barrier weight $\kappa$ . As $\kappa \rightarrow \infty$ , the solution of (4) converges to that of (3).
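
Because (5) is scalar, it can also be solved in closed form: substituting $s = x/h + {v}_{ + }$ turns it into the quadratic $m{s}^{2} - bs - 1/\kappa = 0$ with $b = m\left( {x/h + v}\right) + u$ , whose positive root is the log-barrier solution. A sketch:

```python
import math

def smoothed_step(x, v, u, kappa, m=1.0, h=0.01):
    """Solve the log-barrier optimality condition (5) for v_plus in closed form
    via the substitution s = x/h + v_plus."""
    b = m * (x / h + v) + u
    s = (b + math.sqrt(b * b + 4 * m / kappa)) / (2 * m)   # positive root (s > 0)
    return s - x / h

# As kappa grows, the smoothed step approaches the hard QP solution max(v + u/m, -x/h).
for kappa in (1e0, 1e2, 1e4, 1e6):
    print(kappa, smoothed_step(x=0.0, v=0.0, u=-0.5, kappa=kappa))
```

As $\kappa$ grows the smoothed step approaches the hard solution, while for finite $\kappa$ the dynamics, and hence the gradients, remain smooth.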

§ C. GRADIENT INTERPOLATION

Finally, we mention some recent advances on the algorithm side. If we can compute both the FoBG and the ZoBG using uncorrelated samples, we can consider an interpolated gradient,

$$
{\widehat{\nabla }}^{\left\lbrack \alpha \right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) \mathrel{\text{ := }} \alpha {\widehat{\nabla }}^{\left\lbrack 0\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) + \left( {1 - \alpha }\right) {\widehat{\nabla }}^{\left\lbrack 1\right\rbrack }{F}_{i}\left( \mathbf{\theta }\right) , \tag{6}
$$

where $\alpha \in \left\lbrack {0,1}\right\rbrack$ . Previous works on gradient interpolation $\left\lbrack {{20},{18}}\right\rbrack$ show that we can optimally interpolate the two gradients based on their empirical variance. However, as Example III.2 shows, the empirical variance can be an unreliable estimate when the FoBG is biased under discontinuities.
To mitigate this problem, we can test the correctness of the FoBG against the unbiased ZoBG by constructing a confidence interval based on samples of the ZoBG, and choosing an optimal value of $\alpha$ subject to a chance constraint on the allowable value of the interpolated gradient [26].
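
As a simplified sketch of this idea (the exact chance-constrained formulation is in [26]; the confidence radius `eps` and the projection rule below are illustrative):

```python
import numpy as np

def interpolated_gradient(fobg_samples, zobg_samples, eps):
    """alpha-interpolation (6): trust the FoBG only insofar as it agrees with
    the unbiased ZoBG up to a confidence radius eps around the ZoBG mean."""
    g0 = zobg_samples.mean(axis=0)
    g1 = fobg_samples.mean(axis=0)
    gap = np.linalg.norm(g1 - g0)
    # Smallest alpha (largest FoBG weight) keeping the interpolant within eps of g0:
    # ||g_alpha - g0|| = (1 - alpha) * gap <= eps.
    alpha = 0.0 if gap <= eps else 1.0 - eps / gap
    return alpha * g0 + (1 - alpha) * g1, alpha
```

When the FoBG disagrees strongly with the ZoBG confidence region, as with the biased Heaviside gradient, the rule drives $\alpha$ toward $1$ and falls back on the ZoBG.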
papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/srVrKQl8X7R/Initial_manuscript_md/Initial_manuscript.md
# Learning Slip with a Patterned Capacitive Tactile Sensor
Yuri Gloumakov, Member, IEEE, Tae Myung Huh, Member, IEEE, Hannah Stuart, Member, IEEE

Abstract- The task of dynamically manipulating objects within a robotic hand presents ongoing challenges. In particular, friction and slip often dictate task success yet remain difficult to measure directly, quickly, and accurately; this includes both the detection of slip events and slip speed. Complex solutions exist that involve training a control policy using neural networks, with image-based sensors or external cameras, or when contact geometry can be inferred. Using only a capacitive sensor with a `nib`-patterned structure, we attempt to demonstrate the sensor's ability to detect slip speed during uninterrupted contact where geometry cannot be inferred, while benefitting from faster sensing, cheaper construction, and a smaller profile. We hope that by collecting vibration amplitude and frequency and applying supervised learning techniques to directly measure slip speed, we can guide an implementation of manipulation controls without a priori assumptions about object properties, such as friction or geometry.
Index Terms-Tactile Sensing, In-Hand Manipulation.
## I. INTRODUCTION

Robotic within-hand manipulation [1] allows robot systems to manipulate objects in tight spaces and avoid gross arm movements, a particularly useful ability in cluttered or constrained environments. However, due to uncertainties in object properties, like friction, successful reorientation can prove to be a challenging task. Some approaches have used inverse kinematics with a highly constrained rigid hand, taking advantage of overcoming friction during sliding to reorient an object [2], while others have exploited compliant or under-actuated systems [3]. However, controlling for object slip directly, without such models, can enable much faster reorientation of unknown objects, an important feature in situations that necessitate fast response times, such as assembly lines or active disaster zones.
Thus far, aggressive dynamic manipulation has been accomplished using learned control policies, whether exploring real-world object contacts [4] or in simulation [5]. However, using a nibbed capacitive tactile sensor developed by Huh et al. [6] (Fig. 1) we hope to demonstrate that dynamic manipulations can be performed using simple control policies by only training for object motion recognition, thus making the sensor more generalizable to different scenarios while reducing the need for complex computing.

In this letter we explore the sensor's ability to detect the speed of a slipping object as it slides across the sensor. While incipient slip detection has been demonstrated in various systems [7], [8], slip detection and regrasping can be leveraged to quickly reposition an object within the hand with minimal arm or finger movement [9], [10]. Meanwhile, steady-state slipping speed has only been demonstrated when objects are either much smaller than the sensor or not making contact with its entire surface [11], [12], so that the geometry or forces of an edge contact can be tracked over time. However, objects in a factory setting or during sorting are often fully flush and flat against the sensor, and controlling the slip is necessary for dynamic manipulation. We hypothesize that the deflection of the sensor's nib interface undergoes a stick-slip interaction, yielding characteristic frequencies and deflection amplitudes unique to each combination of material and slip speed.


Figure 1. On the left, the sensor can be seen mounted on the tip of a robotic finger. The tactile sensor is made up of a grid of nibs according to the dimensions in (a), where the deflection of each nib is tracked in 4 directions. These deflections are used to track pressure (b), shear (c), and vibrations (d) that can be used to detect slipping. The conductive fabric that is embedded in the nibs and deflected changes the capacitive signal between itself and the electrodes. Figure images were borrowed from [6].
## II. METHODS

To discover how the sensor detects slipping speed, we created a testbed that allowed us to test different slipping speeds and materials. The testbed was designed to maintain a constant distance between the sensor and a sliding object (Fig. 2); keeping the pressure constant was another consideration. Three rectangular objects made of different materials were tested: cherry, basswood, and acrylic, with dimensions of ${200} \times {40} \times 3\;\mathrm{{mm}}$ . The objects were pulled ${134}\mathrm{\;{mm}}$ by a string attached to a UR-10 robotic arm. The objects were then pushed back to the starting point and pulled again while sensor data was recorded at ${600}\mathrm{\;{Hz}}$ . This push-pull cycle lasted for 2 minutes for each speed setting, and speeds were varied from 10 to ${100}\mathrm{\;{mm}}/\mathrm{s}$ in $5\mathrm{\;{mm}}/\mathrm{s}$ increments. Since only the steady-state speed regime was of interest, the data from the acceleration and deceleration phases were spliced out. The termination of acceleration and the initiation of deceleration were estimated to occur within the first $1/8$ and the last $1/6$ of the slipping period, respectively, with a conservative margin.
---
Y. Gloumakov, T. Huh, and H. Stuart are with the Mechanical Engineering Department, University of California, Berkeley, CA 06511 USA, (email: \{yurigloum, thuh, hstuart\} @berkeley.edu).
---


Figure 2. The left figure depicts the testbed that hosts the sensor and allows the object to slide through, rolling over a set of smooth bearings. On the right, the robot arm can be seen pulling on the object by a string. The acrylic piece is placed on the end effector to push the object back into place.

A feature of the nibbed sensor is its Programmable System on Chip (PSoC) infrastructure, which enables us to couple any desired set of electrodes, yielding a faster signal at the cost of resolution. Because we constrained the slip to a single linear direction, the nib deflection only needed to be tracked along a single axis (Fig. 3). Using a fast Fourier transform (FFT), the signal was converted into the frequency spectrum. Linear regressions were used to build models from the amplitude signal and the frequency spectrum separately, to discover a fit that could identify the speed and material properties from a new signal. To obtain the frequency spectrum, a 300-frame sliding window was used, advanced by one frame at a time (i.e., with maximal overlap between successive windows) to maximize the amount of extracted data.
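
As a sketch of this feature-extraction step (the window length, hop, and function names are illustrative; the paper's exact pipeline may differ):

```python
import numpy as np

def sliding_fft_features(signal, window=300, fs=600, hop=1):
    """Magnitude spectra over a sliding window: a 300-frame window at 600 Hz
    advanced `hop` frames at a time."""
    feats = []
    for start in range(0, len(signal) - window + 1, hop):
        seg = signal[start:start + window]
        feats.append(np.abs(np.fft.rfft(seg)) / window)  # one-sided magnitude spectrum
    freqs = np.fft.rfftfreq(window, d=1.0 / fs)          # 0 .. fs/2 = 300 Hz
    return np.array(feats), freqs
```

The one-sided spectrum of a 300-sample window has 151 bins at a 2 Hz resolution; these bins can then be averaged into coarser frequency bands for the regression.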

Due to steady-state slipping, the frequency responses were regarded as independent samples. Here, a frequency sample is a vector of length $n$ , which corresponds to 300 (half the sampling rate) divided by the bin size, which was varied from 1 to 300, and where the vector values correspond to their respective frequency amplitudes. Both the frequency response and the raw signal amplitude were averaged over each pull cycle; this meant that during the 2-minute data collection, the slower-speed trials yielded fewer cycles and therefore less data. The data was used in building a regression and in exploring classification and clustering methods.
## III. RESULTS

An example of amplitude data during one of the trials is shown in Figure 3. At the lowest speed, over the course of 2 minutes, only 7 pull cycles were collected, while at the fastest speed, up to 44 cycles were collected over the same period. The mean amplitude of each cycle is plotted in Figure 4. The linear-fit ${\mathrm{R}}^{2}$ values were 0.475, 0.280, and 0.399 for the cherry, basswood, and acrylic objects, respectively. Although this corresponds to weak correlations, at speeds below ${50}\mathrm{\;{mm}}/\mathrm{s}$ the correlation appears stronger.
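
The linear fits and their ${\mathrm{R}}^{2}$ values can be computed as below; the arrays stand in for the per-cycle mean amplitudes and the commanded pull speeds, and are illustrative rather than the measured data.

```python
import numpy as np

def linear_fit_r2(speed, amplitude):
    """Least-squares line amplitude ~ a * speed + b, and its R^2."""
    a, b = np.polyfit(speed, amplitude, deg=1)
    pred = a * speed + b
    ss_res = np.sum((amplitude - pred) ** 2)
    ss_tot = np.sum((amplitude - amplitude.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

speeds = np.linspace(10, 100, 19)                  # 10-100 mm/s in 5 mm/s steps
amps = -0.02 * speeds + 5.0                        # hypothetical per-cycle means
print(linear_fit_r2(speeds, amps))
```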


Figure 4. The raw signal amplitude is plotted against the pull speed for each of the three materials. The average amplitude during each pull cycle is plotted as a single point. A linear regression fit is overlaid.

In the frequency domain, linear fits have weaker correlations still when looking at individual frequency bins. In Figure 5, we explore the correlation between speed and frequency bands, which consist of the signal across any number of frequency bins simultaneously; in the figure, only the highest and lowest correlations are displayed. Only weak correlation persisted.
## IV. DISCUSSION

In this work we observed that neither the signal amplitude nor the frequency responses yielded a strong correlation with slip speed. Nevertheless, a negative correlation persisted, suggesting that there is an exploitable relationship which can be used to identify the speed at which an object is slipping. At speeds below ${50}\mathrm{\;{mm}}/\mathrm{s}$ , however, a stronger relationship can be seen, and this is likely the region that should be explored further in future data collections. This was not an unexpected result, as the difference between speeds was likely to plateau above a critical speed; nibs experience shorter stick times with increasing substrate speed, likely leading to a saturation in the amplitude signal [13]. Additionally, there appear to be differences in the amplitude response between materials that we believe can be used to train a classifier.

The raw amplitude results can be seen to have a sinusoidal feature across speeds. We suspect that this corresponds to a resonant frequency of the testbed. Alternatively, it could be due to nonlinearity in the robotic arm's motion as it moves in a straight line.
|
| 54 |
+
|
| 55 |
+
Although the raw amplitude signal displays a correlation with speed, it is highly susceptible to changes in grasp force, a factor that we deliberately controlled for by holding the distance constant. In an active controller, a sufficient grasp-force controller would need to be implemented. Frequency responses, however, are less susceptible to grasp force, and the observation that certain frequency bands exhibit a correlation with sliding speed suggests that they would be a more reliable metric. Some short frequency bands appear to have very little correlation with speed, while others show a clear one. Of all the tested frequency bins, 99.73% for the basswood material and 92.58% for the acrylic exhibit a positive linear relationship between bin amplitude and speed, while for the cherry material 100% of the tested bins have a negative relationship. This suggests that the material can likewise be determined by analyzing the frequency response.


Figure 3. An example 2-minute push-pull trial is shown. Initiation and termination of the pull correspond to the first green and the second red vertical line pairs. Accelerations and decelerations are spliced out; therefore, only the region between the second green and the first red vertical line pairs is considered (the highlighted region is shown for the first two pull cycles). The filtered signal is displayed for reference only. A brief pause in motion can be seen immediately after the second vertical red line, then a brief high-amplitude signal generated by the object being pushed back to its starting point (the highest-amplitude signal), followed finally by a prolonged pause corresponding to re-tensioning of the string.


Figure 5. The maximum and minimum ${\mathrm{R}}^{2}$ values of the linear fit for each frequency band are displayed; these correspond to the highest and lowest correlations between specific frequency bands and slip speed. The values converge when the whole frequency spectrum is considered simultaneously, since there is then only one frequency band.

Follow-up work will include implementing classifiers capable of precisely distinguishing between materials and slipping speeds, likely using the frequency signals. Ultimately, we hope to build a model capable of interpolating the data and identifying the speed with higher precision.

## REFERENCES

[1] A. Bicchi, "Hands for dexterous manipulation and robust grasping: A difficult road toward simplicity," IEEE Trans. Robot. Autom., vol. 16, no. 6, pp. 652-662, 2000, doi: 10.1109/70.897777.

[2] A. A. Cole, P. Hsu, and S. S. Sastry, "Dynamic control of sliding by robot hands for regrasping," IEEE Trans. Robot. Autom., vol. 8, no. 1, 1992.

[3] A. Sintov, A. S. Morgan, A. Kimmel, A. M. Dollar, K. E. Bekris, and A. Boularias, "Learning a State Transition Model of an Underactuated Adaptive Hand," IEEE Robot. Autom. Lett., vol. 4, no. 2, pp. 1287-1294, 2019, doi: 10.1109/LRA.2019.2894875.

[4] C. Wang, S. Wang, B. Romero, F. Veiga, and E. Adelson, "SwingBot: Learning physical features from in-hand tactile exploration for dynamic swing-up manipulation," IEEE Int. Conf. Intell. Robot. Syst., pp. 5633-5640, 2020, doi: 10.1109/IROS45743.2020.9341006.

[5] T. Bi and C. Sferrazza, "Zero-Shot Sim-to-Real Transfer of Tactile Control Policies for Aggressive Swing-Up Manipulation," IEEE Robot. Autom. Lett., vol. 6, no. 3, pp. 5761-5768, 2021, doi: 10.1109/LRA.2021.3084880.

[6] T. M. Huh, H. Choi, S. Willcox, S. Moon, and M. R. Cutkosky, "Dynamically Reconfigurable Tactile Sensor for Robotic Manipulation," IEEE Robot. Autom. Lett., vol. 5, no. 2, pp. 2562-2569, 2020.

[7] M. R. Tremblay and M. R. Cutkosky, "Estimating Friction Using Incipient Slip Sensing During a Manipulation Task," in Proc. IEEE Int. Conf. Robot. Autom., 1993, pp. 429-434.

[8] W. Yuan, R. Li, M. A. Srinivasan, and E. H. Adelson, "Measurement of shear and slip with a GelSight tactile sensor," Proc. IEEE Int. Conf. Robot. Autom., pp. 304-311, 2015, doi: 10.1109/ICRA.2015.7139016.

[9] F. Veiga, H. Van Hoof, J. Peters, and T. Hermans, "Stabilizing novel objects by learning to predict tactile slip," IEEE Int. Conf. Intell. Robot. Syst., pp. 5065-5072, 2015, doi: 10.1109/IROS.2015.7354090.

[10] J. W. James and N. F. Lepora, "Slip detection for grasp stabilization with a multifingered tactile robot hand," IEEE Trans. Robot., vol. 37, no. 2, pp. 506-519, 2021, doi: 10.1109/TRO.2020.3031245.

[11] D. D. Damian, T. H. Newton, R. Pfeifer, and A. M. Okamura, "Artificial tactile sensing of position and slip speed by exploiting geometrical features," IEEE/ASME Trans. Mechatronics, vol. 20, no. 1, pp. 263-274, 2015, doi: 10.1109/TMECH.2014.2321680.

[12] H. Chen et al., "Hybrid porous micro structured finger skin inspired self-powered electronic skin system for pressure sensing and sliding detection," Nano Energy, vol. 51, pp. 496-503, 2018, doi: 10.1016/j.nanoen.2018.07.001.

[13] D. D. Makel, C. Gao, and D. Kuhlmann-Wilsdorf, "Fundamentals of stick-slip," Wear, vol. 164, pp. 1139-1149, 1993.

papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/srVrKQl8X7R/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,61 @@
§ LEARNING SLIP WITH A PATTERNED CAPACITIVE TACTILE SENSOR

Yuri Gloumakov, Member, IEEE, Tae Myung Huh, Member, IEEE, Hannah Stuart, Member, IEEE

Abstract- The task of dynamically manipulating objects within a robotic hand presents ongoing challenges. In particular, friction and slip often dictate task success yet remain difficult to measure directly, quickly, and accurately; this includes both the detection of slip events and slip speed. Complex solutions exist that involve training a control policy using neural networks, with image-based sensors or external cameras, or when contact geometry can be inferred. Using only a capacitive sensor with a nib-patterned structure, we attempt to demonstrate the sensor's ability to detect slip speed during uninterrupted contact where geometry cannot be inferred, while benefiting from faster sensing, cheaper construction, and a smaller profile. We hope that by collecting vibration amplitude and frequency data and applying supervised learning techniques to directly measure slip speed, we can guide an implementation of manipulation controls without a priori assumptions about object properties, such as friction or geometry.

Index Terms-Tactile Sensing, In-Hand Manipulation.

§ I. INTRODUCTION

Robotic within-hand manipulation [1] allows robot systems to manipulate objects in tight spaces and avoid gross arm movements, a particularly useful ability in cluttered or constrained environments. However, due to uncertainties in object properties, like friction, successful reorientation can prove to be a challenging task. Some approaches have used inverse kinematics with a highly constrained rigid hand, taking advantage of overcoming friction during sliding to reorient an object [2], while others have taken advantage of compliant or under-actuated systems [3]. However, controlling for object slip directly, without such models, can enable much faster reorientation of unknown objects, an important feature in situations that necessitate faster response times, such as assembly lines or active disaster zones.

Thus far, aggressive dynamic manipulation has been accomplished using learned control policies, whether exploring real-world object contacts [4] or in simulation [5]. However, using a nibbed capacitive tactile sensor developed by Huh et al. [6] (Fig. 1), we hope to demonstrate that dynamic manipulations can be performed using simple control policies by only training for object motion recognition, thus making the sensor more generalizable to different scenarios while reducing the need for complex computing.

In this letter we explore the sensor's ability to detect the speed of a slipping object as it slides across the sensor. While incipient slip detection has been demonstrated in various systems [7], [8], slip detection and regrasping can be leveraged to quickly reposition an object within the hand with minimal arm or finger movement [9], [10]. Meanwhile, steady-state slipping speed has only been demonstrated when objects are either much smaller than the sensor or not making contact with its entire surface [11], [12], so that the geometry or forces of an edge contact can be tracked over time. However, objects in a factory setting or during sorting are often fully flush and flat against the sensor, and controlling the slip is necessary for dynamic manipulation. We hypothesize that the deflection of the sensor's nib interface undergoes a stick-slip interaction yielding characteristic frequencies and deflection amplitudes unique to each combination of material and slip speed.

Figure 1. On the left, the sensor can be seen mounted on the tip of a robotic finger. The tactile sensor is made up of a grid of nibs according to the dimensions in (a), where the deflection of each nib is tracked in 4 directions. These deflections are used to track pressure (b), shear (c), and vibrations (d) that can be used to detect slipping. The conductive fabric that is embedded in the nibs changes the capacitive signal between itself and the electrodes when deflected. Figure images were borrowed from [6].

§ II. METHODS

To discover how the sensor detects slipping speed, we created a testbed that allowed us to test different slipping speeds and materials. The testbed was designed to maintain a constant distance between the sensor and a sliding object (Fig. 2); keeping the pressure constant was another consideration. Three rectangular objects made of different materials were tested: cherry, basswood, and acrylic, with dimensions of ${200} \times {40} \times 3$ $\mathrm{{mm}}$. The objects were pulled ${134}\mathrm{\;{mm}}$ by a string attached to a UR-10 robotic arm. The objects were then pushed back to the starting point and pulled again while sensor data was recorded at ${600}\mathrm{\;{Hz}}$. This push-pull cycle lasted for 2 minutes for each speed setting, and speeds were varied from ${10} - {100}\mathrm{\;{mm}}/\mathrm{s}$ in 5 $\mathrm{{mm}}/\mathrm{s}$ increments. Since only the steady-state speed regime was of interest, the data from the acceleration and deceleration were spliced out. The termination of acceleration and initiation of deceleration were estimated to occur within the first $1/8^{\text{th}}$ and the last $1/6^{\text{th}}$ of the slipping period, respectively, with a conservative margin.

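The splicing step above keeps only the middle of each pull. A minimal sketch, assuming one array of samples per pull and the 1/8 and 1/6 margins stated in the text (the function name is illustrative):

```python
import numpy as np

def steady_state_slice(pull, head_frac=1 / 8, tail_frac=1 / 6):
    """Drop the acceleration and deceleration portions of one pull.

    Removes the first `head_frac` and last `tail_frac` of the samples,
    keeping only the steady-state middle of the recording.
    """
    pull = np.asarray(pull)
    n = len(pull)
    start = int(np.ceil(n * head_frac))     # end of acceleration
    stop = n - int(np.ceil(n * tail_frac))  # start of deceleration
    return pull[start:stop]
```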
Y. Gloumakov, T. Huh, and H. Stuart are with the Mechanical Engineering Department, University of California, Berkeley, CA 06511 USA, (email: {yurigloum, thuh, hstuart} @berkeley.edu).

Figure 2. The left figure depicts the testbed that hosts the sensor and allows the object to slide through, rolling over a set of smooth bearings. On the right, the robot arm can be seen pulling the object by a string. The acrylic piece is placed on the end effector to push the object back into place.

A feature of the nibbed sensor is its Programmable System on Chip (PSoC) infrastructure, which enables us to couple any desired set of electrodes, resulting in a faster signal at the cost of resolution. Because we constrained the slip to a single linear direction, the nib deflection only needed to be tracked along a single axis (Fig. 3). Using a fast Fourier transform (FFT), the signal was converted into the frequency spectrum. Linear regressions were used to create models from the amplitude signal and the frequency spectrum separately, to discover a fit that could identify the speed and material properties from a new signal. To obtain the frequency spectrum, a 300-frame sliding window was used, with an overlap of 1 frame to maximize the amount of extracted data.

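The windowed FFT described here can be sketched as follows, assuming the window advances one frame at a time (our reading of the stated overlap) and the 600 Hz rate from the text; the helper name is illustrative:

```python
import numpy as np

FS = 600      # sensor sampling rate (Hz), from the text
WINDOW = 300  # sliding-window length in frames

def sliding_spectra(signal, window=WINDOW, hop=1):
    """One-sided FFT amplitude spectrum at each window position.

    Returns shape (n_windows, window // 2 + 1); bins are spaced
    FS / window = 2 Hz apart.
    """
    signal = np.asarray(signal, dtype=float)
    starts = range(0, len(signal) - window + 1, hop)
    return np.stack([np.abs(np.fft.rfft(signal[s:s + window]))
                     for s in starts])
```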
Because the slipping was steady state, the frequency responses were regarded as independent samples. Here, a frequency sample is a vector of length $n$, which corresponds to 300 (half the sampling rate) divided by the bin size, which was varied from 1 to 300; the vector values correspond to their respective frequency amplitudes. Both the frequency response and the raw signal amplitude were averaged during each pull cycle; this meant that during the 2-minute data collection, the slower speed trials yielded fewer cycles and therefore less data. The data were used in building a regression and exploring classification and clustering methods.

§ III. RESULTS

An example of amplitude data from one of the trials is shown in figure 3. At the lowest speed, only 7 pull cycles were collected over the course of 2 minutes, while at the fastest speed, up to 44 cycles were collected over the same period. The mean amplitude of each cycle is plotted in figure 4. The linear fit ${\mathrm{R}}^{2}$ values were 0.475, 0.280, and 0.399 for the cherry, basswood, and acrylic objects, respectively. Although these correspond to weak correlations, at speeds below ${50}\mathrm{\;{mm}}/\mathrm{s}$ the correlation appears stronger.

Figure 4. The raw signal amplitude is plotted against the pull speed for each of the three materials. The average amplitude during each pull cycle is plotted as a single point. A linear regression fit is overlaid.

In the frequency domain, linear fits have still weaker correlations when looking at individual frequency bins. In figure 5, we explore the correlation between speed and frequency bands, where a band consists of the signal across any number of frequency bins taken simultaneously; only the highest and lowest correlations are displayed in the figure. Only weak correlations persisted.

§ IV. DISCUSSION

In this work we observed that neither the signal amplitude nor the frequency response yielded a strong correlation with slip speed. Nevertheless, a negative correlation persisted, suggesting that there is an exploitable relationship which can be used to identify the speed at which an object is slipping. At speeds below 50 $\mathrm{{mm}}/\mathrm{s}$, a stronger relationship can be seen, so this is likely the region that should be explored further in future data collections. This was not an unexpected result, as the difference between speeds was likely to plateau above a critical speed: nibs experience shorter stick times with increasing substrate speed, likely leading to saturation of the amplitude signal [13]. Additionally, there appear to be differences in the amplitude response between materials that we believe can be used to train a classifier.

The raw amplitude signal can be seen to exhibit a sinusoidal feature across speeds. We suspect that this corresponds to a resonant frequency of the testbed. Alternatively, it could be due to nonlinearity of the robotic arm as it moves in a straight line.

Although the raw amplitude signal displays a correlation with speed, it is highly susceptible to changes in grasp force, a factor that we deliberately controlled for by holding the distance constant. In an active controller, a sufficient grasp-force controller would need to be implemented. Frequency responses, however, are less susceptible to grasp force, and the observation that certain frequency bands exhibit a correlation with sliding speed suggests that they would be a more reliable metric. Some short frequency bands appear to have very little correlation with speed, while others show a clear one. Of all the tested frequency bins, 99.73% for the basswood material and 92.58% for the acrylic exhibit a positive linear relationship between bin amplitude and speed, while for the cherry material 100% of the tested bins have a negative relationship. This suggests that the material can likewise be determined by analyzing the frequency response.

Figure 3. An example 2-minute push-pull trial is shown. Initiation and termination of the pull correspond to the first green and the second red vertical line pairs. Accelerations and decelerations are spliced out; therefore, only the region between the second green and the first red vertical line pairs is considered (the highlighted region is shown for the first two pull cycles). The filtered signal is displayed for reference only. A brief pause in motion can be seen immediately after the second vertical red line, then a brief high-amplitude signal generated by the object being pushed back to its starting point (the highest-amplitude signal), followed finally by a prolonged pause corresponding to re-tensioning of the string.

Figure 5. The maximum and minimum ${\mathrm{R}}^{2}$ values of the linear fit for each frequency band are displayed; these correspond to the highest and lowest correlations between specific frequency bands and slip speed. The values converge when the whole frequency spectrum is considered simultaneously, since there is then only one frequency band.

Follow-up work will include implementing classifiers capable of precisely distinguishing between materials and slipping speeds, likely using the frequency signals. Ultimately, we hope to build a model capable of interpolating the data and identifying the speed with higher precision.
papers/IEEE/IEEE 2022/IEEE 2022 Workshop/IEEE 2022 Workshop altVIS/XnsV9ZhsOVc/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,26 @@
## Other models for data visualisations

Paul Heinicker

Other models of visualising aim at a (re)formulation of contemporary expectations and narratives concerning data and their visualisations as a very specific model of thinking data visualisation. It is precisely how and with what intention we work on and discuss visualisations that defines the conceptual space we open to this cultural technique. The concept of "other models" first points to the consequences and limitations of these ways of thinking. My positioning of the "other" consists first of the description of what it wants to distinguish itself from. I understand the "other visualising" as a chance to make the normative mode of data visualisation visible and discussable. In the discourse of visualisation, there is not yet an established language for critiquing the expectations of data images. The "other visualising" therefore establishes a negative way of reading the cultural and image phenomenon. As a first concretisation of these models, I formulate in the following a differently directed definition: data visualisation as intended violence.
## Data = Intention

Ideas and hopes around data visualisations are essentially oriented around two fundamental ideas of data visualisation: data and visualisation. With regard to data, I tend to describe contemporary data narratives using the figure of data exceptionalism as reproducers of a normative model of the imagination, practice, and reflection of data.

The concept of data exceptionalism makes visible a data positivist perspective, which is essentially defined by the rhetoric of the exception - the data phenomenon as a cultural turning point, a reductionist notion of data - solely numerical and technical, and a data forgetfulness in the sense of forgetting original - non-technical or mathematical - approaches. A potential counter-position aims at broadening a narrowed notion of data, and this broadening has also been done by returning to existing concepts of data. Thus, in my perspective, it is primarily intentionality that characterises data. Data are not natural phenomena, but cultural artefacts of ordering structures. Data are not simply there; rather, they are intentional. They are created from a particular perspective, in an artificial process, and for an application or reception. This data intention can be concretised in the reflection of the models that produce these data. Thus, at least two model applications are found in the intentional use of data. On the one hand, data - defined by me as abstractions - are not to be understood as images of reality, but as conscious projections of one or more models about this reality. On the other hand, I also understand the various modes of data practices as models applied with a purpose. Data exceptionalism is then understood as dealing with data in a particular model, namely in a positivist way. The ideas and intentions about what can be considered or produced as data and how to work with data are primarily shaped by models.

Probably the most important insight that comes from considering data exceptionalism is the aspect of modelling. The added value of data does not lie in the longed-for automated analysis of patterns in them, but more tellingly in the reflection of the models that produce them. Data are both mirrors and producers of social reality. From this perspective, data are not the cause of social asymmetries, but rather an effect of a particular conception of what to do with the data. Data exceptionalism then only describes a certain model that proceeds in a data positivist way. The questions about this model, i.e. questions of why and for what purpose data is used, then promise possibly even more epistemic value than the analysis of the data itself. What is needed, according to this line of reasoning, is not another algorithmic, computational, or digital turn, but a return to the ideas, notions, and concepts - in short, the modelling - of data. Data, by definition, are understood as abstractions: not images of reality, but always projections of a model about that reality. The deficiency of data is not that they are reduced in capacity, but that the confidence of completeness is ascribed to them by society.

## Visualisation = Violence

In relation to the object of visualisation, I distinguish the practice of visualisation in two central forms. In a dichotomous arrangement, I differentiate affirmative and, opposite to that, critical approaches. "Affirmative" I interpret as an attitude toward the data to be visualised that takes them as given and their visualisation as unqualifiedly necessary. Instead of this efficiency- and optimization-driven idea of an image-driven visibility of data, more agile concepts or models should be found that can grasp the process of visualisation more profoundly in terms of its epistemic potential. What is problematised with this conceptual "immobility" is the tendency of the affirmative visualisation model to seem hopeless. Visualisation should rather be understood in its transformative processes, which independently of the object design their own reality and thus their own knowledge, which needs to be reflected accordingly. Therefore, alternative models are needed that attempt to describe the limits and possibilities of the cultural technique of visualisation.

In this context, my ideal of the "other visualising" also concretises itself. The "other" means approaches to the idea of visualisation that, apart from the affirmative visualisation models, are based on the critical reflection of the underlying models of thought. In addition to the critique of established conventions, it is primarily a diagrammatic position that understands visualisations as a projection of models. In contrast to a passive understanding of visualised diagrams as a rigid and (re)clarifying order, the diagrammatic is thought of as an active process that designs new arrangements or models in the relation of structures. What unites all these diagrammatics is that they push a certain structure through the filter of a conceptual model or world order onto its object. It is the purposeful transformation of data into a particular order that can be described as violent. Thus, again, there are at least two types of models that shape the process of visualisations. First, it is the notion of how visualisations are conceived: as an affirmative form of legible visualisation, the structural reading as diagrammatic reordering, or even the cosmogrammatic projection. Second, it is then the violent transformation of a data base, shaped via a particular model, that can result in any number of visualisations, depending on which model is chosen.

## Data Visualisation = Intended Violence

As a consequence, I understand data visualisations in their intentional and enforced implementation as intended violence. Data is abstracted from an arbitrary object through a particular model, and then in turn made perceptible through the model of a transformation. In this double model arrangement, the relational aspect of visualisations becomes clear, inscribing itself as a process of projection. Data visualisations do not represent, but rather design their very own images in a cascading transformation of structures. The interpretive directions of this insight are, however, open. A designer or recipient of a visualisation can open up to this circumstance, but these phenomena function intrinsically without this awareness. The model perspective on visualisations is only one possible form of critical questioning. However, it enables diverse moments of insight.

Other models are ultimately intended to give indications of how visualisations are to be conceived as a cultural technique. The goal is not the search for the one visualisation that is to be optimised ever further in its readability and mediation efficiency. Rather, of relevance is an inefficiency that can allow and open up the diversity and complexity of visualisation culture. Instead of the contemporary culture of exclusion by a dominant (and affirmative) model, ideas that deviate from it should also be involved in the creation of visualisations.

papers/IEEE/IEEE 2022/IEEE 2022 Workshop/IEEE 2022 Workshop altVIS/XnsV9ZhsOVc/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,25 @@
§ OTHER MODELS FOR DATA VISUALISATIONS

Paul Heinicker

Other models of visualising aim at a (re)formulation of contemporary expectations and narratives concerning data and their visualisations as a very specific model of thinking data visualisation. It is precisely how and with what intention we work on and discuss visualisations that defines the conceptual space we open to this cultural technique. The concept of "other models" first points to the consequences and limitations of these ways of thinking. My positioning of the "other" consists first of the description of what it wants to distinguish itself from. I understand the "other visualising" as a chance to make the normative mode of data visualisation visible and discussable. In the discourse of visualisation, there is not yet an established language for critiquing the expectations of data images. The "other visualising" therefore establishes a negative way of reading the cultural and image phenomenon. As a first concretisation of these models, I formulate in the following a differently directed definition: data visualisation as intended violence.
§ DATA = INTENTION

Ideas and hopes around data visualisations are essentially oriented around two fundamental ideas of data visualisation: data and visualisation. With regard to data, I tend to describe contemporary data narratives using the figure of data exceptionalism as reproducers of a normative model of the imagination, practice, and reflection of data.

The concept of data exceptionalism makes visible a data positivist perspective, which is essentially defined by the rhetoric of the exception - the data phenomenon as a cultural turning point, a reductionist notion of data - solely numerical and technical, and a data forgetfulness in the sense of forgetting original - non-technical or mathematical - approaches. A potential counter-position aims at broadening a narrowed notion of data, and this broadening has also been done by returning to existing concepts of data. Thus, in my perspective, it is primarily intentionality that characterises data. Data are not natural phenomena, but cultural artefacts of ordering structures. Data are not simply there; rather, they are intentional. They are created from a particular perspective, in an artificial process, and for an application or reception. This data intention can be concretised in the reflection of the models that produce these data. Thus, at least two model applications are found in the intentional use of data. On the one hand, data - defined by me as abstractions - are not to be understood as images of reality, but as conscious projections of one or more models about this reality. On the other hand, I also understand the various modes of data practices as models applied with a purpose. Data exceptionalism is then understood as dealing with data in a particular model, namely in a positivist way. The ideas and intentions about what can be considered or produced as data and how to work with data are primarily shaped by models.

Probably the most important insight that comes from considering data exceptionalism is the aspect of modelling. The added value of data does not lie in the longed-for automated analysis of patterns in them, but more tellingly in the reflection of the models they produce. Data are both mirrors and producers of social reality. From this perspective, data are not the cause of social asymmetries, but rather an effect of a particular conception of what to do with the data. Data exceptionalism then only describes a certain model to proceed in a data positivist way. The questions about this model, i.e. questions why and for what purpose data is used, then promises possibly even more epistemic value than the analysis of the data itself. What is needed, according to this line of reasoning, is not another algorithmic, computational, or digital turn, but a return to the ideas, notions, and concepts, in short, the modelling of data. Data, by definition, are understood as abstractions, not images of reality, but always projections of a model about that reality. The deficiency of data is not that they are reduced in capacity, but that the confidence of completeness is ascribed to them by society.
§ VISUALISATION $=$ VIOLENCE
In relation to the object of visualisation, I distinguish the practice of visualisation in two central forms. In a dichotomous arrangement, I differentiate affirmative and, opposed to them, critical approaches. "Affirmative" I interpret as an attitude toward the data to be visualised that takes them as given and their visualisation as unqualifiedly necessary. Instead of this efficiency- and optimisation-driven idea of an image-driven visibility of data, more agile concepts or models should be found that can grasp the process of visualisation more profoundly in terms of its epistemic potential. What is problematised with this conceptual "immobility" is the tendency of the affirmative visualisation model to present itself as being without alternative. Visualisation should rather be understood in its transformative processes, which, independently of the object, design their own reality and thus their own knowledge, which needs to be reflected accordingly. Therefore, alternative models are needed that attempt to describe the limits and possibilities of the cultural technique of visualisation.
In this context, my ideal of the "other visualising" also concretises itself. The "other" refers to approaches to the idea of visualisation that, apart from the affirmative visualisation models, are based on the critical reflection of the underlying models of thought. In addition to the critique of established conventions, it is primarily a diagrammatic position that understands visualisations as a projection of models. In contrast to a passive understanding of visualised diagrams as a rigid and (re)clarifying order, the diagrammatic is thought of as an active process that designs new arrangements or models in the relation of structures. What unites all these diagrammatics is that they push a certain structure through the filter of a conceptual model or world order onto their object. It is the purposeful transformation of data into a particular order that can be described as violent. Thus, again, there are at least two types of models that shape the process of visualisation. First, there is the notion of how visualisations are conceived: as an affirmative form of legible visualisation, the structural reading as diagrammatic reordering, or even the cosmogrammatic projection. Second, there is the violent transformation of a data basis, shaped via a particular model, which can result in any number of visualisations, depending on which model is chosen.
§ DATA VISUALISATION $=$ INTENDED VIOLENCE
As a consequence, I understand data visualisations in their intentional and enforced implementation as intended violence. Data is abstracted from an arbitrary object through a particular model, and then in turn made perceptible through the model of a transformation. In this double model arrangement, the relational aspect of visualisations becomes clear, inscribing itself as a process of projection. Data visualisations do not represent, but rather design their very own images in a cascading transformation of structures. The interpretive directions of this insight are, however, open. A designer or recipient of a visualisation can open up to this circumstance, but these phenomena function intrinsically without this awareness. The model perspective on visualisations is only one possible form of critical questioning. However, it enables diverse moments of insight.
Other models are ultimately intended to give indications of how visualisations are to be conceived as a cultural technique. The goal is not the search for the one visualisation that is to be optimised ever further in its readability and mediation efficiency. Rather, of relevance is an inefficiency that can allow and open up the diversity and complexity of visualisation culture. Instead of the contemporary culture of exclusion by a dominant (and affirmative) model, ideas that deviate from it should also be involved in the creation of visualisations.
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/0a7OXKwmw9/Initial_manuscript_md/Initial_manuscript.md
# A Hybrid Approach to Network Intrusion Detection Based On Graph Neural Networks and Transformer Architectures
1st Hongrun Zhang
College of Computer Technology and Applications
Qinghai University
Qinghai, China
ys220854040277@qhu.edu.cn

2nd Tengfei Cao
College of Computer Technology and Applications
Qinghai University
Qinghai, China
caotf@qhu.edu.cn

Abstract—In this paper, we propose a model of a Network Intrusion Detection System (NIDS) named E-T-GraphSAGE (ETG), which fuses Graph Neural Network (GNN) and Transformer techniques. With the widespread adoption of the Internet of Things (IoT) and cloud computing, network structures have become complex and vulnerable. The efficacy of traditional intrusion detection systems is limited in the context of novel and unconventional cyber-attacks. This paper proposes a novel approach to address this challenge. A GNN is used to capture the complex relationships between network nodes and edges, analyze network traffic graphs, and identify anomalous behaviors. By introducing the Transformer, the model enhances its ability to handle long-range dependencies in network streaming data and to understand network dynamics at a macro level. The E-T-GraphSAGE (ETG) model optimizes edge features through the self-attention mechanism to exploit the potential of network streaming data and improve the accuracy of intrusion detection. The experimental results show that the model outperforms existing techniques in key performance metrics. Tests on several standard datasets (BoT-IoT, NF-BoT-IoT, NF-ToN-IoT) validate the broad applicability and robustness of the ETG model, especially in complex network environments.
Keywords—GNN, GraphSAGE, Transformer, NIDS
## I. INTRODUCTION
With the widespread adoption of the Internet of Things (IoT) and cloud computing, the structure of network systems is becoming more complex, and the types and numbers of devices are increasing dramatically. This environment provides more vulnerabilities and points of entry for cyber attackers, making traditional cyber defense systems face serious challenges [1]. Modern network attacks are not only varied, including distributed denial-of-service (DDoS) attacks, malware spread, and data breaches, but also more subtle and adaptable, frequently targeting multiple layers of the network and various nodes. In addition, with the rapid development of attack techniques, new and unknown zero-day vulnerability attacks frequently appear, and these attacks are able to bypass signature-based intrusion detection systems easily [2]. Therefore, there is a need to develop new detection techniques that not only recognize known attack patterns but can also predict and adapt to unknown threats.
To overcome these limitations, recent research has increasingly focused on leveraging machine learning and deep learning techniques. Among these, Transformer architectures have gained attention for their self-attention mechanism, which effectively captures long-range dependencies in sequential data. Originally developed for natural language processing, Transformers have been successfully adapted for cybersecurity applications, offering the ability to analyze complex interdependencies within network traffic.
Graph neural networks (GNNs), known for their ability to handle graph-structured data, offer significant potential in cybersecurity applications. By capturing the complex relationships between nodes (e.g., IP addresses or devices) and edges (i.e., data transmissions or sessions) in a network, GNNs are able to efficiently map the overall pattern of network behavior. This capability makes GNNs particularly suitable for identifying and analyzing complex network intrusions that are difficult to detect through conventional means [3]. GNNs can analyze network traffic graphs by representing hosts or servers as nodes and their communications as edges. By learning the normal and abnormal characteristics of these communication patterns, a GNN is able to identify anomalous behavior in the network, such as unauthorized data access or abnormal data traffic. In addition, a key advantage of GNNs is their ability to integrate data from multiple sources and extract deep network characteristics, which is particularly important for detecting advanced persistent threats (APTs) and multi-stage attacks.
GNNs not only enhance the detection of known threats but, more importantly, provide a mechanism to understand and predict new or variant attack behaviors that are difficult to identify with traditional methods. Therefore, introducing GNNs into network security systems, especially network intrusion detection systems, will greatly enhance the system's ability to defend against complex network threats [4].
This research aims to develop an enhanced Network Intrusion Detection System (NIDS) by integrating Graph Neural Networks (GNNs) with Transformer architectures. The goal is to improve the efficiency and accuracy of detecting complex and previously unknown attack patterns by leveraging the Transformer's ability to capture long-range dependencies in network traffic. This integration seeks to enhance the model's capability to analyze network flows on both local and global scales, improving overall performance in detecting sophisticated cyber threats.
The proposed study will use a hybrid approach, combining GNNs and Transformers to analyze network traffic. GNNs will be employed to construct graph representations of network entities and interactions, while the Transformer's self-attention mechanism will capture long-range dependencies and global patterns [5]. This integrated model aims to enhance understanding of network dynamics and improve detection and prediction of both known and emerging threats. The model's effectiveness will be evaluated through experiments on benchmark datasets, comparing its performance with existing intrusion detection systems.
Fig. 1. Network flow data graph structuring.
As shown in Fig. 1, we utilize both GNN and Transformer to encode the raw stream data successively to obtain the desired graph data structure, which is input to the model for training.
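As a concrete illustration of this graph structuring, the following minimal Python sketch turns flow records into a node/edge structure where endpoints become nodes and flows become feature-bearing edges. The record fields and feature choices here are assumptions for illustration, not the authors' preprocessing pipeline.

```python
# Hypothetical flow records: (source endpoint, destination endpoint, edge features).
# The two features per flow (e.g. bytes, packets) are assumed for illustration.
flows = [
    ("10.0.0.1", "10.0.0.2", [120.0, 3.0]),
    ("10.0.0.1", "10.0.0.3", [4096.0, 40.0]),
    ("10.0.0.3", "10.0.0.2", [60.0, 1.0]),
]

# Map each endpoint to a node index.
nodes = sorted({h for s, d, _ in flows for h in (s, d)})
index = {h: i for i, h in enumerate(nodes)}

# Edge list carrying per-flow feature vectors. Since the flow information
# sits on the edges, one simple choice (an assumption here) is to start
# nodes with constant feature vectors.
edges = [(index[s], index[d], f) for s, d, f in flows]
node_feats = [[1.0, 1.0] for _ in nodes]
```

The resulting `edges` and `node_feats` are what a graph layer would consume as the structured input sketched in Fig. 1.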
(1) The core contribution of this research is the development of a NIDS model that combines GNN and Transformer. The edge features optimized by the self-attention mechanism fully exploit the potential of network streaming data and significantly improve the detection accuracy of network intrusion.
(2) Tests on multiple standard datasets show that our model outperforms existing techniques in key performance metrics such as accuracy, recall, and F1 score.
The remainder of the paper details the design and experimental evaluation of the E-T-GraphSAGE (ETG) model. Section II reviews the development of NIDS and research related to GNNs and Transformers. Section III details the model architecture and key technologies. Section IV presents the experimental results on a variety of cyber-attack datasets and compares them with other methods. The concluding section summarizes the research results and discusses future research directions.
## II. RELATED WORK
In recent years, various approaches have been proposed to enhance the performance of Intrusion Detection Systems (IDS). Alowaidi et al. [6] proposed a hybrid IDS combining Machine Learning (ML) and Deep Learning (DL) techniques, which enhances IDS performance and prediction accuracy while lowering computational costs. However, the model's generalization relies on the diversity and representativeness of the training data: if the training data is biased, the model's real-world performance suffers. Gupta et al. [7] proposed an anomaly-based NIDS; this approach considers multiple performance metrics, along with training time and resource usage, but remains limited by dataset dependency and average generalization capabilities. Kumar et al. [8] proposed a bi-directional long short-term memory (BiLSTM) based anomaly detection system for Internet of Things (IoT) networks. The BiLSTM model effectively improves accuracy through preprocessing and feature selection based on normalization and gain ratio.
Suárez-Varela et al. [9] introduced the use of GNNs in the modeling, control, and management of communication networks, demonstrated their advantages in terms of generalization capabilities and data-driven solutions, and discussed their potential in network modeling, control, and management. Hnamte et al. [10] proposed an approach using Deep Convolutional Neural Networks (DCNN) and validated its performance with the InSDN dataset. While the DCNN achieves high accuracy, it demands significant data and computational resources for training.
Kisanga et al. [11] proposed a new Activity and Event Network (AEN) graph framework that focuses on capturing long-term stealthy threats that are difficult to detect by traditional security tools, and is very promising for detecting long-term threats in cybersecurity. L et al. [12] proposed an end-to-end anomalous edge detection method based on unified graph embedding, which enhances the model's ability to learn task-relevant patterns by combining embedding learning and anomaly detection into the same objective function, and accurately estimates the probability distributions of edges through the local structure of the graph to identify anomalous edges. Superior accuracy and scalability are demonstrated on multiple publicly available datasets.
Sun et al. [13] proposed GTC, a framework combining Graph Neural Network (GNN) and Transformer for self-supervised heterogeneous graph representation learning. The Metapath-aware Hop2Token method is designed to efficiently convert neighbors with different hop counts in heterogeneous graphs into Token sequences, reducing the computational complexity of Transformer processing. GTC enhances information fusion, improves learning efficiency, and reduces the demand for computational resources by contrasting learning tasks between graph pattern views and hop count views.
Nguyen et al. [14] proposed a Transformer-based GNN model for learning graph representations. With an unsupervised transductive learning approach, UGformer is able to address the problem of limited category labels; however, despite its sampling mechanism, UGformer may still require further optimization to handle extremely large graph structures.
Unlike previous studies, our method focuses on extracting edge features from network flows and develops an E-GraphSAGE model that incorporates Transformer modules. Combining local and global features yields more accurate feature representations, making full use of the structural and topological information inherent in network streaming data to achieve better feature representations and network intrusion detection performance. The E-T-GraphSAGE method introduced in this paper addresses the shortcomings of traditional graph embedding techniques by capturing topological details and edge features in network flow data, leading to more precise detection, while retaining the ability to effectively classify samples with unseen node features. Three NIDS standard datasets are used to evaluate our model, verifying its broad applicability, accuracy, and robustness in different types of network scenarios in comparison with traditional ML methods, especially in complex network environments. Through these improvements, the performance of our system in network intrusion detection has been significantly improved, and it is able to respond effectively to various network attacks in complex network environments.
## III. THE PROPOSED METHOD
## A. GraphSAGE
Graph Neural Networks (GNNs) are becoming increasingly popular in the field of machine learning. Their power stems from the effective utilization of graph-structured data, which is widely available in application areas such as social media networks, biological research, and telecommunication systems [15]. The primary reason for using GNNs in NIDS is their capability to leverage the structural data present in network streams, which can be represented graphically. Although some conventional machine learning approaches also handle graph data, they usually involve intricate processes and depend heavily on manually crafted features, leading to more cumbersome and less efficient applications.
GraphSAGE [16] is an efficient graph neural network technique that generates embedded representations of nodes by sampling and aggregating the features of their neighbors. It is particularly suitable for processing large-scale graph data. The main steps include sampling neighboring nodes, aggregating features, and updating node features, which effectively address the computation and storage bottlenecks of traditional graph neural networks. As a result, GraphSAGE has been widely used in many fields.
GraphSAGE learns node representations through local aggregation; its core steps comprise three stages: neighbor node sampling, feature aggregation, and node feature update, as shown in Fig. 2.
In neighbor node sampling, for each node, a fixed number of neighbor nodes are randomly sampled to reduce the computation and storage requirements. Suppose a node in the graph is $v$ , and its set of neighbor nodes is $N\left( v\right)$ , and the set of neighbor nodes obtained from sampling is $\widetilde{N}\left( v\right)$ . This process can be represented as:
$$
\widetilde{N}\left( v\right) = \operatorname{Sample}\left( {N\left( v\right) , K}\right) \tag{1}
$$
where $K$ denotes the number of neighbor nodes sampled. This phase seeks to manage computational complexity by limiting the number of adjacent nodes for each vertex in extensive graphs.
Fig. 2. GraphSAGE model diagram.
In feature aggregation, a feature aggregation operation is performed on the sampled set of neighbor nodes $\widetilde{N}\left( v\right)$ to generate neighbor feature representations. Common aggregation methods include mean value aggregation, pooling, and LSTM. The following are the formulas for several aggregation methods:
1) Mean aggregation: Mean aggregation computes the average of neighboring node features. Its formula is:
$$
{h}_{\widetilde{N}\left( v\right) }^{\left( k\right) } = \operatorname{mean}\left( \left\{ {{h}_{u}^{\left( k - 1\right) },\forall u \in \widetilde{N}\left( v\right) }\right\} \right) \tag{2}
$$
where ${h}_{u}^{\left( k - 1\right) }$ denotes the feature representation of the neighboring node $u$ at the $\left( k - 1\right)$ -th layer, and ${h}_{\widetilde{N}\left( v\right) }^{\left( k\right) }$ denotes the representation of node $v$ after aggregating the features of its neighboring nodes at the $k$ -th layer.
2) Maximum pooling: Maximum pooling is used to take the maximum value in the features of neighboring nodes. The formula for this is:
$$
{h}_{\widetilde{N}\left( v\right) }^{\left( k\right) } = \max \left( \left\{ {{h}_{u}^{\left( k - 1\right) },\forall u \in \widetilde{N}\left( v\right) }\right\} \right) \tag{3}
$$
3) LSTM aggregation: LSTM aggregation applies an LSTM network to the neighbor node features, with the formula:
$$
{h}_{\widetilde{N}\left( v\right) }^{\left( k\right) } = \operatorname{LSTM}\left( \left\{ {{h}_{u}^{\left( k - 1\right) },\forall u \in \widetilde{N}\left( v\right) }\right\} \right) \tag{4}
$$
For the node feature update, the algorithm combines the node's own features with the aggregated neighbor features and updates the node feature representation through a neural network. A common way of combining is a concatenation operation followed by a transformation through a fully connected layer. Its formula is:
$$
{h}_{v}^{\left( k\right) } = \sigma \left( {{W}^{\left( k\right) } \cdot \operatorname{concat}\left( {{h}_{v}^{\left( k - 1\right) },{h}_{\widetilde{N}\left( v\right) }^{\left( k\right) }}\right) }\right) \tag{5}
$$
where $\sigma$ denotes the activation function (e.g., ReLU), ${W}^{\left( k\right) }$ denotes the weight matrix of the $k$ -th layer, and ${h}_{v}^{\left( k\right) }$ denotes the feature representation of node $v$ in the $k$ -th layer.
In the specific process, the features are first initialized, and each node's feature can be its attribute vector ${x}_{v}$ . Then multilayer sampling and aggregation is performed: for the $k$ -th layer, each node $v$ randomly samples a fixed number $K$ of neighbors from its neighborhood to form the sampling set $\widetilde{N}\left( v\right)$ and aggregates the features of the neighboring nodes using the selected aggregation function (e.g., mean, maximum pooling, or LSTM) to obtain ${h}_{\widetilde{N}\left( v\right) }^{\left( k\right) }$ . The node $v$ 's own features are then concatenated with the aggregated neighbor features and nonlinearly transformed through the fully connected layer to obtain a new node feature representation ${h}_{v}^{\left( k\right) }$ . Finally, after multi-layer (usually 2 to 3 layers) sampling and aggregation operations, the final embedding representation ${h}_{v}$ of each node is generated. Through these steps, the GraphSAGE algorithm is able to efficiently process large-scale graph data and generate high-quality node embeddings via its sampling and aggregation operations.
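The sampling, aggregation, and update steps above (Eqs. (1), (2), and (5)) can be sketched in plain numpy. This is a minimal illustration under assumed shapes and a toy graph, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sage_layer(h, neighbors, W, K=2):
    """One GraphSAGE layer: neighbor sampling (Eq. 1), mean
    aggregation (Eq. 2), and concat-plus-linear ReLU update (Eq. 5)."""
    new_h = np.zeros((h.shape[0], W.shape[0]))
    for v, nbrs in neighbors.items():
        # Eq. (1): sample at most K neighbors without replacement
        sampled = rng.choice(nbrs, size=min(K, len(nbrs)), replace=False)
        # Eq. (2): mean of the sampled neighbors' features
        agg = h[sampled].mean(axis=0)
        # Eq. (5): ReLU(W . concat(h_v, h_N(v)))
        new_h[v] = np.maximum(0.0, W @ np.concatenate([h[v], agg]))
    return new_h

h = np.eye(3)                       # toy one-hot node features
neighbors = {0: [1, 2], 1: [0], 2: [0]}
W = np.ones((4, 6)) * 0.1           # weight matrix: out_dim x (2 * in_dim)
out = sage_layer(h, neighbors, W)
```

Stacking two or three such layers, as the text describes, simply feeds `out` back in as the next layer's `h` with a matching weight matrix.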
## B. E-Transformer-GraphSAGE Methods
The traditional GraphSAGE method mainly focuses on the analysis and utilization of node features for node classification, but is deficient in dealing with edge features. The primary objective of a NIDS is to detect and identify malicious traffic. In our study, we focus on the application of edge features and improve the GraphSAGE model by using the edge embedding method and introducing a Transformer layer.
1) E-GraphSAGE: In order to handle graph-structured data efficiently, we designed and implemented the GraphSAGE layer (SAGELayer). This layer updates the representation of each node by aggregating the features of the node's neighbors to capture the relationships between nodes in the graph. GraphSAGE accomplishes the updating of node representations through message passing and update steps, and employs the ReLU activation function to improve the model's nonlinear representation [17]. The main differences from the original GraphSAGE algorithm are the algorithmic inputs, the message-passing aggregation functions, and the outputs. In the SAGE layer, edge embedding is incorporated into the message passing to provide richer information. Unlike the traditional GraphSAGE module, the aggregated embedding of sampled neighboring edges is generated at the $k$ -th layer from edge features, using a mean aggregation function as shown in the following equation.
$$
{h}_{\widetilde{N}\left( v\right) }^{\left( k\right) } = \operatorname{mean}\left( \left\{ {{e}_{uv}^{\left( k - 1\right) },\forall u \in \widetilde{N}\left( v\right) ,{uv} \in \varepsilon }\right\} \right) \tag{6}
$$
where ${e}_{uv}^{\left( k - 1\right) }$ is the feature of the edge ${uv}$ at the $\left( k - 1\right)$ -th layer in the sampling neighborhood $\widetilde{N}\left( v\right)$ of node $v$ , and the set $\{ \forall u \in \widetilde{N}\left( v\right) ,{uv} \in \varepsilon \}$ represents the sampled edges within the neighborhood $\widetilde{N}\left( v\right)$ . The edge features of ${uv}$ at the $k$ -th layer are concatenated by the following equation, which represents the final result of the forward propagation phase.
$$
{h}_{uv}^{k} = \operatorname{CONCAT}\left( {{h}_{u}^{k},{h}_{v}^{k}}\right) ,{uv} \in \mathcal{E} \tag{7}
$$
In our study, we constructed a two-layer E-GraphSAGE model with each layer consisting of an E-SAGELayer.
Neighboring node features are aggregated to generate the embedded representation of the node, using a mean aggregation method in which the features of a node are the mean of the features of its neighboring nodes. The first E-SAGELayer in this model aggregates the input features to generate the first layer of node embeddings; the second layer takes the first layer of node embeddings as input and again performs aggregation to generate the final node embeddings. Through this multi-layer aggregation, we are able to capture more complex node characteristics and neighbor relationships. A Dropout operation is used to avoid overfitting. The advantage of stacking multiple GraphSAGE layers is the ability to capture more complex node relationships and form richer node representations, improving the performance of the model.
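A minimal numpy sketch of the edge-aware forward pass may help here. It follows Eqs. (6) and (7) under simplifying assumptions (no neighbor sampling, mean over all incident edges, a single layer); it is an illustration, not the authors' E-GraphSAGE code:

```python
import numpy as np

def e_sage_forward(node_h, edges, W):
    """Edge-aware layer sketch: Eq. (6) aggregates the features of edges
    incident to each node by their mean; Eq. (7) forms an edge embedding
    by concatenating the updated endpoint representations."""
    n = node_h.shape[0]
    agg = np.zeros((n, edges[0][2].shape[0]))
    count = np.zeros(n)
    for u, v, e_feat in edges:              # Eq. (6): sum, then divide for the mean
        agg[v] += e_feat
        count[v] += 1
    agg[count > 0] /= count[count > 0][:, None]
    # node update: ReLU(concat(h_v, aggregated edge features) . W)
    new_h = np.maximum(0.0, np.concatenate([node_h, agg], axis=1) @ W)
    # Eq. (7): edge embedding = concat of the two endpoint embeddings
    edge_emb = {(u, v): np.concatenate([new_h[u], new_h[v]]) for u, v, _ in edges}
    return new_h, edge_emb

node_h = np.ones((3, 2))                    # constant node features; flows carry the signal
edges = [(0, 1, np.array([2.0, 0.0])), (2, 1, np.array([0.0, 4.0]))]
W = np.ones((4, 3)) * 0.5
new_h, edge_emb = e_sage_forward(node_h, edges, W)
```

The per-edge embeddings in `edge_emb` are what a downstream classifier would score as benign or malicious flows.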
2) Transformer: As noted above, detecting and identifying malicious traffic in a NIDS aligns with the edge classification problem in network flow classification. Our study emphasizes the use of edge features and enhances the GraphSAGE model by incorporating the edge embedding method and introducing the Transformer layer technique.
The Transformer Encoder Layer (TEL) is the basic component of the Transformer model, which mainly consists of the MultiheadAttention mechanism, Feed-forward Neural Network (Linear Layer), and Normalization Layer (LayerNorm), and Dropout is applied between the layers to prevent overfitting. In the Transformer Encoder Layer, the inputs are node features (generated by the SAGE layer) and this layer does not explicitly process edge features. Its main function is to capture the dependencies between node features and global information through a multi-head attention mechanism along with a feed-forward neural network.
a) Multi-head attention: The self-attention mechanism allows the model to capture global dependencies by focusing on all other elements in a sequence while processing each element in the sequence. The multi-head self-attention mechanism improves the model's sensitivity to different features by performing multiple self-attention computations in parallel. The specific formula is as follows:
$$
\left\{ \begin{matrix} \operatorname{Attention}\left( {Q, K, V}\right) = \operatorname{softmax}\left( \frac{Q{K}^{T}}{\sqrt{{d}_{k}}}\right) V \\ \operatorname{MultiHead}\left( {Q, K, V}\right) = \operatorname{Concat}\left( {{\operatorname{head}}_{1},\cdots ,{\operatorname{head}}_{i},\cdots ,{\operatorname{head}}_{h}}\right) {W}_{O} \end{matrix}\right. \tag{8}
$$
where $\operatorname{Attention}\left( {Q, K, V}\right)$ is the single-head self-attention computation, $Q$ denotes the query matrix, $K$ denotes the key matrix, $V$ denotes the value matrix, and ${d}_{k}$ denotes the key dimension. $\operatorname{MultiHead}\left( {Q, K, V}\right)$ concatenates the results of the $h$ heads and obtains the final output by a linear transformation, where ${\text{head}}_{i} = \operatorname{Attention}\left( {{Q}_{i},{K}_{i},{V}_{i}}\right)$ , ${W}_{O} \in {\mathbb{R}}^{h{d}_{k} \times {d}_{\text{model }}}$ is the output weight matrix, and ${d}_{\text{model }}$ is the input feature dimension.
Specifically, the MultiheadAttention mechanism captures the global dependencies of the input data by processing the input data in parallel through multiple Attention Heads. Each Attention Head performs self-attention computation independently, which is able to focus on different features in the input data and enhance the sensitivity of the model to multiple features. The multi-head attention mechanism's output is linked to the feed-forward neural network via a linear transformation.
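As a minimal sketch, the computation of Eq. (8) can be written in plain NumPy. The projection matrices here are random stand-ins for the learned weights, and the sizes are illustrative, not the paper's hyperparameters:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, h):
    """Multi-head self-attention per Eq. (8).
    X: (n, d_model) inputs; Wq/Wk/Wv: (d_model, h*d_k); Wo: (h*d_k, d_model)."""
    d_k = Wq.shape[1] // h
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for i in range(h):
        s = slice(i * d_k, (i + 1) * d_k)
        A = softmax(Q[:, s] @ K[:, s].T / np.sqrt(d_k))  # attention weights
        heads.append(A @ V[:, s])                         # head_i
    return np.concatenate(heads, axis=1) @ Wo             # Concat(...) W_O

rng = np.random.default_rng(0)
n, d_model, h = 5, 16, 4
W = [rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(4)]
out = multi_head_attention(rng.normal(size=(n, d_model)), *W, h)
print(out.shape)  # (5, 16): output dimension matches the input
```

In practice this corresponds to PyTorch's `nn.MultiheadAttention`; the loop over heads is shown explicitly here for clarity.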

b) Feed-forward neural network: The feed-forward network (FFN) is a fully connected network applied independently at each position in each Transformer encoder layer. The specific formula is as follows:

$$
\operatorname{FFN}\left( x\right) = \max \left( {0, x{W}_{1} + {b}_{1}}\right) {W}_{2} + {b}_{2} \tag{9}
$$

where ${W}_{1} \in {\mathbb{R}}^{{d}_{\text{model}} \times {d}_{ff}}$, ${W}_{2} \in {\mathbb{R}}^{{d}_{ff} \times {d}_{\text{model}}}$, ${b}_{1} \in {\mathbb{R}}^{{d}_{ff}}$, and ${b}_{2} \in {\mathbb{R}}^{{d}_{\text{model}}}$ are learnable parameters, and ${d}_{ff}$ is the hidden dimension of the FFN.

The feed-forward network used in this paper consists of two fully connected layers with a ReLU activation and Dropout applied between them. The first fully connected layer maps the input from the embedding dimension (embed_dim) to a higher hidden dimension (ff_hidden_dim), the ReLU activation introduces a nonlinear transformation, and Dropout is used to prevent overfitting. The second fully connected layer maps the hidden dimension back to the embedding dimension, keeping the dimensionality of the inputs and outputs the same.
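Eq. (9) translates directly to code. The sketch below uses illustrative sizes for embed_dim and ff_hidden_dim and omits Dropout (which is only active during training):

```python
import numpy as np

def ffn(x, W1, b1, W2, b2):
    """Position-wise feed-forward network of Eq. (9):
    FFN(x) = max(0, x W1 + b1) W2 + b2."""
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

embed_dim, ff_hidden_dim = 16, 64  # illustrative, not the paper's values
rng = np.random.default_rng(1)
W1 = rng.normal(size=(embed_dim, ff_hidden_dim)) * 0.1
b1 = np.zeros(ff_hidden_dim)
W2 = rng.normal(size=(ff_hidden_dim, embed_dim)) * 0.1
b2 = np.zeros(embed_dim)

x = rng.normal(size=(5, embed_dim))
y = ffn(x, W1, b1, W2, b2)
print(y.shape)  # (5, 16): input and output dimensions match
```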

c) Normalization layer: A normalization layer follows each sublayer, both the self-attention and the feed-forward network, to regularize and stabilize the training process. The specific formula is as follows:

$$
\text{LayerNorm}\left( x\right) = \frac{x - \mu }{\sigma + \varepsilon } \cdot \gamma + \beta \tag{10}
$$

where $\mu$ and $\sigma$ are the mean and standard deviation of the inputs, $\gamma$ and $\beta$ are learnable scaling and offset parameters, and $\varepsilon$ is a small constant.
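A minimal sketch of Eq. (10), normalizing over the feature axis (with $\gamma = 1$, $\beta = 0$ the output of each row has roughly zero mean and unit standard deviation):

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Layer normalization per Eq. (10), applied over the feature axis."""
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sigma + eps) * gamma + beta

rng = np.random.default_rng(2)
x = rng.normal(loc=3.0, scale=2.0, size=(4, 16))  # shifted, scaled inputs
y = layer_norm(x, gamma=np.ones(16), beta=np.zeros(16))
print(y.mean(axis=-1))  # each row is ~0 after normalization
```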

In each encoder layer, layer normalization and a residual connection are applied around both the multi-head self-attention mechanism and the feed-forward network. Layer normalization helps stabilize and speed up training, while the residual connection mitigates vanishing gradients in deep networks.

d) Dropout: Dropout randomly discards a fraction of neurons during training to prevent overfitting. By stacking multiple such encoder layers, the Transformer model can effectively capture the global dependencies of the input data and enhance its sensitivity to different features. The multi-head self-attention mechanism in each layer lets the model attend to different features of the input, and the feed-forward network further processes them. Through this layer-by-layer processing, the model captures more complex and deeper feature relationships in the input data, improving its performance across tasks.
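The residual-plus-normalization wrapper around each sublayer can be sketched as follows (a post-norm arrangement, LayerNorm(x + f(x)); the sublayer `f` here is a random linear stand-in for the attention or FFN sublayer):

```python
import numpy as np

def sublayer(x, f, gamma, beta, eps=1e-5):
    """Residual connection followed by layer normalization:
    LayerNorm(x + f(x)). f is the attention or FFN sublayer."""
    y = x + f(x)  # residual connection
    mu = y.mean(axis=-1, keepdims=True)
    sd = y.std(axis=-1, keepdims=True)
    return (y - mu) / (sd + eps) * gamma + beta

rng = np.random.default_rng(3)
d = 8
x = rng.normal(size=(5, d))
stub = lambda z: z @ (0.1 * rng.normal(size=(d, d)))  # stand-in sublayer
out = sublayer(x, stub, np.ones(d), np.zeros(d))
print(out.shape)  # (5, 8): the wrapper preserves the feature dimension
```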

## C. NIDS

Fig. 3 shows how the network flow data is constructed as graph data and the propagation process from the source node to the destination node. Fig. 4 gives an overview of our E-Transformer-GraphSAGE NIDS. First, a graph is created from the network flow data. Next, the generated network graph is fed into the E-Transformer-GraphSAGE model for supervised training. Edge embeddings are then used to classify network flows as benign or malicious. The following subsections explain these steps in detail.

Netflow Data

<table><tr><td>IPV4_SRC_ADDR</td><td>L4_SRC_PORT</td><td>IPV4_DST_ADDR</td><td>L4_DST_PORT</td><td>PROTOCOL</td><td>L7_PROTO</td><td>IN_BYTES</td><td>OUT_BYTES</td><td>IN_PKTS</td><td>OUT_PKTS</td><td>TCP_FLAGS</td><td>FLOW_DURATION_MILLISECONDS</td><td>Label</td><td>Attack</td></tr><tr><td>192.168.1.70</td><td>46800</td><td>239.255.255.250</td><td>15600</td><td>17</td><td>0</td><td>63</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>Benign</td></tr><tr><td>192.168.1.79</td><td>41361</td><td>192.168.1.1</td><td>15600</td><td>17</td><td>0</td><td>63</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>Benign</td></tr><tr><td>192.168.1.1</td><td>60641</td><td>192.168.1.31</td><td>53</td><td>17</td><td>5</td><td>100</td><td>100</td><td>2</td><td>2</td><td>0</td><td>2</td><td>1</td><td>Injection</td></tr><tr><td>192.168.1.1</td><td>43803</td><td>192.168.1.152</td><td>53</td><td>17</td><td>5</td><td>100</td><td>100</td><td>2</td><td>2</td><td>0</td><td>7</td><td>1</td><td>Scanning</td></tr><tr><td>192.168.1.31</td><td>63898</td><td>192.168.1.36</td><td>5355</td><td>17</td><td>154</td><td>122</td><td>0</td><td>2</td><td>0</td><td>0</td><td>0</td><td>0</td><td>Benign</td></tr><tr><td>192.168.1.36</td><td>53153</td><td>192.168.1.7</td><td>5355</td><td>17</td><td>154</td><td>122</td><td>0</td><td>2</td><td>0</td><td>0</td><td>0</td><td>0</td><td>Benign</td></tr><tr><td>192.168.1.36</td><td>44248</td><td>192.168.1.152</td><td>80</td><td>6</td><td>7</td><td>526</td><td>2816</td><td>6</td><td>6</td><td>27</td><td>1021</td><td>1</td><td>XSS</td></tr><tr><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td></tr></table>

Fig. 3. Conversion of network flow data to graph data

Fig. 4. E-Transformer-GraphSAGE-based Network Intrusion Detection System

1) Graph data structure: NetFlow is a commonly used format for logging network communications in production environments and is the predominant format in Network Intrusion Detection System (NIDS) environments. A flow record typically includes fields that identify the communication's source and destination, along with additional information such as packet and byte counts and flow duration. Graph structures naturally model this type of data. In this study, we use the source IP address, source port, destination IP address, and destination port: the first two fields form a tuple identifying the source node, and the last two form the destination node. The remaining fields are used as features of the corresponding edge, leaving the graph nodes featureless; we therefore assign an all-ones feature vector to every node.
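The construction above can be sketched in plain Python: nodes are (IP, port) endpoint tuples, edges carry the remaining NetFlow fields, and every node receives an all-ones feature vector. The flow records below are illustrative; a real pipeline would hand these lists to DGL to build the graph:

```python
# Illustrative flow records (fields follow the NetFlow table above).
flows = [
    {"src": ("192.168.1.1", 60641), "dst": ("192.168.1.31", 53),
     "feats": [17, 5, 100, 100, 2, 2, 0, 2], "label": 1},
    {"src": ("192.168.1.31", 63898), "dst": ("192.168.1.36", 5355),
     "feats": [17, 154, 122, 0, 2, 0, 0, 0], "label": 0},
]

node_ids = {}
def nid(endpoint):
    # Assign a consecutive integer id to each distinct (IP, port) endpoint.
    return node_ids.setdefault(endpoint, len(node_ids))

edges = [(nid(f["src"]), nid(f["dst"])) for f in flows]
edge_feats = [f["feats"] for f in flows]          # edge features
edge_labels = [f["label"] for f in flows]         # benign / attack labels
node_feats = [[1.0] * len(edge_feats[0]) for _ in node_ids]  # all-ones nodes

print(len(node_ids), len(edges))  # 4 2
```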

2) E-Transformer-GraphSAGE: Our proposed model combines the sensitivity of GNNs to local structure with the Transformer's ability to capture global dependencies. The graph data is first processed by E-GraphSAGE to obtain node representations, and the Transformer is then used to capture global dependencies. During training, we use a weighted cross-entropy loss to address class imbalance and the Adam optimizer for parameter updates. The model's output is compared with the labels from the NIDS dataset, and its trainable parameters are adjusted in the backpropagation phase. After tuning the parameters during training, the model is evaluated by classifying unseen test samples: the test flow records are converted into graph data structures, edge embeddings are generated by the trained E-Transformer-GraphSAGE layers, and these embeddings are transformed into class probabilities via a Softmax layer. The predicted class probabilities are compared with the actual class labels to compute the classification performance metrics.
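A minimal sketch of the weighted cross-entropy used to counter class imbalance, with inverse-frequency class weights (a common heuristic; the paper does not specify its exact weighting scheme):

```python
import numpy as np

def class_weights(labels, n_classes):
    """Inverse-frequency weights: rare classes receive larger weights."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return counts.sum() / (n_classes * np.maximum(counts, 1.0))

def weighted_cross_entropy(logits, labels, w):
    """Mean weighted cross-entropy over edge predictions."""
    z = logits - logits.max(axis=1, keepdims=True)       # stable log-softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    per_sample = -w[labels] * logp[np.arange(len(labels)), labels]
    return per_sample.mean()

labels = np.array([0, 0, 0, 0, 1])   # imbalanced: 4 benign, 1 attack
w = class_weights(labels, 2)          # [0.625, 2.5]
logits = np.zeros((5, 2))             # uninformative classifier
loss = weighted_cross_entropy(logits, labels, w)
print(w, loss)
```

With uniform logits the weighted loss equals log 2, since the weights average to 1 over the samples; in training, PyTorch's `CrossEntropyLoss(weight=...)` plays this role.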

## IV. EXPERIMENT

In this section, we perform binary and multi-class classification comparisons to validate the effectiveness of our algorithm.

## A. Experiment Setting

We implemented the model with Python, PyTorch, and DGL. Experiments were run on a server with an Intel(R) Xeon(R) Gold 6242 CPU @ 2.80 GHz (32 cores), a single A100 GPU, and 192 GB of RAM.

## B. Datasets

To evaluate our proposed GNN-based NIDS, we use three publicly available datasets that include various labeled attack flows and benign network flows. The first dataset is BoT-IoT, which is widely used for evaluating ML-based network intrusion detection systems in the Internet of Things and has a proprietary format and feature set. The second and third datasets, NF-BoT-IoT and NF-ToN-IoT, are presented in NetFlow format.

1) BoT-IoT dataset: The BoT-IoT dataset ${}^{\left\lbrack {18}\right\rbrack }$ was generated by the Cyber Range Lab at the Australian Centre for Cyber Security (ACCS) to evaluate the performance of cyber security tools. It simulates real network environments containing normal traffic and multiple types of malicious traffic, such as DDoS, DoS, reconnaissance, and data theft, for Intrusion Detection System (IDS) training and testing.

2) NF-BoT-IoT dataset: The NF-BoT-IoT dataset ${}^{\left\lbrack {19}\right\rbrack }$ is a NetFlow feature dataset extracted from the BoT-IoT dataset to provide a more concise representation of network traffic by summarizing IP traffic flows. It includes information such as source and destination IP addresses, ports, packet counts, byte counts, and timestamps, which supports large-scale data analysis and real-time intrusion detection.

3) NF-ToN-IoT dataset: The NF-ToN-IoT dataset is a NetFlow feature dataset generated from the ToN-IoT dataset and contains telemetry and operational network data from Internet of Things (IoT) devices. It provides detailed traffic records that help detect network intrusions and understand traffic patterns in IoT environments, making it suitable for IoT security research.

## C. Results Of The Experiment

To assess the effectiveness of the proposed neural network model, we employed the standard metrics outlined in Table I. Here, TP stands for true positives, TN for true negatives, FP for false positives, and FN for false negatives.

TABLE I. EVALUATION INDICATORS

<table><tr><td>Accuracy</td><td>$\frac{TP + TN}{TP + FP + TN + FN} \times 100\%$</td></tr><tr><td>Precision</td><td>$\frac{TP}{TP + FP} \times 100\%$</td></tr><tr><td>FAR</td><td>$\frac{FP}{FP + TN} \times 100\%$</td></tr><tr><td>Recall</td><td>$\frac{TP}{TP + FN} \times 100\%$</td></tr><tr><td>F1-Score</td><td>$2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \times 100\%$</td></tr></table>
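The formulas in Table I translate directly to code. The confusion-matrix counts below are illustrative, not taken from the paper's experiments:

```python
def metrics(tp, tn, fp, fn):
    """Evaluation indicators from Table I, from confusion-matrix counts."""
    accuracy  = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    far       = fp / (fp + tn)  # false alarm rate
    f1        = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, far, f1

# Illustrative counts for a binary detector.
acc, prec, rec, far, f1 = metrics(tp=90, tn=95, fp=5, fn=10)
print(round(acc, 3), round(f1, 3))  # 0.925 0.923
```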

1) Binary classification results: The datasets employed in our experiments carry two layers of labels for each data instance. The first layer indicates whether the network flow is benign or malicious, while the second specifies the attack type. For the binary classification task we use the first layer of labels, and for the multi-class classification task the second layer ${}^{\left\lbrack {20},{21}\right\rbrack }$. The findings across the three datasets (BoT-IoT, NF-BoT-IoT, and NF-ToN-IoT) demonstrate that our method performs exceptionally well in binary classification, a key requirement for successful network intrusion detection.

TABLE II. BINARY CLASSIFICATION RESULTS

<table><tr><td>Dataset</td><td>Accuracy</td><td>Precision</td><td>F1-Score</td><td>Recall</td><td>FAR</td></tr><tr><td>BoT-IoT</td><td>99.99%</td><td>1.00</td><td>1.00</td><td>99.99%</td><td>0.00%</td></tr><tr><td>NF-BoT-IoT</td><td>94.52%</td><td>1.00</td><td>0.99</td><td>97.32%</td><td>0.24%</td></tr><tr><td>NF-ToN-IoT</td><td>99.93%</td><td>1.00</td><td>1.00</td><td>99.84%</td><td>0.03%</td></tr></table>

Table II summarizes our model's performance metrics: accuracy, precision, F1-Score, recall, and False Alarm Rate (FAR) on the three datasets.

In cybersecurity, datasets frequently exhibit an imbalance, with far fewer attack samples than normal traffic. The F1-Score is particularly important in such scenarios because it balances precision and recall, providing a more accurate assessment of the model's ability to differentiate between benign and malicious traffic than accuracy alone.

Given the importance of precise intrusion detection, particularly in practical applications where the cost of missed detections is high, we prioritize the F1-Score as a more reliable indicator of our model's performance. In the following sections, we compare our F1-Score with those from other studies to demonstrate how effectively our model handles imbalanced datasets while ensuring dependable intrusion detection.

TABLE III. COMPARISON OF BINARY-CLASSIFICATION ALGORITHMS F1

<table><tr><td>Method</td><td>Dataset</td><td>F1</td></tr><tr><td>Ours</td><td>BoT-IoT</td><td>1.00</td></tr><tr><td>CatBoost</td><td>BoT-IoT</td><td>0.99</td></tr><tr><td>Ours</td><td>NF-BoT-IoT</td><td>0.99</td></tr><tr><td>Extra Tree Classifier</td><td>NF-BoT-IoT</td><td>0.97</td></tr><tr><td>TS-IDS</td><td>NF-BoT-IoT</td><td>0.95</td></tr><tr><td>Ours</td><td>NF-ToN-IoT</td><td>1.00</td></tr><tr><td>Extra Tree Classifier</td><td>NF-ToN-IoT</td><td>1.00</td></tr></table>

Table III compares the F1-Score of our method with other algorithms ${}^{\left\lbrack {21},{22}\right\rbrack }$. The results show that our method achieves F1-Scores similar to or better than those of existing approaches, indicating that it performs effectively in both traffic classification and binary network intrusion detection.

The comparable or superior F1-Scores demonstrate that our model is not only accurate in identifying malicious network traffic but also maintains balanced performance across different datasets. This balance is crucial in practical applications, where high precision and recall are necessary to minimize false positives and ensure reliable intrusion detection.

In summary, the data in Table III confirm that our method is competitive with, and in some cases superior to, other leading algorithms, highlighting its effectiveness in traffic classification and network intrusion detection tasks.

2) Multi-class classification results: Table IV presents the multi-class classification results of our method, where the classifier must distinguish between the various attack types. The multi-class problem is harder than binary classification, as the model must identify not only whether an attack is present but also which type of attack it is. The results in Table IV show strong performance, particularly on the BoT-IoT dataset, which indicates the model's capability to effectively differentiate between the distinct attack types within this dataset.

Table V provides further insight by showing the recall and F1-Score values for the different attacks in the multi-class task on the NF-ToN-IoT dataset. These metrics are crucial for understanding the model's ability to correctly identify each attack type. High recall values show that the model identifies the majority of true positive instances for most attack types, minimizing the risk of undetected threats. Similarly, strong F1-Score values indicate a good balance between precision and recall, reinforcing the model's robustness across diverse attack scenarios.

TABLE IV. COMPARISON OF BOT-IOT AND NF-BOT-IOT MULTI-CLASSIFICATION ALGORITHMS F1

<table><tr><td rowspan="2">Class Name</td><td colspan="2">BoT-IoT</td><td>NF-BoT-IoT</td></tr><tr><td>Recall</td><td>F1-Score</td><td>Recall</td></tr><tr><td>Benign</td><td>100.00%</td><td>0.99</td><td>100.00%</td></tr><tr><td>DDoS</td><td>99.99%</td><td>1.00</td><td>99.99%</td></tr><tr><td>DoS</td><td>99.99%</td><td>1.00</td><td>99.99%</td></tr><tr><td>Reconnaissance</td><td>99.99%</td><td>1.00</td><td>99.99%</td></tr><tr><td>Theft</td><td>94.52%</td><td>0.98</td><td>94.52%</td></tr><tr><td>Weighted Average</td><td>99.99%</td><td>1.00</td><td>99.99%</td></tr></table>

TABLE V. COMPARISON OF NF-TON-IOT MULTI-CLASSIFICATION ALGORITHMS

<table><tr><td/><td colspan="2">NF-ToN-IoT</td></tr><tr><td>Class Name</td><td>Recall</td><td>F1-Score</td></tr><tr><td>Benign</td><td>98.33%</td><td>0.99</td></tr><tr><td>Backdoor</td><td>98.46%</td><td>0.99</td></tr><tr><td>DDoS</td><td>57.47%</td><td>0.73</td></tr><tr><td>DoS</td><td>99.72%</td><td>0.46</td></tr><tr><td>Injection</td><td>30.59%</td><td>0.46</td></tr><tr><td>MITM</td><td>55.02%</td><td>0.25</td></tr><tr><td>Ransomware</td><td>80.28%</td><td>0.42</td></tr><tr><td>Password</td><td>100.00%</td><td>0.99</td></tr><tr><td>Scanning</td><td>25.92%</td><td>0.15</td></tr><tr><td>XSS</td><td>40.70%</td><td>0.28</td></tr><tr><td>Weighted Average</td><td>68.65%</td><td>0.67</td></tr></table>

However, the confusion matrices shown in Figures 5 and 6 for the NF-BoT-IoT and NF-ToN-IoT datasets reveal some nuances in the model's performance. While the recognition rate is extremely high for several attack types, the model struggles with accurately classifying DDoS attacks. This issue likely stems from the fact that during model training, DDoS and DoS attacks shared similar features, leading to a significant overlap in their learned representations. As a result, the model occasionally misclassifies DDoS attacks as DoS attacks, which suggests that the feature extraction process may need refinement to better distinguish between these two attack types.

The observed difficulty in separating DDoS from DoS attacks highlights a potential area for improvement. One possible solution could involve enhancing the feature engineering process to capture more distinctive characteristics of these attack types. Additionally, adjusting the training process to emphasize the differences between DDoS and DoS attacks, perhaps through the use of more advanced techniques like adversarial training or ensemble learning, could further improve classification accuracy.

In summary, while our model excels in the multi-classification of several attack types, especially within the BoT-IoT dataset, there remains room for improvement in the classification of closely related attacks such as DDoS and DoS. Addressing these challenges will be crucial for further enhancing the model's overall reliability and effectiveness in real-world network security applications.

Fig. 5. NF-BoT-IoT multi-classification results

Fig. 6. NF-ToN-IoT multi-classification results

As with binary classification, we compared the performance of our model's Network Intrusion Detection System (NIDS) with other classifiers ${}^{\left\lbrack {23},{24}\right\rbrack }$. Table VI presents the results of this comparison for the multi-classification task.

The findings reveal that our algorithm consistently achieves higher average F1-Score values than all existing methods. This is particularly important in multi-classification, where the ability to accurately distinguish between multiple types of network attacks is crucial. The superior F1-Score suggests that our model not only identifies attacks effectively but also excels in correctly classifying the different types of attacks, a challenge where other classifiers often fall short.

These results underscore the effectiveness of our approach in handling the complexities of multi-class network intrusion detection, proving that our model outperforms current leading methods in this critical area.

TABLE VI. COMPARISON OF MULTI-CLASSIFICATION ALGORITHMS F1

<table><tr><td>Method</td><td>Dataset</td><td>W-F1</td></tr><tr><td>Ours</td><td>BoT-IoT</td><td>1.00</td></tr><tr><td>CatBoost</td><td>BoT-IoT</td><td>0.99</td></tr><tr><td>Ours</td><td>NF-BoT-IoT</td><td>0.88</td></tr><tr><td>Extra Tree Classifier</td><td>NF-BoT-IoT</td><td>0.77</td></tr><tr><td>TS-IDS</td><td>NF-BoT-IoT</td><td>0.83</td></tr><tr><td>Ours</td><td>NF-ToN-IoT</td><td>0.67</td></tr><tr><td>Extra Tree Classifier</td><td>NF-ToN-IoT</td><td>0.60</td></tr></table>

Overall, our method demonstrates superior performance compared to other Network Intrusion Detection System (NIDS) approaches across both binary and multi-classification tasks, as evidenced by the results from the three datasets utilized in our study. Our model not only achieves higher accuracy and F1-Scores but also shows remarkable robustness and generalizability. This indicates that it is well-equipped to handle various types of network traffic and detect both known and emerging threats effectively.

The model's ability to consistently outperform other methods highlights its advanced capabilities in accurately identifying and classifying different types of network attacks, whether distinguishing between benign and malicious traffic or correctly categorizing specific attack types. This robust performance across diverse datasets suggests that our method is adaptable to different network environments and can maintain its effectiveness even when faced with the complexities and variabilities of real-world data.

## V. CONCLUSION AND FUTURE WORK

In this paper, we have introduced a novel GNN-based network intrusion detection method called E-T-GraphSAGE, which enhances attack flow detection by capturing edge features and topology patterns within network flow graphs. Our focus has been on applying E-T-GraphSAGE to detect malicious network flows in the context of network intrusion detection. Experimental evaluations show that our model performs very well on the three NIDS benchmark datasets and generally outperforms currently available network intrusion detection methods. In the future, we plan to build unsupervised graph neural network intrusion detection models, and to make the E-T-GraphSAGE model more lightweight so that it can be deployed on edge network servers, especially small and medium-sized network devices, for more timely network intrusion detection at the edge.

## ACKNOWLEDGMENT

This work is supported by the National Natural Science Foundation of China under Grant 62101299.

## REFERENCES

[1] Chaabouni N, Mosbah M, Zemmari A, et al. Network intrusion detection for IoT security based on learning techniques[J]. IEEE Communications Surveys & Tutorials, 2019, 21(3): 2671-2701.

[2] Naeem H. Analysis of network security in IoT-based cloud computing using machine learning[J]. International Journal for Electronic Crime Investigation, 2023, 7(2).

[3] Deng X, Zhu J, Pei X, et al. Flow topology-based graph convolutional network for intrusion detection in label-limited IoT networks[J]. IEEE Transactions on Network and Service Management, 2022, 20(1): 684-696.

[4] Zhong X, Wan G. Six-GraphSecurity: industrial Internet intrusion detection based on graph neural network[C]//2023 IEEE 7th Information Technology and Mechatronics Engineering Conference (ITOEC). IEEE, 2023, 7: 1340-1344.

[5] Sukhbaatar S, Grave E, Bojanowski P, et al. Adaptive attention span in transformers[J]. arXiv preprint arXiv:1905.07799, 2019.

[6] Alowaidi M. Modified intrusion detection tree with hybrid deep learning framework based cyber security intrusion detection model[J]. International Journal of Advanced Computer Science and Applications, 2022, 13(10).

[7] Gupta N, Jindal V, Bedi P. LIO-IDS: handling class imbalance using LSTM and improved one-vs-one technique in intrusion detection system[J]. Computer Networks, 2021, 192: 108076.

[8] Kumar P J, Neduncheliyan S, Adnan M M, et al. Anomaly-based intrusion detection system using bidirectional long short-term memory for Internet of Things[C]//2024 Third International Conference on Distributed Computing and Electrical Circuits and Electronics (ICDCECE). IEEE, 2024: 1-4.

[9] Suárez-Varela J, Almasan P, Ferriol-Galmés M, et al. Graph neural networks for communication networks: context, use cases and opportunities[J]. IEEE Network, 2022, 37(3): 146-153.

[10] Hnamte V, Hussain J. Network intrusion detection using deep convolution neural network[C]//2023 4th International Conference for Emerging Technology (INCET). IEEE, 2023: 1-6.

[11] Kisanga P, Woungang I, Traore I, et al. Network anomaly detection using a graph neural network[C]//2023 International Conference on Computing, Networking and Communications (ICNC). IEEE, 2023: 61-65.

[12] Ouyang L, Zhang Y, Wang Y. Unified graph embedding-based anomalous edge detection[C]//2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020: 1-8.

[13] Sun Y, Zhu D, Wang Y, et al. GTC: GNN-Transformer co-contrastive learning for self-supervised heterogeneous graph representation[J]. arXiv preprint arXiv:2403.15520, 2024.

[14] Nguyen D Q, Nguyen T D, Phung D. Universal graph Transformer self-attention networks[C]//Companion Proceedings of the Web Conference 2022 (WWW '22 Companion). ACM, 2022.

[15] Zhou J, Cui G, Hu S, et al. Graph neural networks: a review of methods and applications[J]. AI Open, 2020, 1: 57-81.

[16] Hamilton W, Ying Z, Leskovec J. Inductive representation learning on large graphs[J]. Advances in Neural Information Processing Systems, 2017, 30.

[17] Lo W W, Layeghy S, Sarhan M, et al. E-GraphSAGE: a graph neural network based intrusion detection system for IoT[C]//NOMS 2022-2022 IEEE/IFIP Network Operations and Management Symposium. IEEE, 2022: 1-9.

[18] Koroniotis N, Moustafa N, Sitnikova E, et al. Towards the development of realistic botnet dataset in the internet of things for network forensic analytics: Bot-IoT dataset[J]. Future Generation Computer Systems, 2019, 100: 779-796.

[19] Sarhan M, Layeghy S, Moustafa N, et al. NetFlow datasets for machine learning-based network intrusion detection systems[C]//Big Data Technologies and Applications: 10th EAI International Conference, BDTA 2020, and 13th EAI International Conference on Wireless Internet, WiCON 2020. Springer International Publishing, 2021: 117-135.

[20] Sarhan M, Layeghy S, Portmann M. Evaluating standard feature sets towards increased generalisability and explainability of ML-based network intrusion detection[J]. Big Data Research, 2022, 30: 100359.

[21] Tanha J, Abdi Y, Samadi N, et al. Boosting methods for multi-class imbalanced data classification: an experimental review[J]. Journal of Big Data, 2020, 7: 1-47.

[22] Lawal M A, Shaikh R A, Hassan S R. An anomaly mitigation framework for IoT using fog computing[J]. Electronics, 2020, 9(10): 1565.

[23] Churcher A, Ullah R, Ahmad J, et al. An experimental analysis of attack classification using machine learning in IoT networks[J]. Sensors, 2021, 21(2): 446.

[24] Nguyen H, Kashef R. TS-IDS: traffic-aware self-supervised learning for IoT network intrusion detection[J]. Knowledge-Based Systems, 2023, 279: 110966.
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/0a7OXKwmw9/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,445 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
§ A HYBRID APPROACH TO NETWORK INTRUSION DETECTION BASED ON GRAPH NEURAL NETWORKS AND TRANSFORMER ARCHITECTURES

${1}^{\text{st}}$ Hongrun Zhang

College of Computer Technology and Applications

Qinghai University

Qinghai, China

ys220854040277@qhu.edu.cn

${2}^{\text{nd}}$ Tengfei Cao

College of Computer Technology and Applications

Qinghai University

Qinghai, China

caotf@qhu.edu.cn

${Abstract}$ - In this paper, we propose a Network Intrusion Detection System (NIDS) model named E-T-GraphSAGE (ETG), which fuses Graph Neural Network (GNN) and Transformer techniques. With the widespread adoption of the Internet of Things (IoT) and cloud computing, network structures have become complex and vulnerable, and the efficacy of traditional intrusion detection systems is limited in the context of novel and unconventional cyber-attacks. This paper proposes a novel approach to address this challenge. A GNN is used to capture the complex relationships between network nodes and edges, analyze network traffic graphs, and identify anomalous behaviors. By introducing the Transformer, the model enhances its ability to handle long-range dependencies in network flow data and to understand network dynamics at a macro level. The ETG model optimizes edge features through the self-attention mechanism to exploit the potential of network flow data and improve the accuracy of intrusion detection. Experimental results show that the model outperforms existing techniques on key performance metrics. Tests on several standard datasets (BoT-IoT, NF-BoT-IoT, NF-ToN-IoT) validate the broad applicability and robustness of the ETG model, especially in complex network environments.

Keywords—GNN, GraphSAGE, Transformer, NIDS

§ I. INTRODUCTION
With the widespread adoption of the Internet of Things (IoT) and cloud computing, the structure of network systems is becoming more complex, and the types and numbers of devices are increasing dramatically. This environment provides more vulnerabilities and points of entry for cyber attackers, leaving traditional network defense systems facing serious challenges ${}^{\left\lbrack 1\right\rbrack }$ . Modern network attacks are varied, including distributed denial-of-service (DDoS) attacks, malware propagation, and data breaches; they are also increasingly subtle and adaptable, frequently targeting multiple layers of the network and various nodes. In addition, with the rapid development of attack techniques, new and unknown zero-day vulnerability attacks appear frequently, and these attacks can easily bypass signature-based intrusion detection systems ${}^{\left\lbrack 2\right\rbrack }$ . Therefore, there is a need to develop new detection techniques that not only recognize known attack patterns but can also predict and adapt to unknown threats.

To overcome these limitations, recent research has increasingly focused on leveraging machine learning and deep learning techniques. Among these, Transformer architectures have gained attention for their self-attention mechanism, which effectively captures long-range dependencies in sequential data. Originally developed for natural language processing, Transformers have been successfully adapted for cybersecurity applications, offering the ability to analyze complex interdependencies within network traffic.

Graph neural networks (GNNs), known for their ability to handle graph-structured data, offer significant potential in cybersecurity applications. By capturing the complex relationships between nodes (e.g., IP addresses or devices) and edges (i.e., data transmissions or sessions) in a network, GNNs are able to efficiently map the overall pattern of network behavior. This capability makes GNNs particularly suitable for identifying and analyzing complex network intrusions that are difficult to detect through conventional detection means ${}^{\left\lbrack 3\right\rbrack }$ . GNNs can analyze network traffic graphs by representing hosts or servers as nodes and their communications as edges. By learning the normal and abnormal characteristics of these communication patterns, a GNN is able to identify anomalous behavior in the network, such as unauthorized data access or abnormal data traffic. In addition, a key advantage of GNNs is their ability to integrate data from multiple sources and extract deep network characteristics, which is particularly important for detecting advanced persistent threats (APTs) and multi-stage attacks.

GNNs not only enhance the detection of known threats but, more importantly, provide a mechanism to understand and predict new or variant attack behaviors that are difficult to identify with traditional methods. Therefore, introducing GNNs into network security systems, especially network intrusion detection systems, will greatly enhance the system's ability to defend against complex network threats ${}^{\left\lbrack 4\right\rbrack }$ .

This research aims to develop an enhanced Network Intrusion Detection System (NIDS) by integrating Graph Neural Networks (GNNs) with Transformer architectures. The goal is to improve the efficiency and accuracy of detecting complex and previously unknown attack patterns by leveraging the Transformer's ability to capture long-range dependencies in network traffic. This integration seeks to enhance the model's capability to analyze network flows on both local and global scales, improving overall performance in detecting sophisticated cyber threats.

The proposed study will use a hybrid approach, combining GNNs and Transformers to analyze network traffic. GNNs will be employed to construct graph representations of network entities and interactions, while the Transformer's self-attention mechanism will capture long-range dependencies and global patterns ${}^{\left\lbrack 5\right\rbrack }$ . This integrated model aims to enhance understanding of network dynamics and improve detection and prediction of both known and emerging threats. The model's effectiveness will be evaluated through experiments on benchmark datasets, comparing its performance with existing intrusion detection systems.
Fig. 1. Network flow data graph structuring.

As shown in Fig. 1, we use both the GNN and the Transformer to encode the raw flow data successively to obtain the desired graph data structure, which is input to the model for training.

(1) The core contribution of this research is the development of a NIDS model that combines a GNN with a Transformer. The edge features optimized by the self-attention mechanism fully exploit the potential of network flow data and significantly improve the detection accuracy of network intrusions.

(2) Tests on multiple standard datasets show that our model outperforms existing techniques on key performance metrics such as accuracy, recall, and F1 score.

The rest of the paper is organized as follows. Section II reviews the development of NIDS and research related to GNNs and Transformers. Section III details the model architecture and key techniques. Section IV presents the experimental results on a variety of cyber-attack datasets and compares them with other methods. The concluding section summarizes the research results and discusses future research directions.
§ II. RELATED WORK

In recent years, various approaches have been proposed to enhance the performance of Intrusion Detection Systems (IDS). Alowaidi et al. ${}^{\left\lbrack 6\right\rbrack }$ proposed a hybrid IDS combining Machine Learning (ML) and Deep Learning (DL) techniques, which enhances IDS performance and prediction accuracy while lowering computational costs. However, the model's generalization relies on the diversity and representativeness of the training data; if the training data is biased, real-world performance suffers. Gupta et al. ${}^{\left\lbrack 7\right\rbrack }$ proposed an anomaly-based NIDS; this approach considers multiple performance metrics, along with training time and resource usage, but remains limited by dataset dependency and average generalization capability. Kumar et al. ${}^{\left\lbrack 8\right\rbrack }$ proposed a bi-directional long short-term memory (BiLSTM) based anomaly detection system for Internet of Things (IoT) networks. The BiLSTM model effectively improves accuracy through preprocessing and feature selection based on normalization and gain ratio.

Suárez-Varela et al. ${}^{\left\lbrack 9\right\rbrack }$ introduced the use of GNNs in the modeling, control, and management of communication networks, demonstrated their advantages in terms of generalization capability and data-driven solutions, and discussed their potential in network modeling, control, and management. Hnamte et al. ${}^{\left\lbrack {10}\right\rbrack }$ proposed an approach using Deep Convolutional Neural Networks (DCNN) and validated its performance with the InSDN dataset. While the DCNN achieves high accuracy, it demands significant data and computational resources for training.

Kisanga et al. ${}^{\left\lbrack {11}\right\rbrack }$ proposed a new Activity and Event Network (AEN) graph framework that focuses on capturing long-term stealthy threats that are difficult to detect with traditional security tools, and it is very promising for detecting long-term threats in cybersecurity. L et al. ${}^{\left\lbrack {12}\right\rbrack }$ proposed an end-to-end anomalous edge detection method based on unified graph embedding, which enhances the model's ability to learn task-relevant patterns by combining embedding learning and anomaly detection into the same objective function, and accurately estimates the probability distributions of edges through the local structure of the graph to identify anomalous edges. The method demonstrates superior accuracy and scalability on multiple publicly available datasets.

Sun et al. ${}^{\left\lbrack {13}\right\rbrack }$ proposed GTC, a framework combining a Graph Neural Network (GNN) and a Transformer for self-supervised heterogeneous graph representation learning. The Metapath-aware Hop2Token method is designed to efficiently convert neighbors with different hop counts in heterogeneous graphs into token sequences, reducing the computational complexity of Transformer processing. GTC enhances information fusion, improves learning efficiency, and reduces the demand for computational resources by contrasting learning tasks between graph-schema views and hop-count views.

Nguyen et al. ${}^{\left\lbrack {14}\right\rbrack }$ proposed UGformer, a Transformer-based GNN model for learning graph representations. With an unsupervised transductive learning approach, UGformer is able to address the problem of limited category labels; however, despite its sampling mechanism, UGformer may still require optimization to handle extremely large graph structures when constructing graphs from large-scale datasets.

Unlike previous studies, our method focuses on extracting edge features from network flows and develops an E-GraphSAGE model that incorporates Transformer modules. By combining local and global features, it makes full use of the structural and topological information inherent in network flow data to achieve better feature representations and network intrusion detection performance. The E-T-GraphSAGE method introduced in this paper addresses the shortcomings of traditional graph embedding techniques by capturing topological details and edge features in network flow data, leading to more precise detection, while retaining the ability to effectively classify samples with unseen node features. Three standard NIDS datasets are used to evaluate our model, verifying its broad applicability, accuracy, and robustness in different types of network scenarios; it is effective in comparison with traditional ML methods, especially in complex network environments. Through these improvements, the performance of our system in network intrusion detection is significantly improved, and it can effectively respond to various network attacks in complex network environments.
§ III. THE PROPOSED METHOD

§ A. GRAPHSAGE

Graph Neural Networks (GNNs) are becoming increasingly popular in the field of machine learning. Their power stems from the effective utilization of graph-structured data, which is widely available in application areas such as social media networks, biological research, and telecommunication systems ${}^{\left\lbrack {15}\right\rbrack }$ . The primary reason for using GNNs in NIDS is their capability to leverage the structural information present in network flows, which can be represented graphically. Although some conventional machine learning approaches also handle graph data, they usually involve intricate processes and depend heavily on manually crafted features, leading to more cumbersome and less efficient applications.

GraphSAGE ${}^{\left\lbrack {16}\right\rbrack }$ is an efficient graph neural network technique that generates embedded representations of nodes by sampling and aggregating the features of their neighbors. It is particularly suitable for processing large-scale graph data. The main steps include sampling neighboring nodes, aggregating features, and updating node features, which effectively relieves the computation and storage bottlenecks of traditional graph neural networks. As a result, GraphSAGE has been widely used in many fields.

GraphSAGE learns node representations through local aggregation; its core steps comprise three aspects: neighbor node sampling, feature aggregation, and node feature update, as shown in Fig. 2.
In neighbor node sampling, for each node, a fixed number of neighbor nodes are randomly sampled to reduce the computation and storage requirements. Suppose a node in the graph is $v$ , its set of neighbor nodes is $N\left( v\right)$ , and the set of neighbor nodes obtained from sampling is $\widetilde{N}\left( v\right)$ . This process can be represented as:

$$
\widetilde{N}\left( v\right) = \operatorname{Sample}\left( {N\left( v\right) ,K}\right) \tag{1}
$$

where $K$ denotes the number of neighbor nodes sampled. This phase seeks to manage computational complexity by limiting the number of adjacent nodes for each vertex in extensive graphs.

Fig. 2. GraphSAGE model diagram.

In feature aggregation, a feature aggregation operation is performed on the sampled set of neighbor nodes $\widetilde{N}\left( v\right)$ to generate neighbor feature representations. Common aggregation methods include mean aggregation, pooling, and LSTM. The formulas for several aggregation methods are as follows:
1) Mean aggregation: Mean aggregation computes the average of the neighboring node features. Its formula is:

$$
{h}_{\widetilde{N}\left( v\right) }^{\left( k\right) } = \operatorname{mean}\left( \left\{ {{h}_{u}^{\left( k - 1\right) },\forall u \in \widetilde{N}\left( v\right) }\right\} \right) \tag{2}
$$

where ${h}_{u}^{\left( k - 1\right) }$ denotes the feature representation of neighboring node $u$ at layer $k - 1$ , and ${h}_{\widetilde{N}\left( v\right) }^{\left( k\right) }$ denotes the representation of node $v$ after aggregating the features of its neighboring nodes at layer $k$ .

2) Max pooling: Max pooling takes the maximum value of the features of the neighboring nodes. Its formula is:

$$
{h}_{\widetilde{N}\left( v\right) }^{\left( k\right) } = \max \left( \left\{ {{h}_{u}^{\left( k - 1\right) },\forall u \in \widetilde{N}\left( v\right) }\right\} \right) \tag{3}
$$

3) LSTM aggregation: LSTM aggregation applies an LSTM network to the neighbor node features, with the formula:

$$
{h}_{\widetilde{N}\left( v\right) }^{\left( k\right) } = \operatorname{LSTM}\left( \left\{ {{h}_{u}^{\left( k - 1\right) },\forall u \in \widetilde{N}\left( v\right) }\right\} \right) \tag{4}
$$

For the node feature update, the algorithm combines the node's own features with the aggregated neighbor features and updates the node feature representation through a neural network. A common way of combining is a concatenation operation followed by a transformation through a fully connected layer. Its formula is:

$$
{h}_{v}^{\left( k\right) } = \sigma \left( {{W}^{\left( k\right) } \cdot \operatorname{concat}\left( {{h}_{v}^{\left( k - 1\right) },{h}_{\widetilde{N}\left( v\right) }^{\left( k\right) }}\right) }\right) \tag{5}
$$

where $\sigma$ denotes the activation function (e.g., ReLU), ${W}^{\left( k\right) }$ denotes the weight matrix of the $k$ -th layer, and ${h}_{v}^{\left( k\right) }$ denotes the feature representation of node $v$ at the $k$ -th layer.

In the overall process, the features are first initialized, and each node's initial feature can be its attribute vector ${x}_{v}$ . Then multilayer sampling and aggregation are performed: at the $k$ -th layer, each node $v$ randomly samples a fixed number $K$ of neighbors from its neighborhood to form the sampling set $\widetilde{N}\left( v\right)$ , and aggregates the features of the neighboring nodes using the selected aggregation function (e.g., mean, max pooling, or LSTM) to obtain ${h}_{\widetilde{N}\left( v\right) }^{\left( k\right) }$ . Next, the node $v$ 's own features are concatenated with the aggregated neighbor features and nonlinearly transformed through the fully connected layer to obtain a new node feature representation ${h}_{v}^{\left( k\right) }$ . Finally, after multi-layer (usually 2 to 3 layers) sampling and aggregation operations, the embedding representation ${h}_{v}$ of each node is generated. Through the above steps, the GraphSAGE algorithm is able to efficiently handle large-scale graph data and generate high-quality node embeddings through sampling and aggregation operations.
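The steps above can be sketched in a few lines of plain NumPy. This is a minimal illustration of Eqs. (1), (2), and (5) with the mean aggregator, not our implementation: the toy graph, the 4-dimensional features, and the randomly initialized weight matrix are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph (assumed for illustration): adjacency lists and 4-dim node features.
neighbors = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
h = {v: rng.normal(size=4) for v in neighbors}   # h_v^(k-1) for every node
W = rng.normal(size=(4, 8))                      # W^(k): maps the 8-dim concat to 4 dims
relu = lambda x: np.maximum(0.0, x)

def sage_layer(h, neighbors, W, K=2):
    """One GraphSAGE layer: sample K neighbors, mean-aggregate, concat, transform."""
    h_new = {}
    for v, nbrs in neighbors.items():
        sampled = rng.choice(nbrs, size=min(K, len(nbrs)), replace=False)  # Eq. (1)
        h_agg = np.mean([h[u] for u in sampled], axis=0)                   # Eq. (2)
        h_new[v] = relu(W @ np.concatenate([h[v], h_agg]))                 # Eq. (5)
    return h_new

h1 = sage_layer(h, neighbors, W)    # first layer
h2 = sage_layer(h1, neighbors, W)   # a second layer widens the receptive field to 2 hops
print(h2[0].shape)  # (4,)
```

Stacking the layer twice, as in the last two lines, mirrors the "usually 2 to 3 layers" recommendation above.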
§ B. E-TRANSFORMER-GRAPHSAGE METHODS

The traditional GraphSAGE method mainly focuses on the analysis and utilization of node features for node classification, but it is deficient in dealing with edge features. The primary objective of a NIDS is to detect and identify malicious traffic. In our study, we focus on the application of edge features and improve the GraphSAGE model by using an edge embedding method and introducing a Transformer layer.

1) E-GraphSAGE: In order to handle graph-structured data efficiently, we designed and implemented a GraphSAGE layer (SAGELayer). This layer updates the representation of each node by aggregating the features of the node's neighbors to capture the relationships between nodes in the graph. GraphSAGE accomplishes the updating of node representations through message passing and update steps, and it employs the ReLU activation function to improve the model's nonlinear representation ${}^{\left\lbrack {17}\right\rbrack }$ . The main differences from the original GraphSAGE algorithm are the algorithmic inputs, the message-passing aggregation functions, and the outputs. In the SAGE layer, edge embedding is incorporated into the message-passing process to provide richer information. Unlike the traditional GraphSAGE module, the aggregated embedding of sampled neighboring edges at the $k$ -th layer is generated from edge features, using a mean aggregation function as shown in the following equation:

$$
{h}_{\widetilde{N}\left( v\right) }^{\left( k\right) } = \operatorname{mean}\left( \left\{ {{e}_{uv}^{\left( k - 1\right) },\forall u \in \widetilde{N}\left( v\right) ,{uv} \in \varepsilon }\right\} \right) \tag{6}
$$

where ${e}_{uv}^{\left( k - 1\right) }$ is the feature of the edge ${uv}$ at layer $k - 1$ in the sampled neighborhood $\widetilde{N}\left( v\right)$ of node $v$ , and the set $\{ \forall u \in \widetilde{N}\left( v\right) ,{uv} \in \varepsilon \}$ represents the sampled edges within the neighborhood $\widetilde{N}\left( v\right)$ . The edge features of ${uv}$ at the $k$ -th layer are obtained by concatenation through the following equation, which represents the final result of the forward propagation phase:

$$
{h}_{uv}^{k} = \operatorname{CONCAT}\left( {{h}_{u}^{k},{h}_{v}^{k}}\right) ,{uv} \in \mathcal{E} \tag{7}
$$

In our study, we constructed a two-layer E-GraphSAGE model, with each layer consisting of an E-SAGELayer.

Neighboring node features are aggregated to generate the embedded representation of each node, using a mean aggregation method in which a node's aggregated features are the mean of the features of its neighboring nodes. The first E-SAGELayer in this model aggregates the input features to generate the first layer of node embeddings; the second layer takes the first layer of node embeddings as input and again performs aggregation to generate the final node embeddings. Through this multi-layer aggregation, we are able to capture more complex node characteristics and neighbor relationships. A Dropout operation is used to avoid overfitting. The advantage of stacking multiple GraphSAGE layers is the ability to capture more complex node relationships and form richer node representations, improving the performance of the model.
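A compact sketch of the edge-aware update follows. It is an assumption-laden illustration rather than the E-SAGELayer itself: the toy graph and weight shapes are invented, and Eq. (6) is approximated by aggregating over all incident edges instead of a sampled subset. Nodes start as all-ones vectors, matching the featureless-node convention described in Section C.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy flow graph (assumed): edges keyed by (u, v) carry 3-dim edge features;
# nodes are featureless, so each starts as an all-ones vector.
edges = {(0, 1): rng.normal(size=3), (2, 1): rng.normal(size=3),
         (1, 3): rng.normal(size=3), (3, 0): rng.normal(size=3)}
h = {v: np.ones(4) for v in (0, 1, 2, 3)}
W = rng.normal(size=(4, 4 + 3))                  # update weights (assumed shape)
relu = lambda x: np.maximum(0.0, x)

def e_sage_layer(h, edges, W):
    h_new = {}
    for v in h:
        # Eq. (6), simplified: mean over the features of all edges touching v.
        incident = [e for (a, b), e in edges.items() if a == v or b == v]
        h_agg = np.mean(incident, axis=0) if incident else np.zeros(3)
        h_new[v] = relu(W @ np.concatenate([h[v], h_agg]))
    return h_new

h1 = e_sage_layer(h, edges, W)
# Eq. (7): the embedding of edge uv is the concatenation of its endpoint embeddings.
edge_emb = {(u, v): np.concatenate([h1[u], h1[v]]) for (u, v) in edges}
print(edge_emb[(0, 1)].shape)  # (8,)
```

The final dictionary comprehension is the step that turns node embeddings into the edge embeddings that the classifier consumes.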
2) Transformer: As noted above, detecting malicious traffic in a NIDS corresponds to an edge classification problem in network flow classification. Our study therefore emphasizes the use of edge features and enhances the GraphSAGE model by incorporating the edge embedding method and introducing the Transformer layer technique.

The Transformer Encoder Layer (TEL) is the basic component of the Transformer model. It mainly consists of the multi-head attention mechanism (MultiheadAttention), a feed-forward neural network (Linear layers), and a normalization layer (LayerNorm), with Dropout applied between the layers to prevent overfitting. In the Transformer Encoder Layer, the inputs are node features (generated by the SAGE layer), and this layer does not explicitly process edge features. Its main function is to capture the dependencies between node features and global information through a multi-head attention mechanism along with a feed-forward neural network.

a) Multi-head attention: The self-attention mechanism allows the model to capture global dependencies by attending to all other elements in a sequence while processing each element of the sequence. The multi-head self-attention mechanism improves the model's sensitivity to different features by performing multiple self-attention computations in parallel. The specific formula is as follows:

$$
\left\{ \begin{matrix} \operatorname{Attention}\left( {Q,K,V}\right) = \operatorname{softmax}\left( \frac{Q{K}^{T}}{\sqrt{{d}_{k}}}\right) V \\ \operatorname{MultiHead}\left( {Q,K,V}\right) = \operatorname{Concat}\left( {{\operatorname{head}}_{1},\cdots ,{\operatorname{head}}_{i},\cdots ,{\operatorname{head}}_{h}}\right) {W}_{O} \end{matrix}\right. \tag{8}
$$

where $\operatorname{Attention}\left( {Q,K,V}\right)$ is the single-head self-attention computation, $Q$ denotes the query matrix, $K$ denotes the key matrix, $V$ denotes the value matrix, ${d}_{k}$ denotes the key dimension, and $\operatorname{MultiHead}\left( {Q,K,V}\right)$ denotes the multi-head self-attention, which splices the results of the $h$ heads together and obtains the final output by a linear transformation, where ${\text{ head }}_{i} = \operatorname{Attention}\left( {{Q}_{i},{K}_{i},{V}_{i}}\right)$ , ${W}_{O} \in {\mathbb{R}}^{h{d}_{k} \times {d}_{\text{ model }}}$ is the output weight matrix, and ${d}_{\text{ model }}$ is the input feature dimension.

Specifically, the MultiheadAttention mechanism captures the global dependencies of the input data by processing the input in parallel through multiple attention heads. Each attention head performs the self-attention computation independently and is able to focus on different features in the input data, enhancing the sensitivity of the model to multiple features. The multi-head attention mechanism's output is linked to the feed-forward neural network via a linear transformation.
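Eq. (8) can be written out directly in NumPy. The sketch below is illustrative only; the input size, head count, and randomly initialized projection matrices are assumptions, and the per-head projections are implemented by slicing one joint projection, a common simplification.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # softmax(Q K^T / sqrt(d_k)) V  -- Eq. (8), single head
    d_k = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k)) @ V

def multi_head(X, Wq, Wk, Wv, Wo, h=4):
    # Project once, split into h heads, attend independently, concat, project with W_O.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Wq.shape[1] // h
    heads = []
    for i in range(h):
        s = slice(i * d_k, (i + 1) * d_k)
        heads.append(attention(Q[:, s], K[:, s], V[:, s]))   # head_i
    return np.concatenate(heads, axis=-1) @ Wo               # Concat(...) W_O

n, d = 5, 16                       # 5 node embeddings of dimension 16 (assumed)
X = rng.normal(size=(n, d))
Wq, Wk, Wv, Wo = (rng.normal(size=(d, d)) for _ in range(4))
out = multi_head(X, Wq, Wk, Wv, Wo)
print(out.shape)  # (5, 16)
```

Because every row of the softmax output sums to one, each output position is a convex combination of all value rows, which is exactly how the layer mixes global information across node embeddings.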
b) Feed-forward neural network: The feed-forward neural network (FFN) is a fully connected network applied independently at each position in each Transformer encoder layer. The specific formula is as follows:

$$
\operatorname{FFN}\left( x\right) = \max \left( {0,x{W}_{1} + {b}_{1}}\right) {W}_{2} + {b}_{2} \tag{9}
$$

where ${W}_{1} \in {\mathbb{R}}^{{d}_{\text{ model }} \times {d}_{ff}}$ , ${W}_{2} \in {\mathbb{R}}^{{d}_{ff} \times {d}_{\text{ model }}}$ , ${b}_{1} \in {\mathbb{R}}^{{d}_{ff}}$ , and ${b}_{2} \in {\mathbb{R}}^{{d}_{\text{ model }}}$ are learnable parameters, and ${d}_{ff}$ is the hidden layer dimension of the FFN.

The feed-forward neural network used in this paper includes two fully connected layers with a ReLU activation function and Dropout applied in between. The first fully connected layer maps the input from the embedding dimension (embed_dim) to a higher hidden dimension (ff_hidden_dim), the ReLU activation function introduces a nonlinear transformation, and the Dropout operation is used to prevent overfitting. The second fully connected layer maps the hidden dimension back to the embedding dimension, keeping the dimensionality of the inputs and outputs the same.
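Eq. (9) amounts to two matrix multiplications with a ReLU in between, as the following sketch shows (dimensions and random weights are assumed for the example; Dropout is omitted since it is only active during training):

```python
import numpy as np

rng = np.random.default_rng(3)

d_model, d_ff = 16, 64            # embed_dim and ff_hidden_dim (assumed values)
W1, b1 = rng.normal(size=(d_model, d_ff)), np.zeros(d_ff)
W2, b2 = rng.normal(size=(d_ff, d_model)), np.zeros(d_model)

def ffn(x):
    # FFN(x) = max(0, x W1 + b1) W2 + b2  -- Eq. (9), applied position-wise
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

x = rng.normal(size=(5, d_model))
print(ffn(x).shape)  # (5, 16) -- input and output dimensions match
```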
c) Normalization layer: The normalization layer is applied after each sublayer, including both the self-attention and the feed-forward neural network, to regularize and stabilize the training process. The specific formula is as follows:

$$
\text{ LayerNorm }\left( x\right) = \frac{x - \mu }{\sigma + \varepsilon } \cdot \gamma + \beta \tag{10}
$$

where $\mu$ and $\sigma$ are the mean and standard deviation of the inputs respectively, $\gamma$ and $\beta$ are the learnable scaling and offset parameters, and $\varepsilon$ is a small constant.

Each encoder layer applies Layer Normalization and a Residual Connection around the multi-head self-attention mechanism and the feed-forward neural network. Layer Normalization helps to stabilize and speed up the training process, while the Residual Connection helps to solve the problem of vanishing gradients in deep networks.
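The residual-plus-normalization pattern can be sketched as follows. This is a toy illustration, not the encoder layer itself: the feature dimension is assumed, and a simple scaling function stands in for the attention or FFN sublayer.

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    # (x - mu) / (sigma + eps) * gamma + beta  -- Eq. (10), per feature row
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sigma + eps) * gamma + beta

d = 16
gamma, beta = np.ones(d), np.zeros(d)

def sublayer(x, f):
    # residual connection around a sublayer f, followed by normalization
    return layer_norm(x + f(x), gamma, beta)

x = np.random.default_rng(4).normal(size=(5, d))
y = sublayer(x, lambda t: t * 0.5)   # stand-in for attention or the FFN
print(np.allclose(y.mean(axis=-1), 0.0, atol=1e-6))  # True: rows have ~zero mean
```

With $\gamma = 1$ and $\beta = 0$, every output row is centered and rescaled, which is what keeps the activations in a stable range as layers are stacked.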
d) Dropout: Dropout randomly discards a certain percentage of neurons during training to prevent overfitting. By stacking multiple such encoder layers, the Transformer model is able to effectively capture the global dependencies of the input data and enhance the model's sensitivity to different features. The multi-head self-attention mechanism in each layer enables the model to focus on different features in the input data, and the feed-forward neural network further processes these features. Through the layer-by-layer processing of the multilayer structure, the model is able to capture more complex and deeper feature relationships in the input data, which improves its performance on various tasks.

§ C. NIDS

Fig. 3 shows how the network flow data is constructed as graph data and the propagation process from the source node to the destination node. Fig. 4 shows an overview of our E-Transformer-GraphSAGE NIDS. Initially, a graph is created from the network flow data. Next, the generated network graph is fed into the E-Transformer-GraphSAGE model for supervised training. Edge embeddings are designed to classify network flows into benign or malicious categories. The following subsections explain these three steps in detail.
Netflow Data

| IPV4_SRC_ADDR | L4_SRC_PORT | IPV4_DST_ADDR | L4_DST_PORT | PROTOCOL | L7_PROTO | IN_BYTES | OUT_BYTES | IN_PKTS | OUT_PKTS | TCP_FLAGS | FLOW_DURATION_MILLISECONDS | Label | Attack |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 192.168.1.70 | 46800 | 239.255.255.250 | 15600 | 17 | 0 | 63 | 0 | 1 | 0 | 0 | 0 | 0 | Benign |
| 192.168.1.79 | 41361 | 192.168.1.1 | 15600 | 17 | 0 | 63 | 0 | 1 | 0 | 0 | 0 | 0 | Benign |
| 192.168.1.1 | 60641 | 192.168.1.31 | 53 | 17 | 5 | 100 | 100 | 2 | 2 | 0 | 2 | 1 | Injection |
| 192.168.1.1 | 43803 | 192.168.1.152 | 53 | 17 | 5 | 100 | 100 | 2 | 2 | 0 | 7 | 1 | Scanning |
| 192.168.1.31 | 63898 | 192.168.1.36 | 5355 | 17 | 154 | 122 | 0 | 2 | 0 | 0 | 0 | 0 | Benign |
| 192.168.1.36 | 53153 | 192.168.1.7 | 5355 | 17 | 154 | 122 | 0 | 2 | 0 | 0 | 0 | 0 | Benign |
| 192.168.1.36 | 44248 | 192.168.1.152 | 80 | 6 | 7 | 526 | 2816 | 6 | 6 | 27 | 1021 | 1 | XSS |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |

Fig. 3. Network flow data conversion to graph data.

Fig. 4. E-Transformer-GraphSAGE-based Network Intrusion Detection System.
1) Graph data structure: NetFlow is a commonly used format for logging network communications in production environments and is the predominant format in Network Intrusion Detection System (NIDS) environments. A flow record typically includes fields that identify the communication's source and destination, along with additional information such as packet and byte counts and flow duration. Graph structures naturally model this type of data. In this study, we use the source IP address, source port, destination IP address, and destination port. The first two fields form a tuple identifying the source node, and the last two form the destination node. The remaining fields are used as the features of that edge, leaving the graph nodes featureless. We therefore assign a vector of all 1's to every node in the algorithm.
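The mapping from flow records to a graph can be sketched with plain Python containers (a real pipeline would build a DGL graph; the two example records and the 4-dimensional all-ones node vector are assumptions for illustration):

```python
# Each flow record becomes one edge: (src IP, src port) -> (dst IP, dst port),
# with the remaining fields kept as the edge feature vector.
# Field values below are illustrative, not taken from any dataset.
flows = [
    ("192.168.1.1", 60641, "192.168.1.31", 53, [17, 5, 100, 100, 2, 2, 0, 2]),
    ("192.168.1.36", 44248, "192.168.1.152", 80, [6, 7, 526, 2816, 6, 6, 27, 1021]),
]

nodes, edges = {}, []
for src_ip, src_port, dst_ip, dst_port, feats in flows:
    u, v = (src_ip, src_port), (dst_ip, dst_port)
    for n in (u, v):
        # Nodes are featureless, so every node gets an all-ones vector.
        nodes.setdefault(n, [1.0] * 4)
    edges.append((u, v, feats))

print(len(nodes), len(edges))  # 4 2
```

All of the discriminative information lives on the edges, which is why the model classifies edge embeddings rather than node embeddings.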
|
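The flow-to-graph conversion described above can be sketched without any graph library; field names such as `src_ip` are placeholders for the NetFlow columns (the paper's pipeline builds the same structure in DGL):

```python
def flows_to_graph(flows):
    """Map NetFlow records to a graph: nodes are (IP, port) pairs,
    edges carry the remaining flow fields as features.
    Node features are a constant all-ones vector, as in the text."""
    node_ids = {}                       # (ip, port) -> integer node id
    src, dst, edge_feats = [], [], []

    def nid(key):
        if key not in node_ids:
            node_ids[key] = len(node_ids)
        return node_ids[key]

    for f in flows:
        src.append(nid((f["src_ip"], f["src_port"])))
        dst.append(nid((f["dst_ip"], f["dst_port"])))
        edge_feats.append(f["features"])  # e.g. packets, bytes, duration, ...

    node_feats = [[1.0] for _ in range(len(node_ids))]  # featureless nodes -> all 1's
    return src, dst, node_feats, edge_feats

# Tiny illustration with two flows sharing one destination endpoint:
flows = [
    {"src_ip": "192.168.1.70", "src_port": 46800,
     "dst_ip": "239.255.255.250", "dst_port": 15600, "features": [0, 63, 1]},
    {"src_ip": "192.168.1.1", "src_port": 60641,
     "dst_ip": "239.255.255.250", "dst_port": 15600, "features": [5, 100, 2]},
]
src, dst, nf, ef = flows_to_graph(flows)
```

The shared `(dst_ip, dst_port)` pair collapses into a single node, which is what lets the GNN aggregate information across flows that touch the same endpoint.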
| 228 |
+
|
| 229 |
+
2) E-Transformer-GraphSAGE: Our proposed model combines the sensitivity of GNNs to local structure with the Transformer's ability to capture global dependencies: the graph data are first processed by E-GraphSAGE to obtain node representations, and a Transformer then captures global dependencies. During training we use a weighted cross-entropy loss (CrossEntropyLoss) to address class imbalance and the Adam optimizer for parameter updates. The algorithm's output is compared with the labels from the NIDS dataset, and the model's trainable parameters are adjusted in the backpropagation phase. After tuning, the model is evaluated by classifying unseen test samples: the test flow records are converted into graph data structures, edge embeddings are generated by the trained E-Transformer-GraphSAGE layer, and these embeddings are transformed into class probabilities via a Softmax layer. The predicted class probabilities are compared with the actual class labels to compute the classification performance metrics.
|
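The class-imbalance handling can be sketched in plain Python. Inverse-frequency weighting is one common choice (an assumption here; the paper does not state its weighting rule), and the loss mirrors the behaviour of PyTorch's `CrossEntropyLoss(weight=...)` up to implementation details:

```python
import math

def class_weights(labels, num_classes):
    """Inverse-frequency weights: rare attack classes get larger weights."""
    counts = [0] * num_classes
    for y in labels:
        counts[y] += 1
    total = len(labels)
    return [total / (num_classes * c) if c else 0.0 for c in counts]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def weighted_cross_entropy(logits_batch, labels, weights):
    """Weighted mean of -log p(y): errors on minority classes cost more."""
    losses, wsum = 0.0, 0.0
    for logits, y in zip(logits_batch, labels):
        p = softmax(logits)
        losses += weights[y] * -math.log(p[y])
        wsum += weights[y]
    return losses / wsum  # normalise by the summed weights of the targets

# 9 benign samples (class 0) vs. 1 attack (class 1): attack errors cost more
w = class_weights([0] * 9 + [1], num_classes=2)
```

With uniform logits the loss reduces to log 2 regardless of the weights, but a misclassified minority sample is penalised roughly nine times as hard as a majority one here.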
| 230 |
+
|
| 231 |
+
§ IV. EXPERIMENT
|
| 232 |
+
|
| 233 |
+
In this section, we perform binary and multi-class classification comparisons to validate the effectiveness of our algorithm.
|
| 234 |
+
|
| 235 |
+
§ A. EXPERIMENT SETTING
|
| 236 |
+
|
| 237 |
+
We implemented the model in Python with PyTorch and DGL. Experiments ran on a server with an Intel(R) Xeon(R) Gold 6242 CPU @ 2.80 GHz (32 cores), a single NVIDIA A100 GPU, and 192 GB of RAM.
|
| 238 |
+
|
| 239 |
+
§ B. DATASETS
|
| 240 |
+
|
| 241 |
+
To evaluate our proposed GNN-based NIDS, we use three publicly available datasets that include various labeled attack flows and benign network flows. The first dataset is BoT-IoT, which is widely used for evaluating ML-based network intrusion detection systems in the Internet of Things and has a proprietary format and feature set. The second and third datasets, NF-ToN-IoT and NF-BoT-IoT, are presented in NetFlow format.
|
| 242 |
+
|
| 243 |
+
1) BoT-IoT datasets: The BoT-IoT dataset ${}^{\left\lbrack {18}\right\rbrack }$ was generated by the Cyber Range Lab at the Australian Center for Cyber Security (ACCS) to evaluate the performance of cyber security tools. It simulates real network environments containing normal traffic and multiple types of malicious traffic, such as DDoS, DoS, reconnaissance, and data theft, for Intrusion Detection System (IDS) training and testing.
|
| 244 |
+
|
| 245 |
+
2) NF-BoT-IoT datasets: The NF-BoT-IoT dataset ${}^{\left\lbrack {19}\right\rbrack }$ is a NetFlow characterization dataset extracted from the BoT-IoT dataset to provide a more concise representation of network traffic by summarizing IP traffic flows. The dataset includes information such as source and destination IP addresses, ports, packet counts, byte counts, and timestamps, which helps in large-scale data analysis and real-time intrusion detection.
|
| 246 |
+
|
| 247 |
+
3) NF-ToN-IoT datasets: The NF-ToN-IoT dataset is a NetFlow characterization dataset generated based on the ToN-IoT dataset and contains telemetry and operational network data from Internet of Things (IoT) devices. The dataset provides detailed traffic records that help detect network intrusions and understand traffic patterns in IoT environments and is suitable for IoT security research.
|
| 248 |
+
|
| 249 |
+
§ C. RESULTS OF THE EXPERIMENT
|
| 250 |
+
|
| 251 |
+
To assess the effectiveness of the proposed neural network model, we employed the standard metrics outlined in Table I. Here, TP stands for true positives, TN for true negatives, FP for false positives, and FN for false negatives.
|
| 252 |
+
|
| 253 |
+
TABLE I. EVALUATION INDICATORS
|
| 254 |
+
|
| 255 |
+
| Metric | Formula |
| --- | --- |
| Accuracy | $\frac{\mathrm{TP}+\mathrm{TN}}{\mathrm{TP}+\mathrm{FP}+\mathrm{TN}+\mathrm{FN}} \times 100\%$ |
| Precision | $\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}} \times 100\%$ |
| FAR | $\frac{\mathrm{FP}}{\mathrm{FP}+\mathrm{TN}} \times 100\%$ |
| Recall | $\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}} \times 100\%$ |
| F1-Score | $2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \times 100\%$ |
|
| 272 |
+
|
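The indicators in Table I follow directly from the confusion counts; a minimal sketch (the example counts below are illustrative, not from the paper):

```python
def nids_metrics(tp, tn, fp, fn):
    """Evaluation indicators of Table I, computed from confusion counts."""
    accuracy  = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)               # a.k.a. detection rate
    far       = fp / (fp + tn)               # false-alarm rate
    f1        = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "far": far, "f1": f1}

# Illustrative counts: 95 attacks caught, 5 missed, no false alarms
m = nids_metrics(tp=95, tn=900, fp=0, fn=5)
```

Note how accuracy (99.5%) looks better than the F1-Score (about 0.974) when attacks are rare, which is why the discussion below prioritizes F1.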
| 273 |
+
1) Binary classification results: The datasets employed in our experiments carry two layers of labels for each data instance. The first layer indicates whether the network flow is benign or non-benign, while the second layer specifies the attack type. For the binary classification task we use the first layer of labels, and for the multi-class classification task we use the second layer ${}^{\left\lbrack {20},{21}\right\rbrack }$ . Experiments were run across three datasets: BoT-IoT, NF-BoT-IoT, and NF-ToN-IoT. The findings demonstrate that our method performs exceptionally well in binary classification, a key factor for successful network intrusion detection.
|
| 274 |
+
|
| 275 |
+
TABLE II. BINARY CLASSIFICATION RESULTS
|
| 276 |
+
|
| 277 |
+
| Dataset | Accuracy | Precision | F1-Score | Recall | FAR |
| --- | --- | --- | --- | --- | --- |
| BoT-IoT | 99.99% | 1.00 | 1.00 | 99.99% | 0.00% |
| NF-BoT-IoT | 94.52% | 1.00 | 0.99 | 97.32% | 0.24% |
| NF-ToN-IoT | 99.93% | 1.00 | 1.00 | 99.84% | 0.03% |
|
| 291 |
+
|
| 292 |
+
Table II summarizes our model's performance metrics (accuracy, precision, F1-Score, recall, and False Alarm Rate (FAR)) on the three datasets.
|
| 293 |
+
|
| 294 |
+
In cybersecurity, datasets frequently exhibit an imbalance, with fewer attack samples compared to normal traffic. The F1-Score is particularly important in such scenarios as it balances precision and recall, providing a more accurate assessment of the model's ability to differentiate between benign and malicious traffic than accuracy alone.
|
| 295 |
+
|
| 296 |
+
Given the importance of precise intrusion detection, particularly in practical applications where the cost of missed detections is high, we prioritize the F1-Score as a more reliable indicator of our model's performance. In the following sections, we will compare our F1-Score with those from other studies to demonstrate how effectively our model handles the challenges of imbalanced datasets, ensuring dependable intrusion detection.
|
| 297 |
+
|
| 298 |
+
TABLE III. COMPARISON OF BINARY-CLASSIFICATION ALGORITHMS F1
|
| 299 |
+
|
| 300 |
+
| Method | Dataset | F1 |
| --- | --- | --- |
| Ours | BoT-IoT | 1.00 |
| CatBoost | BoT-IoT | 0.99 |
| Ours | NF-BoT-IoT | 0.99 |
| Extra Tree Classifier | NF-BoT-IoT | 0.97 |
| TS-IDS | NF-BoT-IoT | 0.95 |
| Ours | NF-ToN-IoT | 1.00 |
| Extra Tree Classifier | NF-ToN-IoT | 1.00 |
|
| 314 |
+
|
| 315 |
+
Table III shows the F1 of our method compared with other algorithms ${}^{\left\lbrack {21},{22}\right\rbrack }$ . The results show that our method achieves F1-Scores that are either similar to or better than those of existing approaches. This indicates that our method performs effectively in both traffic classification and binary network intrusion detection.
|
| 316 |
+
|
| 317 |
+
The comparable or superior F1-Scores demonstrate that our model is not only accurate in identifying malicious network traffic but also maintains a balanced performance across different datasets. This balance is crucial in practical applications, where high precision and recall are necessary to minimize false positives and ensure reliable intrusion detection.
|
| 318 |
+
|
| 319 |
+
In summary, the data in Table III confirms that our method is competitive with, and in some cases superior to, other leading algorithms, highlighting its effectiveness in traffic classification and network intrusion detection tasks.
|
| 320 |
+
|
| 321 |
+
2) Multiclass classification results: Table IV presents the multi-classification results of our method across three standard datasets, where the classifier is tasked with distinguishing between various attack types. The multi-classification problem is more complex than binary classification, as it requires the model to accurately identify not just whether an attack is present, but also to specify the type of attack. The results in Table IV indicate that our model demonstrates strong performance, particularly on the BoT-IoT dataset. This superior performance is indicative of the model's capability to effectively differentiate between the distinct attack types within this dataset.
|
| 322 |
+
|
| 323 |
+
Table V provides further insight into the model's performance by showing the recall and F1-Score values for different attacks in the multi-classification task, specifically focusing on the ToN-IoT dataset. These metrics are crucial for understanding the model's ability to correctly identify each attack type. High recall values suggest that the model is effective in identifying the majority of true positive instances for most attack types, minimizing the risk of undetected threats. Similarly, strong F1-Score values indicate a good balance between precision and recall, reinforcing the model's robustness in handling diverse attack scenarios.
|
| 324 |
+
|
| 325 |
+
TABLE IV. COMPARISON OF BOT-IOT AND NF-BOT-IOT MULTI-CLASSIFICATION ALGORITHMS F1
|
| 326 |
+
|
| 327 |
+
| Class Name | BoT-IoT Recall | BoT-IoT F1-Score | NF-BoT-IoT Recall |
| --- | --- | --- | --- |
| Benign | 100.00% | 0.99 | 100.00% |
| DDoS | 99.99% | 1.00 | 99.99% |
| DoS | 99.99% | 1.00 | 99.99% |
| Reconnaissance | 99.99% | 1.00 | 99.99% |
| Theft | 94.52% | 0.98 | 94.52% |
| Weighted Average | 99.99% | 1.00 | 99.99% |
|
| 353 |
+
|
| 354 |
+
TABLE V. COMPARISON OF NF-TON-IOT MULTI-CLASSIFICATION ALGORITHMS
|
| 355 |
+
|
| 356 |
+
| Class Name | Recall | F1-Score |
| --- | --- | --- |
| Benign | 98.33% | 0.99 |
| Backdoor | 98.46% | 0.99 |
| DDoS | 57.47% | 0.73 |
| DoS | 99.72% | 0.46 |
| Injection | 30.59% | 0.46 |
| MITM | 55.02% | 0.25 |
| Ransomware | 80.28% | 0.42 |
| Password | 100.00% | 0.99 |
| Scanning | 25.92% | 0.15 |
| XSS | 40.70% | 0.28 |
| Weighted Average | 68.65% | 0.67 |
|
| 397 |
+
|
| 398 |
+
However, the experimental plots of confusion matrices shown in Figures 5 and 6 for the NF-BoT-IoT and NF-ToN-IoT datasets reveal some nuances in the model's performance. While the recognition rate is extremely high for several attack types, the model struggles with accurately classifying DDoS attacks. This issue likely stems from the fact that during model training, DDoS and DoS attacks shared similar features, leading to a significant overlap in their learned representations. As a result, the model occasionally misclassifies DDoS attacks as DoS attacks, which suggests that the feature extraction process may need refinement to better distinguish between these two attack types.
|
| 399 |
+
|
| 400 |
+
The observed difficulty in separating DDoS from DoS attacks highlights a potential area for improvement. One possible solution could involve enhancing the feature engineering process to capture more distinctive characteristics of these attack types. Additionally, adjusting the training process to emphasize the differences between DDoS and DoS attacks, perhaps through the use of more advanced techniques like adversarial training or ensemble learning, could further improve classification accuracy.
|
| 401 |
+
|
| 402 |
+
In summary, while our model excels in the multi-classification of several attack types, especially within the BoT-IoT dataset, there remains room for improvement in the classification of closely related attacks such as DDoS and DoS. Addressing these challenges will be crucial for further enhancing the model's overall reliability and effectiveness in real-world network security applications.
|
| 403 |
+
|
| 404 |
+
<graphics>
|
| 405 |
+
|
| 406 |
+
Fig. 5. NF-BoT-IoT multiclassification results
|
| 407 |
+
|
| 408 |
+
<graphics>
|
| 409 |
+
|
| 410 |
+
Fig. 6. NF-ToN-IoT multiclassification results
|
| 411 |
+
|
| 412 |
+
As with binary classification, we compared the performance of our model's Network Intrusion Detection System (NIDS) with other classifiers, as shown in studies ${}^{\left\lbrack {23},{24}\right\rbrack }$ . Table VI presents the results of this comparison, focusing on the multi-classification task.
|
| 413 |
+
|
| 414 |
+
The findings reveal that our algorithm consistently achieves higher average F1-Score values compared to all existing methods. This is particularly important in multi-classification, where the ability to accurately distinguish between multiple types of network attacks is crucial. The superior F1-Score suggests that our model not only identifies attacks effectively but also excels in correctly classifying the different types of attacks, a challenge where other classifiers often fall short.
|
| 415 |
+
|
| 416 |
+
These results underscore the effectiveness of our approach in handling the complexities of multi-class network intrusion detection, proving that our model outperforms current leading methods in this critical area.
|
| 417 |
+
|
| 418 |
+
TABLE VI. COMPARISON OF MULTI-CLASSIFICATION ALGORITHMS F1
|
| 419 |
+
|
| 420 |
+
| Method | Dataset | W-F1 |
| --- | --- | --- |
| Ours | BoT-IoT | 1.00 |
| CatBoost | BoT-IoT | 0.99 |
| Ours | NF-BoT-IoT | 0.88 |
| Extra Tree Classifier | NF-BoT-IoT | 0.77 |
| TS-IDS | NF-BoT-IoT | 0.83 |
| Ours | NF-ToN-IoT | 0.67 |
| Extra Tree Classifier | NF-ToN-IoT | 0.60 |
|
| 434 |
+
|
| 435 |
+
Overall, our method demonstrates superior performance compared to other Network Intrusion Detection System (NIDS) approaches across both binary and multi-classification tasks, as evidenced by the results from the three datasets utilized in our study. Our model not only achieves higher accuracy and F1-Scores but also shows remarkable robustness and generalizability. This indicates that it is well-equipped to handle various types of network traffic and detect both known and emerging threats effectively.
|
| 436 |
+
|
| 437 |
+
The model's ability to consistently outperform other methods highlights its advanced capabilities in accurately identifying and classifying different types of network attacks, whether it's simply distinguishing between benign and malicious traffic or correctly categorizing specific attack types. This robust performance across diverse datasets suggests that our method is adaptable to different network environments and can maintain its effectiveness even when faced with the complexities and variabilities of real-world data.
|
| 438 |
+
|
| 439 |
+
§ V. CONCLUSION AND FUTURE WORK
|
| 440 |
+
|
| 441 |
+
In this paper, we have introduced a novel GNN-based network intrusion detection method called E-T-GraphSAGE, which has enhanced attack flow detection by capturing edge features and topology patterns within network flow graphs. Our focus has been on applying E-T-GraphSAGE to detect malicious network flows in the context of network intrusion detection. Experimental evaluations have shown that our model performs very well on the three NIDS benchmark datasets and generally outperforms currently available network intrusion detection methods. In the future, we plan to build unsupervised graph neural network intrusion detection models, as well as lighten the E-T-GraphSAGE model and apply it to edge network servers, especially small and medium-sized network devices, for better timely network intrusion detection at the edge.
|
| 442 |
+
|
| 443 |
+
§ ACKNOWLEDGMENT
|
| 444 |
+
|
| 445 |
+
This work is supported by the National Natural Science Foundation of China under Grant 62101299.
|
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/3KOwuI0B5z/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,577 @@
|
| 1 |
+
# Distributed Unknown Input Observer-Based Global Fault-Tolerant Average Consensus Control for Linear Multi-Agent Systems
|
| 2 |
+
|
| 3 |
+
Ximing Yang
|
| 4 |
+
|
| 5 |
+
School of Automation Engineering
|
| 6 |
+
|
| 7 |
+
University of Electronic Science and Technology of China
|
| 8 |
+
|
| 9 |
+
Chengdu 611731, China
|
| 10 |
+
|
| 11 |
+
yxm961115123@163.com
|
| 12 |
+
|
| 13 |
+
Tieshan Li
|
| 14 |
+
|
| 15 |
+
School of Automation Engineering
|
| 16 |
+
|
| 17 |
+
University of Electronic Science and Technology of China
|
| 18 |
+
|
| 19 |
+
Chengdu 611731, China
|
| 20 |
+
|
| 21 |
+
tieshanli@126.com
|
| 22 |
+
|
| 23 |
+
Yue Long
|
| 24 |
+
|
| 25 |
+
School of Automation Engineering
|
| 26 |
+
|
| 27 |
+
University of Electronic Science and Technology of China
|
| 28 |
+
|
| 29 |
+
Chengdu 611731, China
|
| 30 |
+
|
| 31 |
+
longyue@uestc.edu.cn

Hanqing Yang

School of Automation Engineering

University of Electronic Science and Technology of China

Chengdu 611731, China
|
| 32 |
+
|
| 33 |
+
hqyang5517@uestc.edu.cn
|
| 34 |
+
|
| 35 |
+
**Abstract**: This paper investigates the distributed unknown input observer-based global fault-tolerant average consensus control problem for multi-agent systems (MASs). First, a distributed unknown input observer based on relative estimation error is proposed, which effectively reduces the impact of external disturbances and achieves accurate estimation of the agent states and the faults they suffer. Then, based on the obtained estimates and using the relative estimation error, a global fault-tolerant average consensus controller is proposed. The proposed controller compensates for the effects of faults and enables the MASs to achieve global average consensus. Finally, simulations are given to verify the effectiveness of the proposed scheme.
|
| 36 |
+
|
| 37 |
+
**Index Terms**: Multi-agent systems, fault-tolerant control, distributed unknown input observer, global average consensus.
|
| 38 |
+
|
| 39 |
+
## I. INTRODUCTION
|
| 40 |
+
|
| 41 |
+
In the past decades, the study of multi-agent systems (MASs) has been highly emphasized. Due to their extensive civilian and military applications, MASs are subject to stringent performance requirements, such as adaptability, flexibility, and robustness [1]. To meet these requirements, considerable attention has been given to coordination issues in MASs, such as consensus [2], containment control [3], and formation control [4]. These coordination mechanisms have been utilized in a wide range of applications such as intelligent transportation systems [5], drone formation [6], and smart grids [7]. However, the scalability and complexity of MASs render traditional centralized control schemes insufficient to meet these requirements. Therefore, the exploration of distributed control schemes for MASs is of significant importance.
|
| 42 |
+
|
| 43 |
+
Compared with centralized control schemes, distributed control schemes are more suitable for the coordinated control of autonomous agents in MASs [8]. Currently, the existing control schemes can be categorized into two types based on the structure of MASs: leaderless and leader-follower. The goal of control in leaderless MASs is to reach the consensus among the agents [9]. In contrast, the control objective of leader-follower MASs is for the follower agents to track the state of the leader [10]. A formation control scheme based on dynamic output feedback was proposed for cases where velocity cannot be measured, ensuring that the agents converge to the desired formation pattern within a finite time [11]. In [12], an adaptive control strategy with a fully distributed neural network was proposed to ensure that all followers track the leader's state and that the synchronization error remains within a specified range. A formation control method based on constructing a direction alignment law and formation control law using the displacement between agents was proposed to address the direction misalignment issue caused by local reference frames [13]. Overall, distributed control has emerged as a popular research direction, attracting considerable research efforts and yielding abundant results. However, many research outcomes focus solely on the control methods design and consider relatively idealized cases, assuming precise knowledge of system states and the absence of system faults, which diminishes their engineering feasibility.
|
| 44 |
+
|
| 45 |
+
In practical applications, MASs consist of numerous agents distributed across a spatial area, with each agent facing distinct environmental challenges. Agents may encounter uncertainties, such as actuator faults, which can incapacitate the entire control system [14]. To enhance the reliability and safety of the system, it is necessary to implement measures to compensate for the adverse influences of faults. In this context, fault-tolerant consensus control has attracted widespread attention as an effective method to compensate for the impact of faults [15]. A virtual actuator framework-based adaptive fault-tolerant control method was proposed to achieve leader-follower consensus control under time-varying actuator faults [16]. Based on an observer framework, a reliable consensus control design method under stochastic actuator failures was proposed to achieve multi-agent consensus [17]. A distributed fault-tolerant consensus protocol based on a distributed intermediate observer was proposed to achieve finite-time fault-tolerant consensus control with enhanced dissipation rate [18]. However, although [18] addresses the consensus problem of MASs under faults, it does not consider the impact of external disturbances present in practical environments on estimation performance. Fortunately, the unknown input observer, as an effective method based on disturbance decoupling technology to handle external disturbances in estimation error systems, has been widely applied [19], [20]. Building on [19], a decentralized unknown input observer-based distributed secure control scheme was proposed to address the problem of distributed secure control in MASs [21].
|
| 46 |
+
|
| 47 |
+
---
|
| 48 |
+
|
| 49 |
+
This work was supported in part by the National Natural Science Foundation of China under Grant 51939001, Grant 62273072, and Grant 62203088, in part by the Natural Science Foundation of Sichuan Province under Grant 2022NSFSC0903. (Corresponding author: Tieshan Li.)
|
| 50 |
+
|
| 51 |
+
---
|
| 52 |
+
|
| 53 |
+
Based on these observations, a distributed unknown input observer and a fault-tolerant average consensus controller based on relative estimation error are proposed in this paper. Major contributions of this work are summarized below:
|
| 54 |
+
|
| 55 |
+
(1) Compared with reference [18], a control scheme utilizing disturbance decoupling technology to handle external disturbances is proposed. This scheme effectively reduces the adverse influence of disturbances on estimation performance and achieves global average consensus for MASs.
|
| 56 |
+
|
| 57 |
+
(2) In contrast to [21], a novel distributed unknown input observer utilizing relative estimation error is proposed to obtain estimates of each agent's state and the fault it experiences. Specifically, it uses the relative estimation error to drive the fault estimation, incorporating output estimates rather than only the raw outputs into the distributed algorithm.
|
| 58 |
+
|
| 59 |
+
The remainder of this paper is organized as follows: Section II presents the problem formulation and gives some useful assumptions. Section III presents the main results, including the distributed unknown input observer-based global fault-tolerant average consensus control scheme and its stability analysis. Simulations are given in Section IV. Finally, Section V concludes this work.
|
| 60 |
+
|
| 61 |
+
## II. Preparations
|
| 62 |
+
|
| 63 |
+
## A. Graph Theory
|
| 64 |
+
|
| 65 |
+
An undirected graph $\mathfrak{g}$ is defined as a pair $\left( {v,\epsilon ,\mathfrak{A}}\right)$ , where $v = \left\{ {{v}_{1},\ldots ,{v}_{N}}\right\}$ represents a nonempty finite set of nodes, and $\epsilon \subseteq v \times v$ represents a set of edges. An edge $\left( {{v}_{i},{v}_{j}}\right)$ denotes a pair of nodes ${v}_{i}$ and ${v}_{j}$ . The adjacency matrix, denoted as $\mathfrak{A} = \left\lbrack {a}_{ij}\right\rbrack \in {\mathbb{R}}^{N \times N}$ , has elements ${a}_{ij}$ representing the weight coefficient of the edge $\left( {{v}_{i},{v}_{j}}\right)$ , with ${a}_{ii} = 0$ and ${a}_{ij} = 1$ if $\left( {{v}_{i},{v}_{j}}\right) \in \epsilon$ . The Laplacian matrix, denoted as $\mathfrak{L} = \mathfrak{D} - \mathfrak{A}$ , is constructed where $\mathfrak{D} = \left\lbrack {d}_{ii}\right\rbrack$ is a diagonal matrix with ${d}_{ii} = \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}$ .
|
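The Laplacian construction $\mathfrak{L} = \mathfrak{D} - \mathfrak{A}$ described above can be sketched in a few lines (the path graph below is an illustrative example, not a topology from the paper):

```python
def laplacian(adj):
    """L = D - A for an undirected graph given its adjacency matrix:
    diagonal entries are node degrees, off-diagonal entries are -a_ij."""
    n = len(adj)
    return [[(sum(adj[i]) if i == j else 0) - adj[i][j] for j in range(n)]
            for i in range(n)]

# Path graph v1 - v2 - v3:
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
L = laplacian(A)
```

Every row of the Laplacian sums to zero, which is the property consensus protocols rely on.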
| 66 |
+
|
| 67 |
+
## B. Problem Formulation
|
| 68 |
+
|
| 69 |
+
Consider a MAS with $N$ agents $\left( {i \in \{ 1,\ldots , N\} }\right)$ ; the dynamics of the $i$ th agent with actuator faults are given by:
|
| 70 |
+
|
| 71 |
+
$$
|
| 72 |
+
{\dot{x}}_{i}\left( t\right) = A{x}_{i}\left( t\right) + B\left( {{u}_{i}\left( t\right) + {f}_{i}\left( t\right) }\right) + D{\omega }_{i}\left( t\right)
|
| 73 |
+
$$
|
| 74 |
+
|
| 75 |
+
$$
|
| 76 |
+
{y}_{i}\left( t\right) = C{x}_{i}\left( t\right) \tag{1}
|
| 77 |
+
$$
|
| 78 |
+
|
| 79 |
+
where ${x}_{i}\left( t\right) \in {\mathbf{R}}^{n},{u}_{i}\left( t\right) \in {\mathbf{R}}^{m},{y}_{i}\left( t\right) \in {\mathbf{R}}^{p}$ represent the agent’s state, input, and output, respectively. The terms ${f}_{i}\left( t\right) \in$ ${\mathbf{R}}^{q}$ and ${\omega }_{i}\left( t\right) \in {\mathbf{R}}^{s}$ denote the actuator fault and external disturbance, respectively. The matrices $A, B, C$ , and $D$ are constants with appropriate dimensions.
|
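A hedged sketch of rolling out dynamics (1) with a forward-Euler step; the scalar matrices, input, fault, and disturbance signals below are illustrative choices, not values from the paper:

```python
def simulate_agent(A, B, D, x0, u, f, w, dt=0.01, steps=100):
    """Forward-Euler roll-out of agent dynamics (1):
    x' = A x + B (u(t) + f(t)) + D w(t)."""
    def mv(M, v):  # matrix-vector product
        return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

    x = list(x0)
    for k in range(steps):
        t = k * dt
        drift = mv(A, x)                                      # A x
        actu  = mv(B, [ui + fi for ui, fi in zip(u(t), f(t))])  # B (u + f)
        dist  = mv(D, w(t))                                   # D w
        x = [xi + dt * (a + b + d) for xi, a, b, d in zip(x, drift, actu, dist)]
    return x

# Scalar example x' = -x + (u + f): fault-free vs. constant actuator fault f = 1
A, B, D = [[-1.0]], [[1.0]], [[0.0]]
zero = lambda t: [0.0]
x_nom   = simulate_agent(A, B, D, [0.0], zero, zero, zero)
x_fault = simulate_agent(A, B, D, [0.0], zero, lambda t: [1.0], zero)
```

The fault drives the state away from the nominal trajectory, which is exactly the deviation the fault-tolerant controller below must compensate.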
| 80 |
+
|
| 81 |
+
This paper aims to propose a global fault-tolerant average consensus controller so that the states of all agents achieve global average consensus, i.e., the global average consensus error ${\widetilde{x}}_{i}\left( t\right)$ satisfies:
|
| 82 |
+
|
| 83 |
+
$$
|
| 84 |
+
{\widetilde{x}}_{i}\left( t\right) = {x}_{i}\left( t\right) - \frac{1}{N}\mathop{\sum }\limits_{{j = 1}}^{N}{x}_{j}\left( t\right) \rightarrow 0. \tag{2}
|
| 85 |
+
$$
|
| 86 |
+
|
| 87 |
+
To facilitate subsequent analysis, the following useful assumptions and lemma are given:
|
| 88 |
+
|
| 89 |
+
Assumption 1.
|
| 90 |
+
|
| 91 |
+
$$
|
| 92 |
+
\operatorname{rank}\left\lbrack \begin{matrix} \mathbf{I} & D \\ C & \mathbf{0} \end{matrix}\right\rbrack = n + \operatorname{rank}\left( D\right) . \tag{3}
|
| 93 |
+
$$
|
| 94 |
+
|
| 95 |
+
Assumption 2. [22] The actuator fault ${f}_{i}\left( t\right)$ is differentiable with respect to time, and its time derivative ${\dot{f}}_{i}\left( t\right)$ belongs to ${L}_{2}\lbrack 0,\infty )$ . Similarly, the external disturbance ${\omega }_{i}\left( t\right)$ is bounded and also belongs to ${L}_{2}\lbrack 0,\infty )$ .
|
| 96 |
+
|
| 97 |
+
Lemma 1. [21] For the undirected and connected graph $\mathfrak{g}$ with Laplacian $\mathfrak{L}$ and averaging matrix $\mathcal{M} = {\mathbf{I}}_{N} - \frac{1}{N}{\mathbf{1}}_{N}{\mathbf{1}}_{N}^{\top }$ , one has $\mathfrak{L}\mathcal{M} = \mathcal{M}\mathfrak{L} = \mathfrak{L}$ .
|
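Lemma 1 can be checked numerically. Here $\mathcal{M}$ is taken as the standard averaging-complement matrix $I_N - \frac{1}{N}\mathbf{1}\mathbf{1}^{\top}$ (an assumption consistent with [21]); the identity follows because the Laplacian's row and column sums are zero:

```python
def matmul(X, Y):
    """Plain nested-loop matrix product."""
    n, k, m = len(X), len(Y), len(Y[0])
    return [[sum(X[i][t] * Y[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

N = 3
L = [[1, -1, 0], [-1, 2, -1], [0, -1, 1]]   # path-graph Laplacian (L 1 = 0)
M = [[(1 if i == j else 0) - 1 / N for j in range(N)] for i in range(N)]

LM = matmul(L, M)   # L M = L - (1/N) L 1 1^T = L, since L 1 = 0
ML = matmul(M, L)   # M L = L - (1/N) 1 1^T L = L, since 1^T L = 0
```

This is why working with the average consensus error (2) does not change the diffusive coupling terms in the protocol.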
| 98 |
+
|
| 99 |
+
## III. MAIN RESULTS

A. Distributed unknown input observer-based global fault-tolerant average consensus control scheme

To reconstruct the state and actuator fault of each agent, the following relative estimation error-based distributed unknown input observer is proposed for agent $i$:
$$
{\dot{m}}_{i}\left( t\right) = {\Upsilon A}{\widehat{x}}_{i}\left( t\right) + {\Upsilon B}\left( {{u}_{i}\left( t\right) + {\widehat{f}}_{i}\left( t\right) }\right) + {L}_{1}\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{\eta }_{i} - {\eta }_{j}}\right\rbrack }\right\}
$$

$$
{\widehat{x}}_{i}\left( t\right) = {m}_{i}\left( t\right) + \Theta {y}_{i}\left( t\right)
$$

$$
{\dot{\widehat{f}}}_{i}\left( t\right) = {L}_{2}\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{\eta }_{i} - {\eta }_{j}}\right\rbrack }\right\}
$$

$$
{\widehat{y}}_{i}\left( t\right) = C{\widehat{x}}_{i}\left( t\right) \tag{4}
$$
where ${m}_{i}\left( t\right) ,{\widehat{x}}_{i}\left( t\right) ,{\widehat{f}}_{i}\left( t\right)$, and ${\widehat{y}}_{i}\left( t\right)$ denote the state of the unknown input observer, the state estimation, the actuator fault estimation, and the output estimation for agent $i$, respectively. Here, ${\eta }_{i} = {y}_{i}\left( t\right) - {\widehat{y}}_{i}\left( t\right)$ denotes the output estimation error, and ${\eta }_{i} - {\eta }_{j}$ denotes the relative estimation error. In addition, the global fault-tolerant average consensus controller for agent $i$ is proposed:

$$
{u}_{i}\left( t\right) = E{\widehat{x}}_{i}\left( t\right) - {\widehat{f}}_{i}\left( t\right) + K\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{\eta }_{i} - {\eta }_{j}}\right\rbrack }\right\} . \tag{5}
$$
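A minimal per-agent sketch of how the control law (5) might be evaluated at one time instant (all gain values and the graph encoding `neighbors`/`a` below are placeholders for illustration, not the gains designed in the paper):

```python
import numpy as np

def controller_u(i, x_hat, f_hat, eta, neighbors, a, E, K):
    """Fault-tolerant average consensus control law (5) for agent i."""
    # Relative output-estimation-error feedback over the neighbor set N_i.
    rel = sum(a[i][j] * (eta[i] - eta[j]) for j in neighbors[i])
    return E @ x_hat[i] - f_hat[i] + K @ rel

# Toy data: 3 agents on a path graph, n = 2 states, scalar input/fault.
n, m = 2, 1
neighbors = {0: [1], 1: [0, 2], 2: [1]}
a = np.ones((3, 3))                  # unit edge weights (placeholder)
E = np.array([[-1.0, -0.5]])         # placeholder feedback gain (1 x 2)
K = np.array([[0.1, 0.1]])           # placeholder gain on the 2-dim output error
x_hat = [np.zeros(n) for _ in range(3)]
f_hat = [np.zeros(m) for _ in range(3)]
eta = [np.random.randn(2) for _ in range(3)]  # output estimation errors

u1 = controller_u(1, x_hat, f_hat, eta, neighbors, a, E, K)
```

Note that the controller only needs each agent's own estimates and the relative errors of its neighbors, so it is fully distributed.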
Then, for agent $i$, the state estimation error system can be denoted as below:

$$
{\dot{e}}_{xi}\left( t\right) = {\dot{x}}_{i}\left( t\right) - {\dot{m}}_{i}\left( t\right) - {\Theta C}{\dot{x}}_{i}\left( t\right) . \tag{6}
$$
The following condition for the matrices $\Upsilon$ and $\Theta$ can be obtained based on Assumption 1:

$$
\left\lbrack \begin{array}{ll} \mathbf{\Upsilon } & \Theta \end{array}\right\rbrack \left\lbrack \begin{matrix} \mathbf{I} & D \\ C & \mathbf{0} \end{matrix}\right\rbrack = \left\lbrack \begin{array}{ll} \mathbf{I} & \mathbf{0} \end{array}\right\rbrack
$$

which can be rewritten as

$$
{\Upsilon D} = \mathbf{0},\quad \mathbf{I} - {\Theta C} = \Upsilon . \tag{7}
$$
Then, based on the above conditions, one has

$$
{\dot{e}}_{xi}\left( t\right) = {\Upsilon A}{x}_{i}\left( t\right) + {\Upsilon B}\left( {{u}_{i}\left( t\right) + {f}_{i}\left( t\right) }\right) + {\Upsilon D}{\omega }_{i}\left( t\right) - {\Upsilon A}{\widehat{x}}_{i}\left( t\right) - {\Upsilon B}\left( {{u}_{i}\left( t\right) + {\widehat{f}}_{i}\left( t\right) }\right) - {L}_{1}\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{\eta }_{i} - {\eta }_{j}}\right\rbrack }\right\}
$$

$$
= {\Upsilon A}{e}_{xi}\left( t\right) + {\Upsilon B}{e}_{fi}\left( t\right) - {L}_{1}C\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{e}_{xi}\left( t\right) - {e}_{xj}\left( t\right) }\right\rbrack }\right\} , \tag{8}
$$
and the fault estimation error system can be denoted as:

$$
{\dot{e}}_{fi}\left( t\right) = - {L}_{2}C\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{e}_{xi}\left( t\right) - {e}_{xj}\left( t\right) }\right\rbrack }\right\} + {\dot{f}}_{i}\left( t\right) . \tag{9}
$$
Denoting the vector ${e}_{i}\left( t\right) = {\left\lbrack {{e}_{xi}^{T}\left( t\right) ,{e}_{fi}^{T}\left( t\right) }\right\rbrack }^{T}$, the augmented estimation error system can be obtained:

$$
{\dot{e}}_{i}\left( t\right) = \widetilde{A}{e}_{i}\left( t\right) - L\bar{C}\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{e}_{i}\left( t\right) - {e}_{j}\left( t\right) }\right\rbrack }\right\} + \widehat{I}{\dot{f}}_{i}\left( t\right) \tag{10}
$$
where

$$
\widetilde{A} = \left\lbrack \begin{matrix} {\Upsilon A} & {\Upsilon B} \\ \mathbf{0} & \mathbf{0} \end{matrix}\right\rbrack , L = \left\lbrack \begin{array}{l} {L}_{1} \\ {L}_{2} \end{array}\right\rbrack ,\bar{C} = \left\lbrack \begin{array}{ll} C & \mathbf{0} \end{array}\right\rbrack ,\widehat{I} = \left\lbrack \begin{array}{l} \mathbf{0} \\ \mathbf{I} \end{array}\right\rbrack .
$$
Defining the vectors

$$
\dot{f}\left( t\right) = {\left\lbrack \begin{array}{lll} {\dot{f}}_{1}^{T}\left( t\right) & \ldots & {\dot{f}}_{N}^{T}\left( t\right) \end{array}\right\rbrack }^{T},
$$

$$
e\left( t\right) = {\left\lbrack \begin{array}{lll} {e}_{1}^{T}\left( t\right) & \ldots & {e}_{N}^{T}\left( t\right) \end{array}\right\rbrack }^{T},
$$

the estimation error system can be rewritten as:

$$
\dot{e}\left( t\right) = \left( {{I}_{N} \otimes \widetilde{A} - \mathfrak{L} \otimes L\bar{C}}\right) e\left( t\right) + \left( {{I}_{N} \otimes \widehat{I}}\right) \dot{f}\left( t\right) . \tag{11}
$$
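The equivalence between the per-agent form (10) and the stacked Kronecker form (11) can be sanity-checked numerically; a minimal sketch with random placeholder matrices and a 3-agent path graph (with $\dot{f} = 0$ for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)
N, na = 3, 3                         # agents; dimension of the augmented error e_i
A_t = rng.standard_normal((na, na))  # plays the role of A-tilde
L_g = rng.standard_normal((na, 1))   # plays the role of L = [L1; L2]
C_b = rng.standard_normal((1, na))   # plays the role of C-bar
Lap = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])  # path graph
e = rng.standard_normal((N, na))     # per-agent augmented errors e_i

# Per-agent form (10): the neighbor sum equals the i-th row of Lap @ e,
# since a_ij = -Lap[i, j] for j != i.
de_agent = np.stack([
    A_t @ e[i] - (L_g @ C_b) @ sum(-Lap[i, j] * (e[i] - e[j])
                                   for j in range(N) if j != i)
    for i in range(N)
])

# Stacked form (11): e-dot = (I_N (x) A-tilde - Lap (x) L C-bar) e.
big = np.kron(np.eye(N), A_t) - np.kron(Lap, L_g @ C_b)
de_stacked = (big @ e.reshape(-1)).reshape(N, na)

assert np.allclose(de_agent, de_stacked)
```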
In addition, for agent $i$, the closed-loop system can be denoted as:

$$
{\dot{x}}_{i}\left( t\right) = A{x}_{i}\left( t\right) + B\left( {E{\widehat{x}}_{i}\left( t\right) - {\widehat{f}}_{i}\left( t\right) + K\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{\eta }_{i} - {\eta }_{j}}\right\rbrack }\right\} + {f}_{i}\left( t\right) }\right) + D{\omega }_{i}\left( t\right)
$$

$$
= \left( {A + {BE}}\right) {x}_{i}\left( t\right) - {BE}{e}_{xi}\left( t\right) + B{e}_{fi}\left( t\right) + {BKC}\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{e}_{xi}\left( t\right) - {e}_{xj}\left( t\right) }\right\rbrack }\right\} + D{\omega }_{i}\left( t\right)
$$

$$
= \left( {A + {BE}}\right) {x}_{i}\left( t\right) + \widetilde{B}{e}_{i}\left( t\right) + {BK}\bar{C}\left\{ {\mathop{\sum }\limits_{{j \in {N}_{i}}}{a}_{ij}\left\lbrack {{e}_{i}\left( t\right) - {e}_{j}\left( t\right) }\right\rbrack }\right\} + D{\omega }_{i}\left( t\right) \tag{12}
$$

where $\widetilde{B} = \left\lbrack \begin{array}{ll} - {BE} & B \end{array}\right\rbrack$.
To achieve global average consensus, recall the global average consensus error (2) for agent $i$ and define the vectors

$$
\widetilde{x}\left( t\right) = {\left\lbrack \begin{array}{lll} {\widetilde{x}}_{1}^{T}\left( t\right) & \ldots & {\widetilde{x}}_{N}^{T}\left( t\right) \end{array}\right\rbrack }^{T},
$$

$$
x\left( t\right) = {\left\lbrack \begin{array}{lll} {x}_{1}^{T}\left( t\right) & \ldots & {x}_{N}^{T}\left( t\right) \end{array}\right\rbrack }^{T},
$$

$$
\omega \left( t\right) = {\left\lbrack \begin{array}{lll} {\omega }_{1}^{T}\left( t\right) & \ldots & {\omega }_{N}^{T}\left( t\right) \end{array}\right\rbrack }^{T}.
$$
Then, the closed-loop system can be rewritten as:

$$
\dot{x}\left( t\right) = \left( {{I}_{N} \otimes \left( {A + {BE}}\right) }\right) x\left( t\right) + \left( {{I}_{N} \otimes \widetilde{B} + \mathfrak{L} \otimes {BK}\bar{C}}\right) e\left( t\right) + \left( {{I}_{N} \otimes D}\right) \omega \left( t\right) . \tag{13}
$$
Thus, the global average consensus error can be written as

$$
\widetilde{x}\left( t\right) = \left( {\mathcal{M} \otimes {I}_{n}}\right) x\left( t\right) \tag{14}
$$

where $\mathcal{M} = {I}_{N} - \frac{{1}_{N}{1}_{N}^{T}}{N}$, and its dynamics can be denoted as
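Identity (14) just says that stacking the per-agent errors (2) equals applying $\mathcal{M} \otimes {I}_{n}$ to the stacked state; a minimal numerical check (using the initial states from Section IV):

```python
import numpy as np

N, n = 5, 2
x_agents = np.array([[8., 8.], [8., -8.], [-8., 8.], [-8., -8.], [7., 12.]])

# Per-agent definition (2): x_tilde_i = x_i - average of all agent states.
x_tilde_agents = x_agents - x_agents.mean(axis=0)

# Stacked form (14): x_tilde = (M (x) I_n) x.
M = np.eye(N) - np.ones((N, N)) / N
x_tilde_stacked = np.kron(M, np.eye(n)) @ x_agents.reshape(-1)

assert np.allclose(x_tilde_stacked, x_tilde_agents.reshape(-1))
```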
$$
\dot{\widetilde{x}}\left( t\right) = \left( {\mathcal{M} \otimes {I}_{n}}\right) \left( {{I}_{N} \otimes \left( {A + {BE}}\right) }\right) x\left( t\right) + \left( {\mathcal{M} \otimes {I}_{n}}\right) \left( {{I}_{N} \otimes \widetilde{B} + \mathfrak{L} \otimes {BK}\bar{C}}\right) e\left( t\right) + \left( {\mathcal{M} \otimes {I}_{n}}\right) \left( {{I}_{N} \otimes D}\right) \omega \left( t\right)
$$

$$
= \left( {{I}_{N} \otimes \left( {A + {BE}}\right) }\right) \widetilde{x}\left( t\right) + \left( {\mathcal{M} \otimes \widetilde{B} + \mathfrak{L} \otimes {BK}\bar{C}}\right) e\left( t\right) + \left( {\mathcal{M} \otimes D}\right) \omega \left( t\right) \tag{15}
$$

where the second equality follows from the mixed-product property of the Kronecker product and Lemma 1.
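The simplification from the first to the second line of (15) uses exactly two facts: the Kronecker mixed-product property, and $\mathcal{M}\mathfrak{L} = \mathfrak{L}$ from Lemma 1. A minimal numerical check of both, with placeholder matrices standing in for $A + BE$ and $BK\bar{C}$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 5, 2
M = np.eye(N) - np.ones((N, N)) / N
Lap = np.array([[ 2,  0, -1, -1,  0],
                [ 0,  2,  0, -1, -1],
                [-1,  0,  2, -1,  0],
                [-1, -1, -1,  3,  0],
                [ 0, -1,  0,  0,  1]], dtype=float)
Acl = rng.standard_normal((n, n))   # plays the role of A + BE
G = rng.standard_normal((n, 4))     # plays the role of BK C-bar

# Mixed product: (M (x) I_n)(I_N (x) Acl) = (I_N (x) Acl)(M (x) I_n),
# which turns (M (x) I_n) x into x-tilde in the first term of (15).
lhs = np.kron(M, np.eye(n)) @ np.kron(np.eye(N), Acl)
rhs = np.kron(np.eye(N), Acl) @ np.kron(M, np.eye(n))
assert np.allclose(lhs, rhs)

# Lemma 1: (M (x) I_n)(Lap (x) G) = (M Lap) (x) G = Lap (x) G.
assert np.allclose(np.kron(M, np.eye(n)) @ np.kron(Lap, G), np.kron(Lap, G))
```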
## B. Stability analysis

Theorem 1. For a given scalar $\alpha > 0$, matrices $\Upsilon ,\Theta , L, K$, controller feedback gain matrix $E$, Laplacian matrix $\mathfrak{L}$, and matrix $\mathcal{M}$, if there exist matrices $Q = {Q}^{T} > 0$, $P = {P}^{T} > 0$ with appropriate dimensions such that the following condition holds

$$
\Phi = \left\lbrack \begin{matrix} {\Phi }_{1} & {\Phi }_{2} & {\Phi }_{3} & \mathbf{0} \\ * & {\Phi }_{4} & \mathbf{0} & {\Phi }_{5} \\ * & * & {\Phi }_{6} & \mathbf{0} \\ * & * & * & {\Phi }_{7} \end{matrix}\right\rbrack < 0 \tag{16}
$$

where ${\Phi }_{1} = \operatorname{He}\left\{ {{I}_{N} \otimes \left( {{QA} + {QBE}}\right) }\right\} + \alpha {I}_{N} \otimes Q$, ${\Phi }_{2} = \mathcal{M} \otimes Q\widetilde{B} + \mathfrak{L} \otimes {QBK}\bar{C}$, ${\Phi }_{3} = \mathcal{M} \otimes {QD}$, ${\Phi }_{4} = \operatorname{He}\left\{ {{I}_{N} \otimes P\widetilde{A} - \mathfrak{L} \otimes {PL}\bar{C}}\right\} + \alpha {I}_{N} \otimes P$, ${\Phi }_{5} = {I}_{N} \otimes P\widehat{I}$, ${\Phi }_{6} = - {I}_{N} \otimes {I}_{{n}_{\omega }}$, and ${\Phi }_{7} = - {I}_{N} \otimes {I}_{{n}_{f}}$, then all the signals of the estimation error system (11) and the global average consensus error system (15) are bounded.

Proof. The Lyapunov function can be chosen as below:
$$
V\left( t\right) = {V}_{1}\left( t\right) + {V}_{2}\left( t\right) \tag{17}
$$

where ${V}_{1}\left( t\right) = {\widetilde{x}}^{T}\left( t\right) \widetilde{Q}\widetilde{x}\left( t\right)$, ${V}_{2}\left( t\right) = {e}^{T}\left( t\right) \widetilde{P}e\left( t\right)$, $\widetilde{P} = {I}_{N} \otimes P$, and $\widetilde{Q} = {I}_{N} \otimes Q$. Taking the derivative of the above function, the following can be obtained:
$$
\dot{V}\left( t\right) \leq 2{e}^{T}\left( t\right) \widetilde{P}\dot{e}\left( t\right) + 2{\widetilde{x}}^{T}\left( t\right) \widetilde{Q}\dot{\widetilde{x}}\left( t\right)
$$

$$
\leq 2{e}^{T}\left( t\right) \widetilde{P}\left( {\left( {{I}_{N} \otimes \widetilde{A} - \mathfrak{L} \otimes L\bar{C}}\right) e\left( t\right) + \left( {{I}_{N} \otimes \widehat{I}}\right) \dot{f}\left( t\right) }\right) + 2{\widetilde{x}}^{T}\left( t\right) \widetilde{Q}\left( {\left( {{I}_{N} \otimes \left( {A + {BE}}\right) }\right) \widetilde{x}\left( t\right) + \left( {\mathcal{M} \otimes \widetilde{B} + \mathfrak{L} \otimes {BK}\bar{C}}\right) e\left( t\right) + \left( {\mathcal{M} \otimes D}\right) \omega \left( t\right) }\right)
$$

$$
\leq {e}^{T}\left( t\right) \operatorname{He}\left\{ {\left( {{I}_{N} \otimes P}\right) \left( {{I}_{N} \otimes \widetilde{A} - \mathfrak{L} \otimes L\bar{C}}\right) }\right\} e\left( t\right) + 2{e}^{T}\left( t\right) \left( {{I}_{N} \otimes P}\right) \left( {{I}_{N} \otimes \widehat{I}}\right) \dot{f}\left( t\right) + {\widetilde{x}}^{T}\left( t\right) \operatorname{He}\left\{ {\left( {{I}_{N} \otimes Q}\right) \left( {{I}_{N} \otimes \left( {A + {BE}}\right) }\right) }\right\} \widetilde{x}\left( t\right) + 2{\widetilde{x}}^{T}\left( t\right) \left( {{I}_{N} \otimes Q}\right) \left( {\mathcal{M} \otimes \widetilde{B} + \mathfrak{L} \otimes {BK}\bar{C}}\right) e\left( t\right) + 2{\widetilde{x}}^{T}\left( t\right) \left( {{I}_{N} \otimes Q}\right) \left( {\mathcal{M} \otimes D}\right) \omega \left( t\right) . \tag{18}
$$

According to the properties of the Kronecker product, we can get:
$$
\dot{V}\left( t\right) \leq {e}^{T}\left( t\right) \operatorname{He}\left\{ {{I}_{N} \otimes P\widetilde{A} - \mathfrak{L} \otimes {PL}\bar{C}}\right\} e\left( t\right) + {\widetilde{x}}^{T}\left( t\right) \operatorname{He}\left\{ {{I}_{N} \otimes \left( {{QA} + {QBE}}\right) }\right\} \widetilde{x}\left( t\right) + 2{\widetilde{x}}^{T}\left( t\right) \left( {\mathcal{M} \otimes Q\widetilde{B} + \mathfrak{L} \otimes {QBK}\bar{C}}\right) e\left( t\right) + 2{\widetilde{x}}^{T}\left( t\right) \left( {\mathcal{M} \otimes {QD}}\right) \omega \left( t\right) + 2{e}^{T}\left( t\right) \left( {{I}_{N} \otimes P\widehat{I}}\right) \dot{f}\left( t\right) .
$$
Define $\xi \left( t\right) = {\left\lbrack {{\widetilde{x}}^{T}\left( t\right) ,{e}^{T}\left( t\right) ,{\omega }^{T}\left( t\right) ,{\dot{f}}^{T}\left( t\right) }\right\rbrack }^{T}$. If the following linear matrix inequality holds

$$
\Phi = \left\lbrack \begin{matrix} {\Phi }_{1} & {\Phi }_{2} & {\Phi }_{3} & \mathbf{0} \\ * & {\Phi }_{4} & \mathbf{0} & {\Phi }_{5} \\ * & * & {\Phi }_{6} & \mathbf{0} \\ * & * & * & {\Phi }_{7} \end{matrix}\right\rbrack < 0 \tag{19}
$$

where

$$
{\Phi }_{1} = \operatorname{He}\left\{ {{I}_{N} \otimes \left( {{QA} + {QBE}}\right) }\right\} + \alpha {I}_{N} \otimes Q,
$$

$$
{\Phi }_{2} = \mathcal{M} \otimes Q\widetilde{B} + \mathfrak{L} \otimes {QBK}\bar{C},
$$

$$
{\Phi }_{3} = \mathcal{M} \otimes {QD},
$$

$$
{\Phi }_{4} = \operatorname{He}\left\{ {{I}_{N} \otimes P\widetilde{A} - \mathfrak{L} \otimes {PL}\bar{C}}\right\} + \alpha {I}_{N} \otimes P,
$$

$$
{\Phi }_{5} = {I}_{N} \otimes P\widehat{I},
$$

$$
{\Phi }_{6} = - {I}_{N} \otimes {I}_{{n}_{\omega }},
$$

$$
{\Phi }_{7} = - {I}_{N} \otimes {I}_{{n}_{f}},
$$

we have
$$
\dot{V}\left( t\right) \leq - \alpha {e}^{T}\left( t\right) \widetilde{P}e\left( t\right) - \alpha {\widetilde{x}}^{T}\left( t\right) \widetilde{Q}\widetilde{x}\left( t\right) + \parallel \omega \left( t\right) {\parallel }^{2} + \parallel \dot{f}\left( t\right) {\parallel }^{2} \leq - {\alpha V}\left( t\right) + \Delta \left( t\right) \tag{20}
$$

where $\Delta \left( t\right) = \parallel \omega \left( t\right) {\parallel }^{2} + \parallel \dot{f}\left( t\right) {\parallel }^{2}$.
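Inequality (20) implies boundedness by the standard comparison argument: whenever $V > \sup \Delta / \alpha$, $\dot{V} < 0$, so $V$ can never exceed $\max \{ V(0), \sup \Delta / \alpha \}$. A minimal Euler-simulation sketch of the scalar comparison system (all numeric values are placeholders):

```python
import math

alpha, dt, T = 0.4, 1e-3, 30.0
V = 50.0               # arbitrary initial value of the Lyapunov function
delta_max = 4.0        # assumed bound on Delta(t) = ||w||^2 + ||f-dot||^2
bound = max(V, delta_max / alpha)

t = 0.0
while t < T:
    delta = delta_max * abs(math.sin(2.0 * t))   # any signal below delta_max
    V += dt * (-alpha * V + delta)               # worst case of (20)
    t += dt
    # V never exceeds max(V(0), delta_max / alpha).
    assert V <= bound + 1e-9
```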
From the above, the global average consensus of the MASs (1) and the boundedness of the estimation error system (11) can be guaranteed. The proof is completed.

The gain matrices $L, K$ can be solved for by some algebraic operations, as stated in the following theorem.
Theorem 2. For a given scalar $\alpha > 0$, matrices $\Upsilon ,\Theta$, controller feedback gain matrix $E$, Laplacian matrix $\mathfrak{L}$, and matrix $\mathcal{M}$, if there exist symmetric positive definite matrices $S, P$ and matrices $K,{P}_{L}$ with appropriate dimensions such that the following condition holds

$$
\Psi = \left\lbrack \begin{matrix} {\Psi }_{1} & {\Psi }_{2} & {\Psi }_{3} & \mathbf{0} \\ * & {\Psi }_{4} & \mathbf{0} & {\Psi }_{5} \\ * & * & {\Psi }_{6} & \mathbf{0} \\ * & * & * & {\Psi }_{7} \end{matrix}\right\rbrack < 0 \tag{21}
$$

where ${\Psi }_{1} = \operatorname{He}\left\{ {{I}_{N} \otimes \left( {{AS} + {BES}}\right) }\right\} + \alpha {I}_{N} \otimes S$, ${\Psi }_{2} = \mathcal{M} \otimes \widetilde{B} + \mathfrak{L} \otimes {BK}\bar{C}$, ${\Psi }_{3} = \mathcal{M} \otimes D$, ${\Psi }_{4} = \operatorname{He}\left\{ {{I}_{N} \otimes P\widetilde{A} - \mathfrak{L} \otimes {P}_{L}\bar{C}}\right\} + \alpha {I}_{N} \otimes P$, ${\Psi }_{5} = {I}_{N} \otimes P\widehat{I}$, ${\Psi }_{6} = - {I}_{N} \otimes {I}_{{n}_{\omega }}$, ${\Psi }_{7} = - {I}_{N} \otimes {I}_{{n}_{f}}$, and $S = {Q}^{-1}$, then all the signals of the estimation error system (11) and the global average consensus error system (15) are bounded, and the gain matrix is $L = {P}^{-1}{P}_{L}$.

Proof. Post- and pre-multiplying (19) by $\operatorname{diag}\left\{ {{I}_{N} \otimes {Q}^{-1},{I}_{N} \otimes {I}_{{n}_{x} + {n}_{f}},{I}_{N} \otimes {I}_{{n}_{\omega }},{I}_{N} \otimes {I}_{{n}_{f}}}\right\}$, the linear matrix inequality (21) can be deduced. The proof is completed.
## IV. EXAMPLE

In this example, a group of five agents is considered, and the dynamics of the agents are in the form of

$$
{\dot{x}}_{i}\left( t\right) = A{x}_{i}\left( t\right) + B\left( {{u}_{i}\left( t\right) + {f}_{i}\left( t\right) }\right) + D{\omega }_{i}\left( t\right)
$$

$$
{y}_{i}\left( t\right) = C{x}_{i}\left( t\right) \tag{22}
$$
which are borrowed from [23], and the parameter matrices are given as below

$$
A = \left\lbrack \begin{matrix} 0 & 1 \\ {0.2} & - 2 \end{matrix}\right\rbrack , B = \left\lbrack \begin{array}{l} 0 \\ 1 \end{array}\right\rbrack , C = \left\lbrack \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right\rbrack , D = \left\lbrack \begin{array}{l} {0.1} \\ {0.1} \end{array}\right\rbrack .
$$
The communication graph considered in this paper is shown below:

Fig. 1: Communication graph.
From Fig. 1, one has

$$
\mathfrak{L} = \left\lbrack \begin{matrix} 2 & 0 & - 1 & - 1 & 0 \\ 0 & 2 & 0 & - 1 & - 1 \\ - 1 & 0 & 2 & - 1 & 0 \\ - 1 & - 1 & - 1 & 3 & 0 \\ 0 & - 1 & 0 & 0 & 1 \end{matrix}\right\rbrack .
$$
To obtain the pre-designed unknown input observer gain matrices, the matrix ${M}_{\varkappa }$ can be selected as follows:

$$
{M}_{\varkappa } = \left\lbrack \begin{array}{llll} - {6.7245} & - {9.1869} & - {9.4050} & - {7.5082} \\ - {5.2013} & - {8.2981} & - {7.0737} & - {8.8809} \end{array}\right\rbrack ,
$$
according to the following condition

$$
\left\lbrack \begin{array}{ll} \mathbf{\Upsilon } & \Theta \end{array}\right\rbrack = \left\lbrack \begin{array}{ll} \mathbf{I} & \mathbf{0} \end{array}\right\rbrack \times {\left\lbrack \begin{matrix} \mathbf{I} & D \\ C & \mathbf{0} \end{matrix}\right\rbrack }^{ \dagger } - {M}_{\varkappa }\left( {\mathbf{I} - \left\lbrack \begin{matrix} \mathbf{I} & D \\ C & \mathbf{0} \end{matrix}\right\rbrack \times {\left\lbrack \begin{matrix} \mathbf{I} & D \\ C & \mathbf{0} \end{matrix}\right\rbrack }^{ \dagger }}\right) .
$$
Thus, the pre-designed unknown input observer gain matrices can be obtained:

$$
\Upsilon = \left\lbrack \begin{matrix} {0.1086} & - {0.1086} \\ - {1.4760} & {1.4760} \end{matrix}\right\rbrack ,\Theta = \left\lbrack \begin{matrix} {0.1086} & {0.8914} \\ - {0.4760} & {1.4760} \end{matrix}\right\rbrack .
$$
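A quick numerical sanity check (a sketch using the printed values) that these gains satisfy condition (7), i.e. $\Upsilon D = \mathbf{0}$ and $\mathbf{I} - \Theta C = \Upsilon$:

```python
import numpy as np

C = np.array([[0.0, 1.0], [1.0, 0.0]])
D = np.array([[0.1], [0.1]])
Upsilon = np.array([[ 0.1086, -0.1086],
                    [-1.4760,  1.4760]])
Theta = np.array([[ 0.1086, 0.8914],
                  [-0.4760, 1.4760]])

# Condition (7): Upsilon D = 0 and I - Theta C = Upsilon.
assert np.allclose(Upsilon @ D, 0.0, atol=1e-3)
assert np.allclose(np.eye(2) - Theta @ C, Upsilon, atol=1e-3)
```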
Then, the parameters required to solve Theorem 2 are selected as $E = \left\lbrack \begin{array}{ll} -{18.7279} & -{7.9363} \end{array}\right\rbrack$, $\alpha = {0.4}$. The following matrices make inequality (21) negative definite:
$$
P = \left\lbrack \begin{matrix} {22.2529} & {0.8245} & {0.2564} \\ {0.8245} & {7.9547} & - {2.6069} \\ {0.2564} & - {2.6069} & {1.0677} \end{matrix}\right\rbrack ,
$$

$$
S = \left\lbrack \begin{matrix} {14.8878} & - {23.6762} \\ - {23.6762} & {47.0985} \end{matrix}\right\rbrack ,
$$

$$
K = \left\lbrack \begin{array}{ll} - {2.7207} & - {6.7659} \end{array}\right\rbrack ,
$$

$$
{P}_{L} = \left\lbrack \begin{matrix} {0.3396} & {12.6409} \\ - {0.9358} & {1.4581} \\ {6.4466} & - {0.6400} \end{matrix}\right\rbrack
$$
where the gain matrix

$$
L = {P}^{-1}{P}_{L} = \left\lbrack \begin{matrix} - {0.7049} & {0.6177} \\ {9.9518} & - {0.6290} \\ {30.5037} & - {2.2834} \end{matrix}\right\rbrack .
$$
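The solution can be sanity-checked numerically (a sketch using the printed values): $P$ and $S$ must be symmetric positive definite, and the recovered gain must satisfy $PL = {P}_{L}$:

```python
import numpy as np

P = np.array([[22.2529,  0.8245,  0.2564],
              [ 0.8245,  7.9547, -2.6069],
              [ 0.2564, -2.6069,  1.0677]])
S = np.array([[ 14.8878, -23.6762],
              [-23.6762,  47.0985]])
P_L = np.array([[ 0.3396, 12.6409],
                [-0.9358,  1.4581],
                [ 6.4466, -0.6400]])

# Both LMI variables must be symmetric positive definite.
assert np.all(np.linalg.eigvalsh(P) > 0)
assert np.all(np.linalg.eigvalsh(S) > 0)

# Observer gain recovered from the LMI variables: L = P^{-1} P_L,
# computed via a linear solve rather than an explicit inverse.
L_gain = np.linalg.solve(P, P_L)
assert np.allclose(P @ L_gain, P_L)
```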
Next, experimental results are presented to verify the effectiveness of the proposed scheme. The initial states of the agents are selected as ${x}_{1}\left( 0\right) = \left\lbrack {8;8}\right\rbrack$, ${x}_{2}\left( 0\right) = \left\lbrack {8; - 8}\right\rbrack$, ${x}_{3}\left( 0\right) = \left\lbrack {-8;8}\right\rbrack$, ${x}_{4}\left( 0\right) = \left\lbrack {-8; - 8}\right\rbrack$, ${x}_{5}\left( 0\right) = \left\lbrack {7;{12}}\right\rbrack$. The external disturbance is ${\omega }_{i}\left( t\right) = {30}\sin \left( {2t}\right)$, agents 1 and 2 are considered to be faulty, and the faults they encounter are given as follows:
$$
{f}_{1}\left( t\right) = \left\{ \begin{array}{ll} 2{e}^{-{0.1}\left( {t - 5}\right) }\sin \left( {{1.2}\left( {t - 5}\right) }\right) , & t \in \left\lbrack {5,{10}}\right\rbrack \\ 0, & \text{ otherwise, } \end{array}\right.
$$

$$
{f}_{2}\left( t\right) = \left\{ \begin{array}{ll} 2\sin \left( {{1.2}\left( {t - {15}}\right) }\right) , & t \in \left\lbrack {{15},{20}}\right\rbrack \\ 0, & \text{ otherwise. } \end{array}\right.
$$
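The two fault profiles are straightforward to implement for simulation; a minimal sketch:

```python
import math

def f1(t):
    """Decaying sinusoidal actuator fault on agent 1, active on [5, 10]."""
    if 5.0 <= t <= 10.0:
        return 2.0 * math.exp(-0.1 * (t - 5.0)) * math.sin(1.2 * (t - 5.0))
    return 0.0

def f2(t):
    """Sinusoidal actuator fault on agent 2, active on [15, 20]."""
    if 15.0 <= t <= 20.0:
        return 2.0 * math.sin(1.2 * (t - 15.0))
    return 0.0

# Both faults vanish outside their activation windows and start from zero,
# consistent with the differentiability requirement of Assumption 2.
assert f1(0.0) == 0.0 and f1(5.0) == 0.0
assert f2(30.0) == 0.0 and f2(15.0) == 0.0
```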

|
| 502 |
+
|
| 503 |
+
Fig. 2: Curves of state/fault and their estimations (agent 1).
|
| 504 |
+
|
| 505 |
+

|
| 506 |
+
|
| 507 |
+
Fig. 3: Curves of state/fault and their estimations (agent 2).
|
| 508 |
+
|
| 509 |
+

|
| 510 |
+
|
| 511 |
+
Fig. 4: Curves of state/fault and their estimations (agent 3).
|
| 512 |
+
|
| 513 |
+

|
| 514 |
+
|
| 515 |
+
Fig. 5: Curves of state/fault and their estimations (agent 4).
|
| 516 |
+
|
| 517 |
+

|
| 518 |
+
|
| 519 |
+
Fig. 6: Curves of state/fault and their estimations (agent 5).
|
| 520 |
+
|
| 521 |
+

|
| 522 |
+
|
| 523 |
+
Fig. 7: Curves of global average consensus error ${\widetilde{x}}_{i}\left( t\right)$ .
|
| 524 |
+
|
| 525 |
+
As can be seen from Figs. 2-6, the proposed scheme (4) effectively reduces the influence of the external disturbance ${\omega }_{i}\left( t\right)$ on the estimation performance and realizes accurate estimations of the agent states and faults. Based on these accurate estimations and the relative estimation error ${\eta }_{i} - {\eta }_{j}$, the proposed global fault-tolerant average consensus controller (5) makes the global average consensus errors ${\widetilde{x}}_{i}\left( t\right)$ approach zero, as shown in Fig. 7.
## V. CONCLUSION

In this paper, the distributed unknown input observer-based global fault-tolerant average consensus control problem for linear MASs has been investigated. First, a distributed unknown input observer based on the relative estimation error has been proposed, which can mitigate the impact of external disturbances on estimation performance, thereby achieving accurate estimations of state and fault. Then, based on the obtained estimations and the relative estimation error, a global fault-tolerant average consensus controller has been developed. The proposed scheme can compensate for fault impacts while ensuring global average consensus of the MASs. Finally, simulation experiments have been given to validate the effectiveness of the proposed control scheme.

## REFERENCES
[1] L. Ding, Q.-L. Han, X. Ge, and X.-M. Zhang, "An overview of recent advances in event-triggered consensus of multiagent systems," IEEE Transactions on Cybernetics, vol. 48, no. 4, pp. 1110-1123, 2018.

[2] J. Long, W. Wang, C. Wen, J. Huang, and Y. Guo, "Output-feedback-based adaptive leaderless consensus for heterogeneous nonlinear multiagent systems with switching topologies," IEEE Transactions on Cybernetics, 2024, doi:10.1109/TCYB.2024.3418825.

[3] H. Zhang, W. Zhao, X. Xie, and D. Yue, "Dynamic leader-follower output containment control of heterogeneous multiagent systems using reinforcement learning," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2024, doi:10.1109/TSMC.2024.3406777.

[4] H. Zhou and S. Tong, "Fuzzy adaptive event-triggered resilient formation control for nonlinear multiagent systems under DoS attacks and input saturation," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 54, no. 6, pp. 3665-3674, 2024.

[5] B. Wang, S. Sun, and W. Ren, "Distributed time-varying quadratic optimal resource allocation subject to nonidentical time-varying Hessians with application to multiquadrotor hose transportation," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 52, no. 10, pp. 6109-6119, 2022.

[6] B. Ning, Q.-L. Han, and Z. Zuo, "Distributed optimization for multi-agent systems: An edge-based fixed-time consensus approach," IEEE Transactions on Cybernetics, vol. 49, no. 1, pp. 122-132, 2019.

[7] S. Z. Tajalli, A. Kavousi-Fard, M. Mardaneh, A. Khosravi, and R. Razavi-Far, "Uncertainty-aware management of smart grids using cloud-based LSTM-prediction interval," IEEE Transactions on Cybernetics, vol. 52, no. 10, pp. 9964-9977, 2022.

[8] R. Nie, W. Du, Z. Li, and S. He, "Finite-time consensus control for MASs under hidden Markov model mechanism," IEEE Transactions on Automatic Control, vol. 69, no. 7, pp. 4726-4733, 2024.

[9] C. Chen, F. L. Lewis, and X. Li, "Event-triggered coordination of multi-agent systems via a Lyapunov-based approach for leaderless consensus," Automatica, vol. 136, p. 109936, 2022.

[10] Z. Hu and B. Chen, "Sliding mode control for multi-agent systems under event-triggering hybrid scheduling strategy," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 71, no. 4, pp. 2184-2188, 2024.

[11] H. Du, S. Li, and X. Lin, "Finite-time formation control of multiagent systems via dynamic output feedback," International Journal of Robust and Nonlinear Control, vol. 23, no. 14, pp. 1609-1628, 2013.

[12] Q. Shen, P. Shi, J. Zhu, S. Wang, and Y. Shi, "Neural networks-based distributed adaptive control of nonlinear multiagent systems," IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 3, pp. 1010-1021, 2020.

[13] K.-K. Oh and H.-S. Ahn, "Formation control and network localization via orientation alignment," IEEE Transactions on Automatic Control, vol. 59, no. 2, pp. 540-545, 2014.

[14] S. Tong, H. Zhou, and Y. Li, "Neural network event-triggered formation fault-tolerant control for nonlinear multiagent systems with actuator faults," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 53, no. 12, pp. 7571-7582, 2023.

[15] X. Guo, G. Wei, and D. Ding, "Fault-tolerant consensus control for discrete-time multi-agent systems: A distributed adaptive sliding-mode scheme," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 70, no. 7, pp. 2515-2519, 2023.

[16] M. Yadegar and N. Meskin, "Fault-tolerant control of nonlinear heterogeneous multi-agent systems," Automatica, vol. 127, p. 109514, 2021.

[17] R. Sakthivel, B. Kaviarasan, C. K. Ahn, and H. R. Karimi, "Observer and stochastic faulty actuator-based reliable consensus protocol for multiagent system," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 48, no. 12, pp. 2383-2393, 2018.

[18] X. Zhu, Y. Xia, J. Han, X. Hu, and H. Yang, "Extended dissipative finite-time distributed time-varying delay active fault-tolerant consensus control for semi-Markov jump nonlinear multi-agent systems," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 71, no. 4, pp. 2269-2273, 2024.

[19] Q. Jia, W. Chen, Y. Zhang, and H. Li, "Fault reconstruction and fault-tolerant control via learning observers in Takagi-Sugeno fuzzy descriptor systems with time delays," IEEE Transactions on Industrial Electronics, vol. 62, no. 6, pp. 3885-3895, 2015.

[20] Y. Mu, H. Zhang, Z. Gao, and J. Zhang, "A fuzzy Lyapunov function approach for fault estimation of T-S fuzzy fractional-order systems based on unknown input observer," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 53, no. 2, pp. 1246-1255, 2023.

[21] C. Liu, B. Jiang, X. Wang, Y. Zhang, and S. Xie, "Event-based distributed secure control of unmanned surface vehicles with DoS attacks," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 54, no. 4, pp. 2159-2170, 2024.

[22] J. C. L. Chan, T. H. Lee, C. P. Tan, H. Trinh, and J. H. Park, "A nonlinear observer for robust fault reconstruction in one-sided Lipschitz and quadratically inner-bounded nonlinear descriptor systems," IEEE Access, vol. 9, pp. 22455-22469, 2021.

[23] A.-Y. Lu and G.-H. Yang, "Distributed consensus control for multi-agent systems under denial-of-service," Information Sciences, vol. 439, pp. 95-107, 2018.
|
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/3KOwuI0B5z/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
§ DISTRIBUTED UNKNOWN INPUT OBSERVER-BASED GLOBAL FAULT-TOLERANT AVERAGE CONSENSUS CONTROL FOR LINEAR MULTI-AGENT SYSTEMS

Ximing Yang
School of Automation Engineering
University of Electronic Science and Technology of China
Chengdu 611731, China
yxm961115123@163.com

Tieshan Li
School of Automation Engineering
University of Electronic Science and Technology of China
Chengdu 611731, China
tieshanli@126.com

Yue Long
School of Automation Engineering
University of Electronic Science and Technology of China
Chengdu 611731, China
longyue@uestc.edu.cn

Hanqing Yang
School of Automation Engineering
University of Electronic Science and Technology of China
Chengdu 611731, China
hqyang5517@uestc.edu.cn

Abstract - This paper investigates the distributed unknown input observer-based global fault-tolerant average consensus control problem for multi-agent systems (MASs). First, a distributed unknown input observer based on relative estimation error is proposed, which effectively reduces the impact of external disturbances and achieves accurate estimation of each agent's state and the fault it suffers. Then, based on the obtained estimates and using the relative estimation error, a global fault-tolerant average consensus controller is proposed. The proposed controller compensates for the effects of faults and enables the MASs to achieve global average consensus. Finally, simulations are given to verify the effectiveness of the proposed scheme.

Index Terms - Multi-agent systems, fault-tolerant control, distributed unknown input observer, global average consensus.
§ I. INTRODUCTION

In the past decades, the study of multi-agent systems (MASs) has received considerable attention. Due to their extensive civilian and military applications, MASs are subject to stringent performance requirements, such as adaptability, flexibility, and robustness [1]. To meet these requirements, considerable attention has been given to coordination issues in MASs, such as consensus [2], containment control [3], and formation control [4]. These coordination mechanisms have been utilized in a wide range of applications, including intelligent transportation systems [5], drone formation [6], and smart grids [7]. However, the scalability and complexity of MASs render traditional centralized control schemes insufficient to meet these requirements. Therefore, the exploration of distributed control schemes for MASs is of significant importance.

Compared with centralized control schemes, distributed control schemes are more suitable for the coordinated control of autonomous agents in MASs [8]. The existing control schemes can be categorized into two types based on the structure of MASs: leaderless and leader-follower. The control goal in leaderless MASs is to reach consensus among the agents [9], whereas the control objective in leader-follower MASs is for the follower agents to track the state of the leader [10]. A formation control scheme based on dynamic output feedback was proposed for cases where velocity cannot be measured, ensuring that the agents converge to the desired formation pattern within a finite time [11]. In [12], a fully distributed neural-network-based adaptive control strategy was proposed to ensure that all followers track the leader's state and that the synchronization error remains within a specified range. A formation control method that constructs a direction alignment law and a formation control law from the displacements between agents was proposed to address the direction misalignment caused by local reference frames [13]. Overall, distributed control has emerged as a popular research direction, attracting considerable research effort and yielding abundant results. However, many of these results focus solely on controller design and consider relatively idealized cases, assuming precise knowledge of system states and the absence of system faults, which diminishes their engineering feasibility.

In practical applications, MASs consist of numerous agents distributed across a spatial area, with each agent facing distinct environmental challenges. Agents may encounter uncertainties, such as actuator faults, which can incapacitate the entire control system [14]. To enhance the reliability and safety of the system, it is necessary to implement measures to compensate for the adverse influence of faults. In this context, fault-tolerant consensus control has attracted widespread attention as an effective method to compensate for the impact of faults [15]. A virtual actuator framework-based adaptive fault-tolerant control method was proposed to achieve leader-follower consensus under time-varying actuator faults [16]. Based on an observer framework, a reliable consensus control design method under stochastic actuator failures was proposed to achieve multi-agent consensus [17]. A distributed fault-tolerant consensus protocol based on a distributed intermediate observer was proposed to achieve finite-time fault-tolerant consensus control with an enhanced dissipation rate [18]. However, although [18] addressed the consensus problem of MASs under faults, it did not consider the impact of external disturbances, which are present in practical environments, on estimation performance. Fortunately, the unknown input observer, as an effective method based on disturbance decoupling technology to handle external disturbances in estimation error systems, has been widely applied [19], [20]. Building on [19], a decentralized unknown input observer-based distributed secure control scheme was proposed to address the distributed secure control problem in MASs [21].

This work was supported in part by the National Natural Science Foundation of China under Grant 51939001, Grant 62273072, and Grant 62203088, and in part by the Natural Science Foundation of Sichuan Province under Grant 2022NSFSC0903. (Corresponding author: Tieshan Li.)

Based on these observations, a distributed unknown input observer and a fault-tolerant average consensus controller based on relative estimation error are proposed in this paper. The major contributions of this work are summarized below:

(1) Compared with reference [18], a control scheme utilizing disturbance decoupling technology to handle external disturbances is proposed. This scheme effectively reduces the adverse influence of disturbances on estimation performance and achieves global average consensus for MASs.

(2) Distinguished from [21], a novel distributed unknown input observer utilizing relative estimation error is proposed to obtain estimates of the state of each agent and the fault it experiences. Specifically, it uses the relative estimation error to drive the fault estimation, incorporating output estimates rather than just the outputs themselves into the distributed algorithm.

The remainder of the paper is organized as follows: Section II presents the problem formulation and gives some useful assumptions. In Section III, the main results, including the distributed unknown input observer-based global fault-tolerant average consensus control scheme and the stability analysis, are given. Simulations are given in Section IV. Finally, the conclusion of this work is presented in Section V.
§ II. PREPARATIONS

§ A. GRAPH THEORY

An undirected graph $\mathfrak{g}$ is defined as a pair $(v, \epsilon, \mathfrak{A})$, where $v = \{v_{1}, \ldots, v_{N}\}$ represents a nonempty finite set of nodes and $\epsilon \subseteq v \times v$ represents a set of edges. An edge $(v_{i}, v_{j})$ denotes a pair of nodes $v_{i}$ and $v_{j}$. The adjacency matrix, denoted as $\mathfrak{A} = [a_{ij}] \in \mathbb{R}^{N \times N}$, has elements $a_{ij}$ representing the weight of the edge $(v_{i}, v_{j})$, with $a_{ii} = 0$ and $a_{ij} = 1$ if $(v_{i}, v_{j}) \in \epsilon$. The Laplacian matrix is constructed as $\mathfrak{L} = \mathfrak{D} - \mathfrak{A}$, where $\mathfrak{D} = [d_{ii}]$ is a diagonal matrix with $d_{ii} = \sum_{j=1}^{N} a_{ij}$.
§ B. PROBLEM FORMULATION

Consider an MAS with $N$ agents $(i \in \{1, \ldots, N\})$, where the dynamics of the $i$th agent with actuator faults are given as follows:

$$
\dot{x}_{i}(t) = A x_{i}(t) + B\left(u_{i}(t) + f_{i}(t)\right) + D\omega_{i}(t)
$$

$$
y_{i}(t) = C x_{i}(t) \tag{1}
$$

where $x_{i}(t) \in \mathbf{R}^{n}$, $u_{i}(t) \in \mathbf{R}^{m}$, and $y_{i}(t) \in \mathbf{R}^{p}$ represent the agent's state, input, and output, respectively. The terms $f_{i}(t) \in \mathbf{R}^{q}$ and $\omega_{i}(t) \in \mathbf{R}^{s}$ denote the actuator fault and the external disturbance, respectively. The matrices $A$, $B$, $C$, and $D$ are constant with appropriate dimensions.

This paper aims to propose a global fault-tolerant average consensus controller such that the states of all agents achieve global average consensus, i.e., the global average consensus error $\widetilde{x}_{i}(t)$ satisfies:

$$
\widetilde{x}_{i}(t) = x_{i}(t) - \frac{1}{N}\sum_{j=1}^{N} x_{j}(t) \rightarrow 0. \tag{2}
$$

To facilitate the subsequent analysis, the following useful assumptions and lemma are given:
Assumption 1. The matrices $C$ and $D$ satisfy

$$
\operatorname{rank}\left\lbrack \begin{matrix} \mathbf{I} & D \\ C & \mathbf{0} \end{matrix}\right\rbrack = n + \operatorname{rank}(D). \tag{3}
$$
Assumption 2. [22] The actuator fault $f_{i}(t)$ is differentiable with respect to time, and its time derivative $\dot{f}_{i}(t)$ belongs to $L_{2}[0, \infty)$. Similarly, the external disturbance $\omega_{i}(t)$ is bounded and also belongs to $L_{2}[0, \infty)$.

Lemma 1. [21] For the undirected and connected graph $\mathfrak{g}$ with Laplacian matrix $\mathfrak{L}$, and with $\mathcal{M} = I_{N} - \frac{1_{N}1_{N}^{T}}{N}$, one has $\mathfrak{L}\mathcal{M} = \mathcal{M}\mathfrak{L} = \mathfrak{L}$.
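Lemma 1 can be checked numerically. The following is a minimal sketch, assuming an illustrative 5-node ring topology (not the paper's simulation graph):

```python
import numpy as np

N = 5
# Adjacency matrix of an undirected 5-node ring (illustrative topology)
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
D = np.diag(A.sum(axis=1))            # degree matrix
Lap = D - A                           # Laplacian: L = D - A
M = np.eye(N) - np.ones((N, N)) / N   # averaging projector of Lemma 1

# Lemma 1: for a connected undirected graph, L M = M L = L
print(np.allclose(Lap @ M, Lap), np.allclose(M @ Lap, Lap))
```

The identity holds because the Laplacian of an undirected graph satisfies $\mathfrak{L}1_{N} = 0$ and $1_{N}^{T}\mathfrak{L} = 0$, so subtracting the rank-one averaging term changes nothing.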
§ III. MAIN RESULTS

§ A. DISTRIBUTED UNKNOWN INPUT OBSERVER-BASED GLOBAL FAULT-TOLERANT AVERAGE CONSENSUS CONTROL SCHEME
To reconstruct the state and the actuator fault of each agent, the following relative estimation error-based distributed unknown input observer for agent $i$ is proposed:

$$
\dot{m}_{i}(t) = \Upsilon A \widehat{x}_{i}(t) + \Upsilon B\left(u_{i}(t) + \widehat{f}_{i}(t)\right) + L_{1}\left\{\sum_{j \in N_{i}} a_{ij}\left[\eta_{i} - \eta_{j}\right]\right\}
$$

$$
\widehat{x}_{i}(t) = m_{i}(t) + \Theta y_{i}(t)
$$

$$
\dot{\widehat{f}}_{i}(t) = L_{2}\left\{\sum_{j \in N_{i}} a_{ij}\left[\eta_{i} - \eta_{j}\right]\right\}
$$

$$
\widehat{y}_{i}(t) = C\widehat{x}_{i}(t) \tag{4}
$$

where $m_{i}(t)$, $\widehat{x}_{i}(t)$, $\widehat{f}_{i}(t)$, and $\widehat{y}_{i}(t)$ denote the state of the unknown input observer, the state estimation, the actuator fault estimation, and the output estimation of agent $i$, respectively, $\eta_{i} = y_{i}(t) - \widehat{y}_{i}(t)$ denotes the output estimation error, and $\eta_{i} - \eta_{j}$ denotes the relative estimation error. In addition, the global fault-tolerant average consensus controller for agent $i$ is proposed as:

$$
u_{i}(t) = E\widehat{x}_{i}(t) - \widehat{f}_{i}(t) + K\left\{\sum_{j \in N_{i}} a_{ij}\left[\eta_{i} - \eta_{j}\right]\right\}. \tag{5}
$$
Then, for agent $i$, the state estimation error system can be written as:

$$
\dot{e}_{xi}(t) = \dot{x}_{i}(t) - \dot{m}_{i}(t) - \Theta C \dot{x}_{i}(t). \tag{6}
$$

Based on Assumption 1, the following condition on the matrices $\Upsilon$ and $\Theta$ can be imposed:

$$
\left\lbrack \begin{array}{ll} \Upsilon & \Theta \end{array}\right\rbrack \left\lbrack \begin{matrix} \mathbf{I} & D \\ C & \mathbf{0} \end{matrix}\right\rbrack = \left\lbrack \begin{array}{ll} \mathbf{I} & \mathbf{0} \end{array}\right\rbrack
$$

which can be rewritten as

$$
\Upsilon D = \mathbf{0}, \quad \mathbf{I} - \Theta C = \Upsilon. \tag{7}
$$
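Condition (7) can be realized constructively: Assumption 1 guarantees $\operatorname{rank}(CD) = \operatorname{rank}(D)$, and when $CD$ has full column rank a standard admissible choice is $\Theta = D(CD)^{+}$, $\Upsilon = \mathbf{I} - \Theta C$. A minimal numeric sketch with assumed random matrices (not the paper's example system):

```python
import numpy as np

# Illustrative dimensions and matrices (assumed for this sketch)
rng = np.random.default_rng(0)
n, p, s = 4, 3, 1
C = rng.standard_normal((p, n))
Dm = rng.standard_normal((n, s))      # disturbance matrix D

# One admissible solution of (7): Theta = D * pinv(C*D), Upsilon = I - Theta*C
Theta = Dm @ np.linalg.pinv(C @ Dm)   # Theta*C*D = D when C*D has full column rank
Upsilon = np.eye(n) - Theta @ C

print(np.allclose(Upsilon @ Dm, 0))   # disturbance decoupling: Upsilon*D = 0
```

With this choice, $\Upsilon D = D - \Theta C D = 0$, which is exactly the decoupling used in the estimation error dynamics below.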
Then, based on the above conditions, one has

$$
\begin{aligned}
\dot{e}_{xi}(t) ={} & \Upsilon A x_{i}(t) + \Upsilon B\left(u_{i}(t) + f_{i}(t)\right) + \Upsilon D \omega_{i}(t) - \Upsilon A \widehat{x}_{i}(t) \\
& - \Upsilon B\left(u_{i}(t) + \widehat{f}_{i}(t)\right) - L_{1}\left\{\sum_{j \in N_{i}} a_{ij}\left[\eta_{i} - \eta_{j}\right]\right\} \\
={} & \Upsilon A e_{xi}(t) + \Upsilon B e_{fi}(t) - L_{1}C\left\{\sum_{j \in N_{i}} a_{ij}\left[e_{xi}(t) - e_{xj}(t)\right]\right\},
\end{aligned} \tag{8}
$$

and the fault estimation error system can be written as:

$$
\dot{e}_{fi}(t) = -L_{2}C\left\{\sum_{j \in N_{i}} a_{ij}\left[e_{xi}(t) - e_{xj}(t)\right]\right\} + \dot{f}_{i}(t). \tag{9}
$$
Denoting the vector $e_{i}(t) = \left[e_{xi}^{T}(t), e_{fi}^{T}(t)\right]^{T}$, the augmented estimation error system can be obtained:

$$
\dot{e}_{i}(t) = \widetilde{A} e_{i}(t) - L\bar{C}\left\{\sum_{j \in N_{i}} a_{ij}\left[e_{i}(t) - e_{j}(t)\right]\right\} + \widehat{I}\dot{f}_{i}(t) \tag{10}
$$

where

$$
\widetilde{A} = \left\lbrack \begin{matrix} \Upsilon A & \Upsilon B \\ \mathbf{0} & \mathbf{0} \end{matrix}\right\rbrack, \quad L = \left\lbrack \begin{array}{l} L_{1} \\ L_{2} \end{array}\right\rbrack, \quad \bar{C} = \left\lbrack \begin{array}{ll} C & \mathbf{0} \end{array}\right\rbrack, \quad \widehat{I} = \left\lbrack \begin{array}{l} \mathbf{0} \\ \mathbf{I} \end{array}\right\rbrack.
$$

Defining the vectors

$$
\dot{f}(t) = \left\lbrack \begin{array}{lll} \dot{f}_{1}^{T}(t) & \ldots & \dot{f}_{N}^{T}(t) \end{array}\right\rbrack^{T}, \quad e(t) = \left\lbrack \begin{array}{lll} e_{1}^{T}(t) & \ldots & e_{N}^{T}(t) \end{array}\right\rbrack^{T},
$$

the estimation error system can be rewritten as:

$$
\dot{e}(t) = \left(I_{N} \otimes \widetilde{A} - \mathfrak{L} \otimes L\bar{C}\right) e(t) + \left(I_{N} \otimes \widehat{I}\right)\dot{f}(t). \tag{11}
$$
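The global matrix in (11) is assembled with Kronecker products. A minimal numpy sketch, assuming small illustrative dimensions and random placeholders for $\widetilde{A}$, $L$, and $\bar{C}$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, q, p = 3, 2, 1, 1                    # agents, state, fault, output dims (assumed)
At = rng.standard_normal((n + q, n + q))   # stand-in for A-tilde
L_gain = rng.standard_normal((n + q, p))   # stacked observer gain L = [L1; L2]
Cbar = np.hstack([rng.standard_normal((p, n)), np.zeros((p, q))])  # [C  0]

# Laplacian of a 3-node path graph
Lap = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])

# System matrix of eq. (11): I_N (x) A-tilde  -  Lap (x) L*Cbar
M_err = np.kron(np.eye(N), At) - np.kron(Lap, L_gain @ Cbar)
print(M_err.shape)
```

Each $(n+q) \times (n+q)$ diagonal block equals $\widetilde{A} - d_{ii} L\bar{C}$ and each off-diagonal block equals $a_{ij} L\bar{C}$, which matches the agent-wise coupling in (10).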
In addition, for agent $i$, the closed-loop system can be written as:

$$
\begin{aligned}
\dot{x}_{i}(t) ={} & A x_{i}(t) + B\left(E\widehat{x}_{i}(t) - \widehat{f}_{i}(t) + K\left\{\sum_{j \in N_{i}} a_{ij}\left[\eta_{i} - \eta_{j}\right]\right\} + f_{i}(t)\right) + D\omega_{i}(t) \\
={} & (A + BE) x_{i}(t) - BE e_{xi}(t) + B e_{fi}(t) + BKC\left\{\sum_{j \in N_{i}} a_{ij}\left[e_{xi}(t) - e_{xj}(t)\right]\right\} + D\omega_{i}(t) \\
={} & (A + BE) x_{i}(t) + \widetilde{B} e_{i}(t) + BK\bar{C}\left\{\sum_{j \in N_{i}} a_{ij}\left[e_{i}(t) - e_{j}(t)\right]\right\} + D\omega_{i}(t)
\end{aligned} \tag{12}
$$

where $\widetilde{B} = \left\lbrack \begin{array}{ll} -BE & B \end{array}\right\rbrack$.

To achieve global average consensus, recall the global average consensus error (2) for agent $i$ and define the vectors

$$
\widetilde{x}(t) = \left\lbrack \begin{array}{lll} \widetilde{x}_{1}^{T}(t) & \ldots & \widetilde{x}_{N}^{T}(t) \end{array}\right\rbrack^{T}, \quad x(t) = \left\lbrack \begin{array}{lll} x_{1}^{T}(t) & \ldots & x_{N}^{T}(t) \end{array}\right\rbrack^{T}, \quad \omega(t) = \left\lbrack \begin{array}{lll} \omega_{1}^{T}(t) & \ldots & \omega_{N}^{T}(t) \end{array}\right\rbrack^{T}.
$$

Then, the closed-loop system can be rewritten as:

$$
\dot{x}(t) = \left(I_{N} \otimes (A + BE)\right) x(t) + \left(I_{N} \otimes \widetilde{B} + \mathfrak{L} \otimes BK\bar{C}\right) e(t) + \left(I_{N} \otimes D\right) \omega(t). \tag{13}
$$
Thus, the global average consensus error can be stacked as

$$
\widetilde{x}(t) = \left(\mathcal{M} \otimes I_{n}\right) x(t) \tag{14}
$$

where $\mathcal{M} = I_{N} - \frac{1_{N}1_{N}^{T}}{N}$. Since $\mathcal{M} \otimes I_{n}$ commutes with $I_{N} \otimes (A + BE)$ and, by Lemma 1, $\mathcal{M}\mathfrak{L} = \mathfrak{L}$, the error dynamics can be written as

$$
\begin{aligned}
\dot{\widetilde{x}}(t) ={} & \left(\mathcal{M} \otimes I_{n}\right)\left(I_{N} \otimes (A + BE)\right) x(t) + \left(\mathcal{M} \otimes I_{n}\right)\left(I_{N} \otimes \widetilde{B} + \mathfrak{L} \otimes BK\bar{C}\right) e(t) \\
& + \left(\mathcal{M} \otimes I_{n}\right)\left(I_{N} \otimes D\right) \omega(t) \\
={} & \left(I_{N} \otimes (A + BE)\right) \widetilde{x}(t) + \left(\mathcal{M} \otimes \widetilde{B} + \mathfrak{L} \otimes BK\bar{C}\right) e(t) + \left(\mathcal{M} \otimes D\right) \omega(t). 
\end{aligned} \tag{15}
$$
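The stacked form (14) can be sanity-checked numerically: applying $\mathcal{M} \otimes I_{n}$ to the stacked state reproduces the per-agent definition (2). A minimal sketch with assumed sizes and random states:

```python
import numpy as np

N, n = 4, 2                          # number of agents and state dimension (assumed)
rng = np.random.default_rng(2)
x = rng.standard_normal((N, n))      # agent states, one row per agent

M = np.eye(N) - np.ones((N, N)) / N  # averaging projector
x_tilde = (np.kron(M, np.eye(n)) @ x.reshape(-1)).reshape(N, n)

# Matches the per-agent definition (2): x_i minus the average state
print(np.allclose(x_tilde, x - x.mean(axis=0)))
```

A useful byproduct: the consensus errors always sum to zero across agents, since $1_{N}^{T}\mathcal{M} = 0$.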
§ B. STABILITY ANALYSIS

Theorem 1. For a given scalar $\alpha > 0$, matrices $\Upsilon, \Theta, L, K$, controller feedback gain matrix $E$, Laplacian matrix $\mathfrak{L}$, and matrix $\mathcal{M}$, if there exist matrices $Q = Q^{T} > 0$ and $P = P^{T} > 0$ with appropriate dimensions such that the following condition holds

$$
\Phi = \left\lbrack \begin{matrix} \Phi_{1} & \Phi_{2} & \Phi_{3} & \mathbf{0} \\ * & \Phi_{4} & \mathbf{0} & \Phi_{5} \\ * & * & \Phi_{6} & \mathbf{0} \\ * & * & * & \Phi_{7} \end{matrix}\right\rbrack < 0 \tag{16}
$$

where $\Phi_{1} = \operatorname{He}\{I_{N} \otimes (QA + QBE)\} + \alpha I_{N} \otimes Q$, $\Phi_{2} = \mathcal{M} \otimes Q\widetilde{B} + \mathfrak{L} \otimes QBK\bar{C}$, $\Phi_{3} = \mathcal{M} \otimes QD$, $\Phi_{4} = \operatorname{He}\{I_{N} \otimes P\widetilde{A} - \mathfrak{L} \otimes PL\bar{C}\} + \alpha I_{N} \otimes P$, $\Phi_{5} = I_{N} \otimes P\widehat{I}$, $\Phi_{6} = -I_{N} \otimes I_{n_{\omega}}$, and $\Phi_{7} = -I_{N} \otimes I_{n_{f}}$, then all the signals of the estimation error system (11) and the global average consensus error system (15) are bounded.
Proof. The Lyapunov function is chosen as:

$$
V(t) = V_{1}(t) + V_{2}(t) \tag{17}
$$

where $V_{1}(t) = \widetilde{x}^{T}(t)\widetilde{Q}\widetilde{x}(t)$, $V_{2}(t) = e^{T}(t)\widetilde{P}e(t)$, $\widetilde{P} = I_{N} \otimes P$, and $\widetilde{Q} = I_{N} \otimes Q$. Taking the derivative of the above function along (11) and (15) yields:

$$
\begin{aligned}
\dot{V}(t) \leq{} & 2 e^{T}(t)\widetilde{P}\dot{e}(t) + 2\widetilde{x}^{T}(t)\widetilde{Q}\dot{\widetilde{x}}(t) \\
\leq{} & 2 e^{T}(t)\widetilde{P}\left(\left(I_{N} \otimes \widetilde{A} - \mathfrak{L} \otimes L\bar{C}\right)e(t) + \left(I_{N} \otimes \widehat{I}\right)\dot{f}(t)\right) \\
& + 2\widetilde{x}^{T}(t)\widetilde{Q}\left(\left(I_{N} \otimes (A + BE)\right)\widetilde{x}(t) + \left(\mathcal{M} \otimes \widetilde{B} + \mathfrak{L} \otimes BK\bar{C}\right)e(t) + \left(\mathcal{M} \otimes D\right)\omega(t)\right) \\
\leq{} & e^{T}(t)\operatorname{He}\left\{\left(I_{N} \otimes P\right)\left(I_{N} \otimes \widetilde{A} - \mathfrak{L} \otimes L\bar{C}\right)\right\}e(t) + 2 e^{T}(t)\left(I_{N} \otimes P\right)\left(I_{N} \otimes \widehat{I}\right)\dot{f}(t) \\
& + \widetilde{x}^{T}(t)\operatorname{He}\left\{\left(I_{N} \otimes Q\right)\left(I_{N} \otimes (A + BE)\right)\right\}\widetilde{x}(t) \\
& + 2\widetilde{x}^{T}(t)\left(I_{N} \otimes Q\right)\left(\mathcal{M} \otimes \widetilde{B} + \mathfrak{L} \otimes BK\bar{C}\right)e(t) + 2\widetilde{x}^{T}(t)\left(I_{N} \otimes Q\right)\left(\mathcal{M} \otimes D\right)\omega(t). 
\end{aligned} \tag{18}
$$

According to the properties of the Kronecker product, we can get:

$$
\begin{aligned}
\dot{V}(t) \leq{} & e^{T}(t)\operatorname{He}\left\{I_{N} \otimes P\widetilde{A} - \mathfrak{L} \otimes PL\bar{C}\right\}e(t) + \widetilde{x}^{T}(t)\operatorname{He}\left\{I_{N} \otimes (QA + QBE)\right\}\widetilde{x}(t) \\
& + 2\widetilde{x}^{T}(t)\left(\mathcal{M} \otimes Q\widetilde{B} + \mathfrak{L} \otimes QBK\bar{C}\right)e(t) + 2\widetilde{x}^{T}(t)\left(\mathcal{M} \otimes QD\right)\omega(t) + 2 e^{T}(t)\left(I_{N} \otimes P\widehat{I}\right)\dot{f}(t).
\end{aligned}
$$

Define $\xi(t) = \left[\widetilde{x}^{T}(t), e^{T}(t), \omega^{T}(t), \dot{f}^{T}(t)\right]^{T}$. If the following linear matrix inequality holds

$$
\Phi = \left\lbrack \begin{matrix} \Phi_{1} & \Phi_{2} & \Phi_{3} & \mathbf{0} \\ * & \Phi_{4} & \mathbf{0} & \Phi_{5} \\ * & * & \Phi_{6} & \mathbf{0} \\ * & * & * & \Phi_{7} \end{matrix}\right\rbrack < 0 \tag{19}
$$

where

$$
\begin{aligned}
\Phi_{1} ={} & \operatorname{He}\left\{I_{N} \otimes (QA + QBE)\right\} + \alpha I_{N} \otimes Q, \\
\Phi_{2} ={} & \mathcal{M} \otimes Q\widetilde{B} + \mathfrak{L} \otimes QBK\bar{C}, \\
\Phi_{3} ={} & \mathcal{M} \otimes QD, \\
\Phi_{4} ={} & \operatorname{He}\left\{I_{N} \otimes P\widetilde{A} - \mathfrak{L} \otimes PL\bar{C}\right\} + \alpha I_{N} \otimes P, \\
\Phi_{5} ={} & I_{N} \otimes P\widehat{I}, \\
\Phi_{6} ={} & -I_{N} \otimes I_{n_{\omega}}, \\
\Phi_{7} ={} & -I_{N} \otimes I_{n_{f}},
\end{aligned}
$$

we have

$$
\dot{V}(t) \leq -\alpha e^{T}(t)\widetilde{P}e(t) - \alpha\widetilde{x}^{T}(t)\widetilde{Q}\widetilde{x}(t) + \|\omega(t)\|^{2} + \|\dot{f}(t)\|^{2} \leq -\alpha V(t) + \Delta(t) \tag{20}
$$

where $\Delta(t) = \|\omega(t)\|^{2} + \|\dot{f}(t)\|^{2}$.

Since $\omega(t)$ and $\dot{f}(t)$ are bounded by Assumption 2, $\Delta(t)$ is bounded, and (20) therefore guarantees the boundedness of the estimation error system (11) and of the global average consensus error system (15), i.e., the global average consensus of the MASs (1) is achieved in the bounded sense. The proof is completed.
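The boundedness argument from (20) follows the comparison lemma: $\dot{V} \leq -\alpha V + \Delta$ with bounded $\Delta$ keeps $V$ below $\max\{V(0), \sup_t \Delta(t)/\alpha\}$ and drives it into the set $\{V \leq \sup_t \Delta(t)/\alpha\}$. A scalar forward-Euler sketch with assumed values of $\alpha$ and $\Delta$ (not the paper's simulation data):

```python
import numpy as np

alpha, dt, T = 2.0, 1e-3, 10.0             # assumed decay rate, step, horizon
t = np.arange(0.0, T, dt)
Delta = 0.5 + 0.5 * np.sin(3.0 * t) ** 2   # bounded driving term, sup <= 1.0

V = np.empty_like(t)
V[0] = 5.0
for k in range(len(t) - 1):
    # worst case allowed by (20): V_dot = -alpha*V + Delta
    V[k + 1] = V[k] + dt * (-alpha * V[k] + Delta[k])

# V never exceeds its initial value and settles near sup(Delta)/alpha
print(V.max() <= V[0] + 1e-9, V[-1] <= Delta.max() / alpha + 0.1)
```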
Without loss of generality, the gain matrices $L$ and $K$ can be solved through some algebraic operations, and the corresponding theorem is given as follows.
Theorem 2. For given scalar $\alpha > 0$ , matrices $\Upsilon ,\Theta$ , controller feedback gain matrix $E$ , Laplacian matrix $\mathfrak{L}$ , matrix $\mathcal{M}$ , if there exist symmetric positive definite matrices $S,P$ , matrices $K,{P}_{L}$ with appropriate dimensions, such that the following condition holds
|
| 402 |
+
|
| 403 |
+
$$
|
| 404 |
+
\Psi = \left\lbrack \begin{matrix} {\Psi }_{1} & {\Psi }_{2} & {\Psi }_{3} & \mathbf{0} \\ * & {\Psi }_{4} & \mathbf{0} & {\Psi }_{5} \\ * & * & {\Psi }_{6} & \mathbf{0} \\ * & * & * & {\Psi }_{7} \end{matrix}\right\rbrack < 0 \tag{21}
|
| 405 |
+
$$
|
| 406 |
+
|
| 407 |
+
where ${\Psi }_{1} = \operatorname{He}\left\{ {{I}_{N} \otimes \left( {{AS} + {BES}}\right) }\right\} + \alpha {I}_{N} \otimes S,{\Psi }_{2} =$ $\mathcal{M} \otimes \widetilde{B} + \mathfrak{L} \otimes {BK}\bar{C},{\Psi }_{3} = \mathcal{M} \otimes D,{\Psi }_{4} = {He}\left\{ {{I}_{N} \otimes P\widetilde{A} - }\right.$ $\left. {\mathfrak{L} \otimes {P}_{L}\bar{C}}\right\} + \alpha {I}_{N} \otimes P,{\Psi }_{5} = {I}_{N} \otimes P\widehat{I},{\Psi }_{6} = - {I}_{N} \otimes {I}_{{n}_{\omega }},$ ${\Psi }_{7} = - {I}_{N} \otimes {I}_{{n}_{f}},S = {Q}^{-1}$ , then the all the signals of the estimation error system (11) and the global average consensus error system (15) are bounded, and gain matrix $L = {P}^{-1}{P}_{L}$ .
|
| 408 |
+
|
| 409 |
+
Proof. Post- and pre-multiplying (19) by $\operatorname{diag}\left\{ {{I}_{N} \otimes }\right.$ $\left. {{Q}^{-1},{I}_{N} \otimes {I}_{{n}_{x} + {n}_{f}},{I}_{N} \otimes {I}_{{n}_{\omega }},{I}_{N} \otimes {I}_{{n}_{f}}}\right\}$ , the linear matrix inequality (21) can be deduced. This proof is completed.
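The proof step is a congruence transformation, which preserves negative definiteness. A quick numerical illustration with random stand-in matrices, not the paper's $\Phi$/$\Psi$ blocks:

```python
import numpy as np

# Congruence check: for symmetric X < 0 and nonsingular T, T^T X T < 0 as well.
# X and T below are random stand-ins for the LMI blocks in (19) and (21).
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
X = -(M @ M.T) - 1e-3 * np.eye(6)                  # symmetric negative definite
T = rng.standard_normal((6, 6)) + 6.0 * np.eye(6)  # well-conditioned, nonsingular
Y = T.T @ X @ T
print(np.max(np.linalg.eigvalsh(Y)) < 0)   # True
```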

§ IV. EXAMPLE

In this example, a group of five agents is considered. The dynamics of the agents take the form

$$
{\dot{x}}_{i}\left( t\right) = A{x}_{i}\left( t\right) + B\left( {{u}_{i}\left( t\right) + {f}_{i}\left( t\right) }\right) + D{\omega }_{i}\left( t\right) ,
$$

$$
{y}_{i}\left( t\right) = C{x}_{i}\left( t\right) , \tag{22}
$$

which is borrowed from [23], with the parameter matrices given below:

$$
A = \left\lbrack \begin{matrix} 0 & 1 \\ {0.2} & - 2 \end{matrix}\right\rbrack ,B = \left\lbrack \begin{array}{l} 0 \\ 1 \end{array}\right\rbrack ,C = \left\lbrack \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right\rbrack ,D = \left\lbrack \begin{array}{l} {0.1} \\ {0.1} \end{array}\right\rbrack .
$$
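For readers who want to reproduce the example, the dynamics (22) can be stepped forward with a simple Euler scheme; the input, fault, and disturbance below are placeholders chosen only to exercise the model, not the proposed controller:

```python
import numpy as np

# Forward-Euler simulation of the agent dynamics (22) with the matrices above.
A = np.array([[0.0, 1.0], [0.2, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
D = np.array([[0.1], [0.1]])

dt, T = 1e-3, 5.0
x = np.array([[8.0], [8.0]])            # initial state of agent 1 from the example
for k in range(int(T / dt)):
    t = k * dt
    u = np.array([[-np.sin(t)]])        # placeholder input, not the paper's controller
    f = np.array([[0.0]])               # no fault in this time window
    w = np.array([[30.0 * np.sin(2.0 * t)]])
    x = x + dt * (A @ x + B @ (u + f) + D @ w)
y = C @ x
print(x.shape, y.shape)                 # (2, 1) (2, 1)
```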

The communication graph considered in this paper is shown in Fig. 1.

Fig. 1: Communication graph.

From Fig. 1, one has

$$
\mathfrak{L} = \left\lbrack \begin{matrix} 2 & 0 & - 1 & - 1 & 0 \\ 0 & 2 & 0 & - 1 & - 1 \\ - 1 & 0 & 2 & - 1 & 0 \\ - 1 & - 1 & - 1 & 3 & 0 \\ 0 & - 1 & 0 & 0 & 1 \end{matrix}\right\rbrack .
$$
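The Laplacian above can be cross-checked against the adjacency relations it encodes, using $\mathfrak{L} = \mathcal{D} - \mathcal{A}$ with $\mathcal{D}$ the diagonal matrix of row sums of $\mathcal{A}$:

```python
import numpy as np

# Rebuild the Laplacian from its adjacency relations and check it matches.
Adj = np.zeros((5, 5))
edges = [(0, 2), (0, 3), (1, 3), (1, 4), (2, 0), (2, 3),
         (3, 0), (3, 1), (3, 2), (4, 1)]          # (i, j): agent i receives from j
for i, j in edges:
    Adj[i, j] = 1.0
Lap = np.diag(Adj.sum(axis=1)) - Adj
L_paper = np.array([[ 2,  0, -1, -1,  0],
                    [ 0,  2,  0, -1, -1],
                    [-1,  0,  2, -1,  0],
                    [-1, -1, -1,  3,  0],
                    [ 0, -1,  0,  0,  1]], dtype=float)
print(np.array_equal(Lap, L_paper))   # True
```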

To obtain the pre-designed unknown input observer gain matrices, the matrix ${M}_{\varkappa }$ can be selected as

$$
{M}_{\varkappa } = \left\lbrack \begin{array}{llll} - {6.7245} & - {9.1869} & - {9.4050} & - {7.5082} \\ - {5.2013} & - {8.2981} & - {7.0737} & - {8.8809} \end{array}\right\rbrack ,
$$

according to the following condition

$$
\left\lbrack \begin{array}{ll} \mathbf{\Upsilon } & \Theta \end{array}\right\rbrack = \left\lbrack \begin{array}{ll} \mathbf{I} & \mathbf{0} \end{array}\right\rbrack \times {\left\lbrack \begin{matrix} \mathbf{I} & D \\ C & \mathbf{0} \end{matrix}\right\rbrack }^{ \dagger } - {M}_{\varkappa }\left( {\mathbf{I} - \left\lbrack \begin{matrix} \mathbf{I} & D \\ C & \mathbf{0} \end{matrix}\right\rbrack \times {\left\lbrack \begin{matrix} \mathbf{I} & D \\ C & \mathbf{0} \end{matrix}\right\rbrack }^{ \dagger }}\right) ,
$$

and the pre-designed unknown input observer gain matrices are obtained as

$$
\Upsilon = \left\lbrack \begin{matrix} {0.1086} & - {0.1086} \\ - {1.4760} & {1.4760} \end{matrix}\right\rbrack ,\Theta = \left\lbrack \begin{matrix} {0.1086} & {0.8914} \\ - {0.4760} & {1.4760} \end{matrix}\right\rbrack .
$$
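Whenever the stacked matrix $\left[ \begin{smallmatrix} \mathbf{I} & D \\ C & \mathbf{0} \end{smallmatrix} \right]$ has full column rank, any solution of the condition above satisfies $\left\lbrack \Upsilon \; \Theta \right\rbrack \left[ \begin{smallmatrix} \mathbf{I} & D \\ C & \mathbf{0} \end{smallmatrix} \right] = \left\lbrack \mathbf{I} \; \mathbf{0} \right\rbrack$, i.e., $\Upsilon + \Theta C = \mathbf{I}$ and $\Upsilon D = \mathbf{0}$. A quick check with the printed values:

```python
import numpy as np

# Unknown-input-observer conditions implied by the construction above:
# Upsilon + Theta*C = I and Upsilon*D = 0 (full-column-rank case).
C = np.array([[0.0, 1.0], [1.0, 0.0]])
D = np.array([[0.1], [0.1]])
Ups = np.array([[0.1086, -0.1086], [-1.4760, 1.4760]])
The = np.array([[0.1086, 0.8914], [-0.4760, 1.4760]])
print(np.allclose(Ups + The @ C, np.eye(2), atol=1e-3))   # True
print(np.allclose(Ups @ D, np.zeros((2, 1)), atol=1e-3))  # True
```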

Then, the parameters required to solve Theorem 2 are selected as $E = \left\lbrack {-{18.7279} - {7.9363}}\right\rbrack$ and $\alpha = {0.4}$ . The following matrices make inequality (21) negative definite:

$$
P = \left\lbrack \begin{matrix} {22.2529} & {0.8245} & {0.2564} \\ {0.8245} & {7.9547} & - {2.6069} \\ {0.2564} & - {2.6069} & {1.0677} \end{matrix}\right\rbrack ,
$$

$$
S = \left\lbrack \begin{matrix} {14.8878} & - {23.6762} \\ - {23.6762} & {47.0985} \end{matrix}\right\rbrack ,
$$

$$
K = \left\lbrack \begin{array}{ll} - {2.7207} & - {6.7659} \end{array}\right\rbrack ,
$$

$$
{P}_{L} = \left\lbrack \begin{matrix} {0.3396} & {12.6409} \\ - {0.9358} & {1.4581} \\ {6.4466} & - {0.6400} \end{matrix}\right\rbrack ,
$$

where the gain matrix is

$$
L = {P}^{-1}{P}_{L} = \left\lbrack \begin{matrix} - {0.7049} & {0.6177} \\ {9.9518} & - {0.6290} \\ {30.5037} & - {2.2834} \end{matrix}\right\rbrack .
$$
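The printed $L$ can be checked for consistency with $P$ and ${P}_{L}$, up to the rounding of the displayed digits:

```python
import numpy as np

# Consistency check of L = P^{-1} P_L: P @ L should reproduce P_L up to rounding.
P = np.array([[22.2529, 0.8245, 0.2564],
              [0.8245, 7.9547, -2.6069],
              [0.2564, -2.6069, 1.0677]])
PL = np.array([[0.3396, 12.6409],
               [-0.9358, 1.4581],
               [6.4466, -0.6400]])
L = np.array([[-0.7049, 0.6177],
              [9.9518, -0.6290],
              [30.5037, -2.2834]])
print(np.allclose(P @ L, PL, atol=1e-2))   # True
```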

Next, experimental results are presented to verify the effectiveness of the proposed scheme. The initial states of the agents are selected as ${x}_{1}\left( 0\right) = \left\lbrack {8;8}\right\rbrack ,{x}_{2}\left( 0\right) = \left\lbrack {8; - 8}\right\rbrack$ , ${x}_{3}\left( 0\right) = \left\lbrack {-8;8}\right\rbrack ,{x}_{4}\left( 0\right) = \left\lbrack {-8; - 8}\right\rbrack ,{x}_{5}\left( 0\right) = \left\lbrack {7;{12}}\right\rbrack$ . The external disturbance is ${\omega }_{i}\left( t\right) = {30}\sin \left( {2t}\right)$ . Agents 1 and 2 are considered to be faulty, and the faults they encounter are given by

$$
{f}_{1}\left( t\right) = \left\{ {\begin{array}{ll} 2{e}^{-{0.1}\left( {t - 5}\right) }\sin \left( {{1.2}\left( {t - 5}\right) }\right) , & t \in \left\lbrack {5,{10}}\right\rbrack \\ 0, & \text{ otherwise } \end{array},}\right.
$$

$$
{f}_{2}\left( t\right) = \left\{ {\begin{array}{ll} 2\sin \left( {{1.2}\left( {t - {15}}\right) }\right) , & t \in \left\lbrack {{15},{20}}\right\rbrack \\ 0, & \text{ otherwise } \end{array}.}\right.
$$
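The fault signals translate directly into code; a minimal implementation of $f_1$ and $f_2$ as defined above:

```python
import numpy as np

# Direct implementation of the fault signals f1(t), f2(t) from the example.
def f1(t):
    return 2 * np.exp(-0.1 * (t - 5)) * np.sin(1.2 * (t - 5)) if 5 <= t <= 10 else 0.0

def f2(t):
    return 2 * np.sin(1.2 * (t - 15)) if 15 <= t <= 20 else 0.0

print(f1(0.0), f2(0.0))                              # 0.0 0.0 (inactive at t = 0)
print(abs(f1(5.0)) < 1e-12, abs(f2(15.0)) < 1e-12)   # True True (faults start from zero)
```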

Fig. 2: Curves of state/fault and their estimations (agent 1).

Fig. 3: Curves of state/fault and their estimations (agent 2).

Fig. 4: Curves of state/fault and their estimations (agent 3).

Fig. 5: Curves of state/fault and their estimations (agent 4).

Fig. 6: Curves of state/fault and their estimations (agent 5).

Fig. 7: Curves of global average consensus error ${\widetilde{x}}_{i}\left( t\right)$ .
As can be seen from Figs. 2-6, the proposed scheme (4) can effectively reduce the influence of external disturbance ${\omega }_{i}\left( t\right)$ on the estimation performance and realize accurate estimations of the agent state and fault. Based on the accurate estimations obtained by scheme (4) and the relative estimation error ${\eta }_{i} - {\eta }_{j}$ , the proposed global fault-tolerant average consensus controller (5) can make the global average consensus errors ${\widetilde{x}}_{i}\left( t\right)$ approach zero, as shown in Fig. 7.

§ V. CONCLUSION
In this paper, the distributed unknown input observer-based global fault-tolerant average consensus control problem for linear MASs has been investigated. First, a distributed unknown input observer based on relative estimation error has been proposed, which can mitigate the impact of external disturbances on estimation performance, thereby achieving accurate estimations of state and fault. Then, based on the obtained estimations and the relative estimation error, a global fault-tolerant average consensus controller has been developed. The proposed scheme can compensate for fault impacts while ensuring global average consensus of the MASs. Finally, simulation experiments have been given to validate the effectiveness of the proposed control scheme.

papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/3dNL0Q0j8f/Initial_manuscript_md/Initial_manuscript.md
# Privacy-Preserving Event-Triggered Predefined Time Containment Control for Networked Agent Systems

Weihao Li$^{1,2,3,\dagger}$ , Jiangfeng Yue$^{1,2,3,\dagger}$ , Mengji Shi$^{1,2,3,*}$ , Boxian Lin$^{1,2,3}$ , Kaiyu Qin$^{1,2,3}$

$^{1}$ School of Aeronautics and Astronautics, University of Electronic Science and Technology of China, Chengdu, China.

$^{2}$ Aircraft Swarm Intelligent Sensing and Cooperative Control Key Laboratory of Sichuan Province, Chengdu, China.

$^{3}$ National Laboratory on Adaptive Optics, Chengdu 610209, China.

Email: maangat@126.com

Abstract-This paper addresses the privacy-preserving event-triggered predefined time containment control problem for networked agent systems. A novel containment control scheme is developed that integrates privacy protection with event-triggered mechanisms, optimizing network efficiency by minimizing unnecessary data transmission while ensuring robust containment within a specified time frame. The proposed control scheme ensures the confidentiality of agents' information through output masking, thereby maintaining both privacy and control accuracy. Furthermore, it provides a distinct advantage over traditional finite-time and fixed-time control methods by guaranteeing convergence to the desired state within a predefined time, regardless of initial conditions. Finally, some simulation results are given to verify the effectiveness of the proposed containment control scheme.

Index Terms-Containment Control; Privacy-preserving; Predefined Time; Event-triggered Control; Networked Agent Systems.

## I. INTRODUCTION
Networked agent systems have garnered significant attention across various fields due to their broad range of applications, including robotics [1], autonomous vehicles [2], and distributed sensor networks [3]. The cooperative control of networked agent systems involves designing strategies that enable agents to work together effectively to achieve shared objectives. A prominent approach within cooperative control is containment control [4], [5], which aims to ensure that a group of agents (followers) remains within a specified region or adheres to a particular trajectory, while another group of agents (leaders) directs their behavior. Containment control is particularly crucial in scenarios requiring strict spatial or operational constraint adherence. For instance, in a formation flying scenario, containment control can ensure that a group of drones maintains a specific formation while another set of drones guides their collective movement [6].
Convergence speed is a critical performance metric in the containment control of networked agent systems. Current research explores several approaches to achieving convergence, including asymptotic convergence [7], finite-time convergence [8], and fixed-time convergence [9]. Asymptotic convergence guarantees that the system will eventually converge to the desired state over time, although the convergence rate may not be specified. Finite-time convergence ensures that the system reaches the desired state within a finite period, though the exact time depends on system parameters and states. Fixed-time convergence provides a guarantee of convergence within a predetermined time, irrespective of initial conditions, thereby offering more predictability in performance. However, the convergence time in both finite-time and fixed-time approaches is influenced by system parameters and states. To address this, researchers have developed predefined time control schemes that enable the specification of a desired convergence time [10], [11]. The primary advantages of predefined-time control include the ability to guarantee convergence within a specified time frame, thereby providing more predictable and controllable system behavior, and enhancing system performance by setting precise deadlines for achieving the desired state.
The existing literature [10]-[13] on predefined-time convergence in networked agent systems generally overlooks the issue of information privacy during transmission. However, privacy protection is of paramount importance in containment control, where safeguarding the confidentiality of agents' information is critical. Several methods for privacy protection have been proposed, including state decomposition [14], differential privacy [15], additive noise [16], and output masking [17]. Among these, output masking has received considerable attention due to its simplicity and ease of implementation. This method involves obscuring the output of agents to protect sensitive information while still allowing effective control. However, output masking relies on continuous information exchange, which can impose constraints on communication bandwidth. To address this limitation, it is necessary to develop privacy protection schemes under event-triggered mechanisms [18], [19], which can alleviate communication bandwidth constraints. In [19], the authors integrated both privacy preservation and event-triggered mechanisms into consensus and containment control but overlooked predefined performance. Zhang et al. [20] incorporated prescribed-time theory and privacy preservation into consensus control but neglected bandwidth constraints. In conclusion, to the best of the authors' knowledge, no existing solution simultaneously addresses the challenges of communication bandwidth, convergence time, and privacy protection in containment control, making this an area of significant research opportunity.
---

$\dagger$: These authors contributed equally to this paper.
This work was supported by the Natural Science Foundation of Sichuan Province (2022NSFSC0037), the Sichuan Science and Technology Programs (2022JDR0107, 2021YFG0130, MZGC20230069, MZGC20240139), the Fundamental Research Funds for the Central Universities (ZYGX2020J020), the Wuhu Science and Technology Plan Project (2022yf23). (Corresponding author: Mengji Shi.)

---
According to the above discussion, this paper focuses on the privacy-preserving event-triggered predefined time containment control problem of networked agent systems. The main contributions of this paper are summarized as follows:
(1) A novel event-triggered predefined-time containment control scheme is developed to optimize network efficiency while ensuring robust containment performance within a specified time frame. By employing event-triggered control, the scheme significantly reduces unnecessary data transmission, ensuring that agents communicate only when necessary. This approach effectively balances communication efficiency and system performance.
(2) The proposed control scheme guarantees convergence within a predefined time, offering a distinct advantage over finite-time and fixed-time methods. Unlike these traditional methods, where convergence time is often influenced by initial conditions and system parameters, the predefined time control ensures that the desired state is consistently reached within the predetermined time frame, thereby enhancing the predictability and reliability of the system.
(3) Furthermore, a privacy-preserving containment control scheme is designed to safeguard the confidentiality of agents' information by masking their outputs while maintaining accurate control. Compared to alternative privacy protection methods such as differential privacy or state decomposition, this scheme provides a simpler and more efficient solution. It ensures both privacy and communication efficiency without compromising the overall system performance, making it particularly suitable for applications with stringent privacy and bandwidth requirements.

The remainder of the paper is organized as follows. Section II presents the preliminaries and the problem formulation, and Section III develops the main results, including the privacy-preserving event-triggered predefined time containment controller. Numerical simulation examples and the conclusion are presented in the remaining sections.
## II. PRELIMINARY AND PROBLEM FORMULATION

## A. Preliminaries
The communication structure among agents in this study is represented by a graph topology denoted as $\mathcal{G} = \langle \mathcal{V},\mathcal{E},\mathcal{A}\rangle$ , where $\mathcal{V},\mathcal{E}$ , and $\mathcal{A}$ correspond to the set of nodes, the set of edges, and the adjacency matrix, respectively. The network consists of a total of $N = m + n$ agents, with $n$ being the number of follower agents and $m$ being the number of leader agents. The leader and follower agents are categorized into sets ${\mathcal{V}}_{L} = \{ 1,2,\ldots , m\}$ and ${\mathcal{V}}_{F} = \{ m + 1, m + 2,\ldots , m + n\}$ , respectively. Consequently, the overall set of nodes is formed by the union of these two sets, $\mathcal{V} = {\mathcal{V}}_{F} \cup {\mathcal{V}}_{L}$ . Following the definitions of the node sets, the adjacency matrix is represented as $\mathcal{A} = \left\lbrack {a}_{ij}\right\rbrack \in {\mathcal{R}}^{\left( {n + m}\right) \times \left( {n + m}\right) }$ , where the element ${a}_{ij}$ is positive if there exists an edge from node $j$ to $i$ within the set $\mathcal{E}$ , and zero otherwise. Assuming leaders do not have adjacent nodes, implying that leaders solely disseminate information to followers, the Laplacian matrix $\mathcal{L}$ for the network of agents is derived as $\mathcal{L} = \mathcal{D} - \mathcal{A}$ . The degree matrix, denoted by $\mathcal{D}$ , is a diagonal matrix with elements ${d}_{i}$ on the diagonal, where ${d}_{i}$ is the sum of the adjacency matrix elements in the $i$ -th row, calculated as ${d}_{i} = \mathop{\sum }\limits_{{k = 1}}^{{n + m}}{a}_{ik}$ .

Based on the aforementioned definitions, the Laplacian matrix is constructed as follows:

$$
\mathcal{L} = \left\lbrack \begin{matrix} {\mathbf{0}}_{m \times n} & {\mathbf{0}}_{m \times m} \\ {\mathcal{L}}_{F} & {\mathcal{L}}_{L} \end{matrix}\right\rbrack , \tag{1}
$$
where the sub-Laplacian matrix specific to the follower agents is denoted as ${\mathcal{L}}_{F} \in {\mathcal{R}}^{n \times n}$ , and the sub-Laplacian matrix that captures the interactions between leader and follower agents is represented by ${\mathcal{L}}_{L} \in {\mathcal{R}}^{n \times m}$ . The elements of ${\mathcal{L}}_{F}$ , denoted as $\left\lbrack {l}_{ij}\right\rbrack$ , are defined such that when indices match, ${l}_{ij}$ equals the sum of the adjacency matrix entries ${a}_{ip}$ for all $p$ in the set of nodes $\mathcal{V}$ , and when indices differ, ${l}_{ij}$ is the negation of the corresponding adjacency entry ${a}_{ij}$ . Mathematically, this is expressed as:

$$
{l}_{ij} = \left\{ \begin{array}{ll} \mathop{\sum }\limits_{{p = 1}}^{{m + n}}{a}_{ip}, & \text{ if }i = j, \\ - {a}_{ij}, & \text{ otherwise. } \end{array}\right.
$$

The subsequent assumption about the communication framework is established to guarantee the feasibility of containment control within the networked agent systems.
Assumption 1: For each follower, there exists at least one leader with a directed path from that leader to the follower.
Definition 1 ([21]): Let ${Z}_{n}$ be the collection of all $n \times n$ square matrices with non-positive off-diagonal elements, denoted as ${Z}_{n} \subset {\mathcal{R}}^{n \times n}$ . A matrix $Y$ is classified as a nonsingular M-matrix if it belongs to ${Z}_{n}$ and all its eigenvalues possess positive real parts.
Lemma 1 ([4]): Under Assumption 1, it is established that the matrix ${\mathcal{L}}_{F}$ qualifies as a nonsingular M-matrix. Furthermore, it holds that $- {\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{\mathbf{1}}_{m} = {\mathbf{1}}_{n}$ , and every component of $- {\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}$ is nonnegative.
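Lemma 1 is easy to verify numerically on a small example; the graph below is an assumed toy instance satisfying Assumption 1 (leaders indexed first, as in the node sets above), not one taken from the paper:

```python
import numpy as np

# Toy check of Lemma 1: m = 2 leaders (nodes 1, 2) and n = 3 followers.
# Edges (assumed example): follower 3 <- leader 1; follower 4 <- leader 2
# and follower 3; follower 5 <- follower 4.
m, n = 2, 3
A = np.zeros((m + n, m + n))
A[2, 0] = A[3, 1] = A[3, 2] = A[4, 3] = 1.0
L = np.diag(A.sum(axis=1)) - A
LF = L[m:, m:]                     # follower-to-follower block
LL = L[m:, :m]                     # leader-to-follower block
W = -np.linalg.inv(LF) @ LL
print(np.allclose(W @ np.ones(m), np.ones(n)))   # True: -LF^{-1} LL 1_m = 1_n
print(bool(np.all(W >= -1e-12)))                 # True: entries are nonnegative
```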
Definition 2 ([22]): Let $\Lambda$ be a subset of ${\mathcal{R}}^{n}$ . If for any ${z}_{1},{z}_{2} \in \Lambda$ and a scalar $0 < \gamma < 1$ , the linear combination $\left( {1 - \gamma }\right) {z}_{1} + \gamma {z}_{2}$ also belongs to $\Lambda$ , then $\Lambda$ is deemed a convex set. Given a vector $\chi$ with elements ${\chi }_{i}$ , the convex hull of $\chi$ , denoted as $\operatorname{Co}\left( \chi \right)$ , is the set of all vectors that can be expressed as $\mathop{\sum }\limits_{{i = 1}}^{n}{\gamma }_{i}{\chi }_{i}$ , where each ${\gamma }_{i} \geq 0$ and the sum $\mathop{\sum }\limits_{{i = 1}}^{n}{\gamma }_{i} = 1$ .

## B. Time-varying transformation
The objective of privacy-preserving containment control is to guide the followers into the convex hull spanned by the leaders, without revealing the initial states of the participating agents. To address this, the paper integrates a dynamic, time-variant transformation into the traditional containment control paradigm. This transformation enables each agent to modify its state according to the evolving function before sharing information with its neighbors. The employed transformation function is both standardized and perpetually updating, characterized as

$$
p : {\mathcal{R}}^{ + } \times {\mathcal{R}}^{h} \times {\mathcal{R}}^{d} \rightarrow {\mathcal{R}}^{h},\quad \left( {t, x, m}\right) \mapsto y\left( t\right) = \Lambda \left( {t, x\left( t\right) , m}\right) , \tag{2}
$$
where $x = {\left\lbrack {x}_{1},\ldots ,{x}_{h}\right\rbrack }^{T} \in {\mathcal{R}}^{h}$ is the agent's true state, $y = {\left\lbrack {y}_{1},\ldots ,{y}_{h}\right\rbrack }^{T} \in {\mathcal{R}}^{h}$ is the hidden state output after the time-varying transformation (both states have equal dimensions), and the parameter set $m \in {\mathcal{R}}^{d}$ represents the key of the time-varying transformation. The state output after the time-varying transformation is uniformly referred to as the hidden state in this paper. It is postulated that there exists a common system $\dot{x} = f\left( x\right)$ , whose dynamics after applying the time-varying transformation can be expressed as $\dot{x} = f\left( y\right)$ with $y = \Lambda \left( {t, x, m}\right)$ . If $\left| {\Lambda \left( {t, x, m}\right) - x\left( t\right) }\right|$ approaches zero under the given key $m$ , the transformation is referred to as a finite time-varying transformation, and the following condition holds

$$
\left\{ \begin{array}{l} \mathop{\lim }\limits_{{t \rightarrow \Omega }}\Lambda \left( {t, x\left( t\right) , m}\right) = x\left( t\right) , \\ \Lambda \left( {t, x\left( t\right) , m}\right) = x\left( t\right) , t \in \lbrack \Omega ,\infty ), \end{array}\right.
$$
where $\Omega$ denotes a finite time constant, indicating that the hidden state converges to the real state in finite time. The range of $\Omega$ is primarily determined by the values of the parameters in the key $m$ .

## C. Containment control problem description
In this paper, we focus on a single-integrator networked agent system. The dynamics of the follower agents are characterized by the following equation:

$$
{\dot{x}}_{i}\left( t\right) = {u}_{i}\left( t\right) , i \in {\mathcal{V}}_{F}, \tag{3}
$$
where ${x}_{i}\left( t\right)$ and ${u}_{i}\left( t\right)$ denote the position and control input of the $i$ -th follower agent, respectively.

Additionally, the dynamics of the leader agents are governed by the following equation:

$$
{\dot{x}}_{i}\left( t\right) = 0, i \in {\mathcal{V}}_{L}, \tag{4}
$$
where ${x}_{i}\left( t\right)$ denotes the position of the $i$ -th leader agent. The above dynamics mean that the leader agents' positions are stationary.
Definition 3: Consider a single-integrator networked agent system comprising $m$ leader agents and $n$ follower agents. The implementation of predefined time containment control requires that the position states of the followers converge to the convex hull defined by the leaders within a specified time $T$ . Specifically, for any given initial condition, the convergence is characterized by the satisfaction of the following equation:

$$
\mathop{\lim }\limits_{{t \rightarrow T}}\left| {{x}_{i}\left( t\right) - \mathop{\sum }\limits_{{k = 1}}^{m}{\varepsilon }_{ik}{x}_{k}\left( t\right) }\right| = 0, \tag{5}
$$
where ${\varepsilon }_{ik} \in \mathcal{R},{\varepsilon }_{ik} \geq 0$ and $\mathop{\sum }\limits_{{k = 1}}^{m}{\varepsilon }_{ik} = 1, i \in {\mathcal{V}}_{F}, k \in {\mathcal{V}}_{L}$ .
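The objective (5) says each follower tracks a convex combination of the leader positions. A minimal sketch with illustrative weights and positions, not values from the paper:

```python
import numpy as np

# Containment objective (5): with nonnegative weights eps_ik summing to one,
# each follower's target lies in the convex hull of the leader positions.
x_leaders = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])   # m = 3 leader positions
eps = np.array([0.2, 0.3, 0.5])                              # one follower's weights
assert eps.min() >= 0 and abs(eps.sum() - 1.0) < 1e-12
target = eps @ x_leaders            # sum_k eps_k * x_k, a point in the hull
x_follower = np.array([1.9, 1.4])   # a follower state near the hull
err = np.linalg.norm(x_follower - target)
print(target, round(err, 3))
```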

## III. MAIN RESULTS
This section designs a decentralized finite-time varying transformation function to serve as a privacy mask and incorporates the event-triggered mechanism and predefined time theory to enhance the performance of networked agent systems. The proposed containment controller jointly accounts for privacy preservation, communication bandwidth constraints, and convergence speed.
To safeguard the confidentiality of agents' initial state information, we introduce mutually independent functions into the process of information exchange among agents. Furthermore, the aforementioned time-varying function can be implemented as

$$
\left\{ \begin{array}{l} \mathop{\lim }\limits_{{t \rightarrow {T}_{i}}}{\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) , \\ {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) , t \in \lbrack {T}_{i},\infty ). \end{array}\right. \tag{6}
$$
According to the requirements of the finite-time varying function, the received information of follower agent $j$ from agent $i$ can be designed as

$$
\left\{ \begin{array}{ll} {\mathrm{R}}_{i}^{m}\left( t\right) = {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) , & \\ {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) + {a}_{i}{t}^{2} + {b}_{i}t + {c}_{i}, & t \in \left\lbrack {0,{\Omega }_{i}}\right) , \\ {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) , & t \in \left\lbrack {{\Omega }_{i},\infty }\right) , \end{array}\right.
$$

where ${\Omega }_{i}$ satisfies

$$
\left\{ \begin{array}{l} {\Omega }_{i} = \frac{-{b}_{i} - \sqrt{{b}_{i}^{2} - 4{a}_{i}{c}_{i}}}{2{a}_{i}},\;{b}_{i} \geq 0,{c}_{i} \geq 0,\text{ if }{a}_{i} \in \lbrack 0,\infty ), \\ {\Omega }_{i} = \frac{-{b}_{i} + \sqrt{{b}_{i}^{2} - 4{a}_{i}{c}_{i}}}{2{a}_{i}},\;{b}_{i} < 0,{c}_{i} < 0,\text{ if }{a}_{i} \in \left( -\infty ,0\right) , \end{array}\right.
$$
and ${a}_{i},{b}_{i},{c}_{i} \in \mathcal{R}$ , each agent has its distinctive encode key, denoted as ${m}_{i} = \left\{ {{a}_{i},{b}_{i},{c}_{i}}\right\}$ , noting that individual encode keys remain undisclosed to other agents.
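The quadratic mask can be sketched as follows; here the key is built by fixing the unmask time $\Omega_i$ first and solving for $c_i$, which is an assumed selection for illustration rather than the paper's sign conditions:

```python
# Sketch of the output mask: the true state is offset by a_i*t^2 + b_i*t + c_i
# until the unmask time Omega_i. Choosing c_i = -a_i*Omega_i^2 - b_i*Omega_i
# makes the offset vanish exactly at Omega_i (assumed selection rule).
def make_mask(a, b, omega):
    c = -a * omega ** 2 - b * omega
    def mask(t, x):
        return x + a * t ** 2 + b * t + c if t < omega else x
    return mask

mask = make_mask(a=-0.5, b=-1.0, omega=2.0)
x = 3.7                      # true state (held constant for illustration)
print(mask(0.0, x) != x)     # True: the initial state is hidden (offset c != 0)
print(mask(2.0, x) == x)     # True: mask vanishes from t = Omega_i onward
```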
Building upon the previously devised time-varying function and the acquired hidden information from neighboring agents, the predefined time containment control input for the $i$ th agent can be expressed as follows

$$
\left\{ \begin{array}{l} {u}_{i}\left( t\right) = - \left( {\rho + \delta \frac{\dot{\mu }}{\mu }}\right) \mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{\mathrm{R}}_{i}^{m}\left( t\right) - {\mathrm{R}}_{j}^{m}\left( t\right) }\right) , \\ {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) + {a}_{i}{t}^{2} + {b}_{i}t + {c}_{i}, t \in \left\lbrack {0,{\Omega }_{i}}\right) , \\ {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) , t \in \left\lbrack {{\Omega }_{i},\infty }\right) , \end{array}\right. \tag{7}
$$

where $\rho > 0$ represents the control gain, and $\mu$ denotes a time-varying scaling function, which takes the form of

$$
\mu \left( t\right) = \left\{ \begin{matrix} {\left( \frac{T}{T - t}\right) }^{h}, & t \in \lbrack 0, T), \\ 0, & t \in \lbrack T,\infty ), \end{matrix}\right.
$$
where the real number $h$ holds the condition $h > 2$ .
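As a small sanity check, $\mu \left( t\right)$ and the ratio $\dot{\mu }/\mu = h/\left( {T - t}\right)$ that enters the gain can be sketched as follows (the values $T = 1.5$ and $h = 3$ are illustrative, chosen only to satisfy $h > 2$):

```python
def mu(t, T=1.5, h=3):
    """Time-varying scaling function: mu(t) = (T/(T-t))^h on [0, T), 0 afterwards."""
    return (T / (T - t)) ** h if t < T else 0.0

def mu_ratio(t, T=1.5, h=3):
    """dot(mu)/mu = h/(T - t), valid on [0, T); follows from mu = T^h (T-t)^(-h)."""
    return h / (T - t)
```

Note that $\mu \left( 0\right) = 1$ and $\mu$ grows without bound as $t \rightarrow {T}^{-}$, which is what drives the bound $\parallel \varpi \left( t\right) \parallel \leq \mu {\left( t\right) }^{-1}\left( \cdot \right)$ to zero by the predefined time.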

Considering the practical challenges encountered in networked agent systems, which frequently involve communication limitations, incorporating an event-triggered mechanism can considerably reduce the consumption of communication resources. In this paper, we integrate such a mechanism into the aforementioned controller.

Assumption 2: When employing an event-triggered mechanism, it is presupposed that every agent can actively monitor its own state information in real time. Furthermore, agents disseminate relevant state updates contingent upon the fulfillment of the designed event-triggering condition.

To ensure synchronization among all agents, we establish a triggering sequence denoted as $\left\{ {{t}_{1},{t}_{2},\ldots ,{t}_{k}}\right\}$. This sequential arrangement guarantees that all agents update their controllers simultaneously at a unified triggering time. As a result, the control input (7) can be reformulated as

$$
{u}_{i}\left( t\right) = - \left( {\rho + \delta \frac{\dot{\mu }}{\mu }}\right) \mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{\mathrm{R}}_{i}^{m}\left( {t}_{k}\right) - {\mathrm{R}}_{j}^{m}\left( {t}_{k}\right) }\right) . \tag{8}
$$

For each agent, the state measurement error between the last triggered state and the current state is

$$
{e}_{i}^{m}\left( t\right) = {\mathrm{R}}_{i}^{m}\left( {t}_{k}\right) - {\mathrm{R}}_{i}^{m}\left( t\right) , t \in \left\lbrack {{t}_{k},{t}_{k + 1}}\right) . \tag{9}
$$

Substituting the state measurement error and the controller into the agent's dynamics yields

$$
\begin{aligned} {\dot{x}}_{i}\left( t\right) &= - {\mathrm{K}}_{\rho }\mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{\mathrm{R}}_{i}^{m}\left( {t}_{k}\right) - {\mathrm{R}}_{j}^{m}\left( {t}_{k}\right) }\right) \\ &= - {\mathrm{K}}_{\rho }\mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{e}_{i}^{m}\left( t\right) + {\mathrm{R}}_{i}^{m}\left( t\right) - \left( {{e}_{j}^{m}\left( t\right) + {\mathrm{R}}_{j}^{m}\left( t\right) }\right) }\right) \\ &= - {\mathrm{K}}_{\rho }\mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{e}_{i}^{m}\left( t\right) - {e}_{j}^{m}\left( t\right) }\right) - {\mathrm{K}}_{\rho }\mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{\mathrm{R}}_{i}^{m}\left( t\right) - {\mathrm{R}}_{j}^{m}\left( t\right) }\right) , \end{aligned}
$$

where ${\mathrm{K}}_{\rho } = \rho + \delta \frac{\dot{\mu }}{\mu }$, and its corresponding compact form can be represented as

$$
\begin{aligned} \dot{x}\left( t\right) &= - {\mathrm{K}}_{\rho }\mathcal{L}{\mathrm{R}}^{m}\left( t\right) - {\mathrm{K}}_{\rho }\mathcal{L}{e}^{m}\left( t\right) \\ &= - {\mathrm{K}}_{\rho }\left( {{\mathcal{L}}_{F}\left( {{\mathrm{R}}_{F}^{m}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{L}\left( {{\mathrm{R}}_{L}^{m}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right) }\right) , \end{aligned}
$$

where $x\left( t\right) = {\operatorname{col}}_{i}^{n + m}\left\lbrack {{x}_{i}\left( t\right) }\right\rbrack$, ${\mathrm{R}}_{F}^{m}\left( t\right) = {\operatorname{col}}_{i}^{n}\left\lbrack {{\mathrm{R}}_{Fi}^{m}\left( t\right) }\right\rbrack$, ${\mathrm{R}}_{L}^{m}\left( t\right) = {\operatorname{col}}_{i}^{m}\left\lbrack {{\mathrm{R}}_{Li}^{m}\left( t\right) }\right\rbrack$, ${e}_{L}^{m}\left( t\right) = {\operatorname{col}}_{i}^{m}\left\lbrack {{e}_{Li}^{m}\left( t\right) }\right\rbrack$, and ${e}_{F}^{m}\left( t\right) = {\operatorname{col}}_{i}^{n}\left\lbrack {{e}_{Fi}^{m}\left( t\right) }\right\rbrack$. Besides, let $A = {\operatorname{col}}_{i}^{n + m}\left\lbrack {a}_{i}\right\rbrack$, $B = {\operatorname{col}}_{i}^{n + m}\left\lbrack {b}_{i}\right\rbrack$, and $C = {\operatorname{col}}_{i}^{n + m}\left\lbrack {c}_{i}\right\rbrack$.

Accordingly, the whole closed-loop error system is

$$
\left\{ \begin{array}{l} \dot{x}\left( t\right) = - {\mathrm{K}}_{\rho }\mathcal{L}{\mathrm{R}}^{m}\left( t\right) - {\mathrm{K}}_{\rho }\mathcal{L}{e}^{m}\left( t\right) , \\ {\mathrm{R}}^{m}\left( t\right) = x\left( t\right) + m\left( t\right) , \end{array}\right. \tag{10}
$$

where

$$
m\left( t\right) = \left\{ \begin{array}{l} A{t}^{2} + {Bt} + C, \; t \in \left\lbrack {0,{T}^{1}}\right) , \\ {A}_{{m}_{1}}{t}^{2} + {B}_{{m}_{1}}t + C, \; t \in \left\lbrack {{T}^{1},{T}^{2}}\right) , \\ \vdots \\ {A}_{{m}_{1}\ldots {m}_{N - 1}}{t}^{2} + {B}_{{m}_{1}\ldots {m}_{N - 1}}t + C, \; t \in \left\lbrack {{T}^{N - 1},{T}^{N}}\right) , \\ 0, \; t \in \left\lbrack {{T}^{N},\infty }\right) . \end{array}\right.
$$

To address predefined-time privacy-preserving containment control under the event-triggered mechanism, we design the event-triggering condition (ETC) for the networked agent system as

$$
{t}_{k + 1} = \inf \left\{ {t > {t}_{k} : \begin{Vmatrix}{{e}^{m}\left( t\right) }\end{Vmatrix} \geq \left( {1 - \varepsilon }\right) \frac{{\mathrm{K}}_{\rho }^{\lambda }}{{\mathrm{K}}_{\rho }}\frac{\parallel \varpi \left( t\right) \parallel }{\parallel \mathcal{L}\parallel }}\right\} , \tag{11}
$$

where ${\mathrm{K}}_{\rho } = \rho + \delta \frac{\dot{\mu }}{\mu }$, ${\mathrm{K}}_{\rho }^{\lambda } = \rho {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) + \delta \frac{\dot{\mu }}{\mu }$, $\varepsilon \in \left( {0,1}\right)$, and ${\lambda }_{2}\left( {\mathcal{L}}_{F}\right)$ is the second smallest eigenvalue of the Laplacian matrix ${\mathcal{L}}_{F}$. Upon the occurrence of a triggering event, all agents discard their previously sampled state, sample their current state to update their controllers, and transmit the newly sampled state to their neighboring agents. Throughout the inter-event period, the control inputs remain constant until the next triggering instant, at which the event-triggering condition is violated.
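A minimal sketch of evaluating condition (11) at run time is given below (NumPy, with illustrative argument names; `K_rho` and `K_rho_lam` are the scalar gains ${\mathrm{K}}_{\rho }$ and ${\mathrm{K}}_{\rho }^{\lambda }$, and $\parallel \mathcal{L}\parallel$ is taken here as the spectral norm):

```python
import numpy as np

def should_trigger(e_m, varpi, L, K_rho, K_rho_lam, eps=0.5):
    """Check the event-triggering condition (11): return True when the stacked
    measurement error ||e^m|| reaches the state-dependent threshold
    (1 - eps) * (K_rho_lam / K_rho) * ||varpi|| / ||L||. eps must lie in (0, 1)."""
    threshold = (1 - eps) * (K_rho_lam / K_rho) \
        * np.linalg.norm(varpi) / np.linalg.norm(L, 2)
    return bool(np.linalg.norm(e_m) >= threshold)
```

Between triggers the controller is held constant while ${e}^{m}\left( t\right)$ grows from zero, so in practice this check would be re-evaluated at every integration step.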

Theorem 1: Under the event-triggering condition (11) and control input (8), predefined-time privacy-preserving containment control for the networked agent system with graph $\mathcal{G}$ can be achieved, provided the ETC parameter satisfies $\varepsilon \in \left( {0,1}\right)$.

Proof: The proof of Theorem 1 consists of two parts: convergence analysis and privacy analysis.

(I) Convergence analysis: The vector $x\left( t\right)$ can be divided into sub-vectors ${x}_{F}\left( t\right)$ and ${x}_{L}\left( t\right)$. Based on Definition 3, we define the containment error as $\varpi \left( t\right) = {x}_{F}\left( t\right) - \left( {-{\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{x}_{L}\left( t\right) }\right)$, and the Lyapunov function is adopted as

$$
V\left( t\right) = \varpi {\left( t\right) }^{T}\varpi \left( t\right) . \tag{12}
$$

Noting the leader agents' dynamics model (4), in which the leaders are static so that ${\dot{x}}_{L}\left( t\right) = 0$, it yields

$$
\dot{\varpi }\left( t\right) = {\dot{x}}_{F}\left( t\right) - \left( {-{\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{\dot{x}}_{L}\left( t\right) }\right) = {\dot{x}}_{F}\left( t\right) .
$$

Taking the derivative of the Lyapunov function $V\left( t\right)$, one obtains

$$
\begin{aligned} \dot{V}\left( t\right) &= \varpi {\left( t\right) }^{T}\dot{\varpi }\left( t\right) = \varpi {\left( t\right) }^{T}{\dot{x}}_{F}\left( t\right) \\ &= \varpi {\left( t\right) }^{T}\left( {-{\mathrm{K}}_{\rho }\left( {{\mathcal{L}}_{F}\left( {{\mathrm{R}}_{F}^{m}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{L}\left( {{\mathrm{R}}_{L}^{m}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right) }\right) }\right) \\ &= - \rho \varpi {\left( t\right) }^{T}\left( {{\mathcal{L}}_{F}\left( {{\mathrm{R}}_{F}^{m}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{L}\left( {{\mathrm{R}}_{L}^{m}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right) }\right) \\ &\quad - \delta \frac{\dot{\mu }}{\mu }\varpi {\left( t\right) }^{T}\left( {{\mathcal{L}}_{F}\left( {{\mathrm{R}}_{F}^{m}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{L}\left( {{\mathrm{R}}_{L}^{m}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right) }\right) . \end{aligned}
$$

To satisfy the privacy-preserving requirement in designing the time-varying transformation function, it is essential to ensure that ${T}^{N}$, the moment at which the final time-varying function converges to its corresponding true state, is less than $T$. Notably, the value of $m\left( t\right)$ decreases monotonically as $t$ increases on the interval $\left\lbrack {0,{T}^{N}}\right)$, and it equals zero for $t \in \left\lbrack {{T}^{N}, T}\right)$. This further yields $\mathop{\lim }\limits_{{t \rightarrow {T}^{N}}}{\mathrm{R}}_{F}^{m}\left( t\right) = {x}_{F}\left( t\right)$ and $\mathop{\lim }\limits_{{t \rightarrow {T}^{N}}}{\mathrm{R}}_{L}^{m}\left( t\right) = {x}_{L}\left( t\right)$. Based on Lemma 1 in [11], it follows that

$$
\begin{aligned} & {\mathcal{L}}_{F}\left( {{x}_{F}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{L}\left( {{x}_{L}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right) \\ &= {\mathcal{L}}_{F}\left( {{x}_{F}\left( t\right) + {e}_{F}^{m}\left( t\right) + {\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}\left( {{x}_{L}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right) }\right) \\ &= {\mathcal{L}}_{F}\left( {{x}_{F}\left( t\right) + {\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{x}_{L}\left( t\right) }\right) + {\mathcal{L}}_{F}{e}_{F}^{m}\left( t\right) + {\mathcal{L}}_{L}{e}_{L}^{m}\left( t\right) \\ &= {\mathcal{L}}_{F}\varpi \left( t\right) + \mathcal{L}{e}^{m}\left( t\right) . \end{aligned}
$$

Note that ${\mathcal{L}}_{F} \in {\mathcal{R}}^{n \times n}$ denotes the sub-Laplacian matrix among the follower agents; hence $\varpi {\left( t\right) }^{T}{\mathcal{L}}_{F}\varpi \left( t\right) \geq {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) \varpi {\left( t\right) }^{T}\varpi \left( t\right)$, which derives

$$
\begin{aligned} \dot{V}\left( t\right) &\leq - {\mathrm{K}}_{\rho }^{\lambda }V\left( t\right) - {\mathrm{K}}_{\rho }\varpi {\left( t\right) }^{T}\left( {{\mathcal{L}}_{F}{e}_{F}^{m}\left( t\right) + {\mathcal{L}}_{L}{e}_{L}^{m}\left( t\right) }\right) \\ &= - \varepsilon {\mathrm{K}}_{\rho }^{\lambda }V\left( t\right) - \left( {1 - \varepsilon }\right) {\mathrm{K}}_{\rho }^{\lambda }V\left( t\right) - {\mathrm{K}}_{\rho }\varpi {\left( t\right) }^{T}\mathcal{L}{e}^{m}\left( t\right) \\ &\leq - \varepsilon {\mathrm{K}}_{\rho }^{\lambda }V\left( t\right) - \left( {1 - \varepsilon }\right) {\mathrm{K}}_{\rho }^{\lambda }\parallel \varpi {\parallel }^{2} + {\mathrm{K}}_{\rho }\parallel \varpi \parallel \begin{Vmatrix}{\mathcal{L}{e}^{m}}\end{Vmatrix}. \end{aligned}
$$

Considering the designed event-triggering condition (11) and the condition $\varepsilon \in \left( {0,1}\right)$, it follows that

$$
{\mathrm{K}}_{\rho }\begin{Vmatrix}{\mathcal{L}{e}^{m}\left( t\right) }\end{Vmatrix} \leq \left( {1 - \varepsilon }\right) {\mathrm{K}}_{\rho }^{\lambda }\parallel \varpi \left( t\right) \parallel .
$$

Accordingly, since $\delta \geq 1$, it yields

$$
\dot{V}\left( t\right) \leq - \left( {\rho {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) + \frac{\dot{\mu }}{\mu }}\right) \varpi {\left( t\right) }^{T}\varpi \left( t\right) = - \rho {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) V - \frac{\dot{\mu }}{\mu }V.
$$

Fig. 1. The communication topology among twelve agents.

According to Lemma 1 in [11], one has

$$
V\left( t\right) \leq \mu {\left( t\right) }^{-2}{e}^{-\rho {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) \left( {t - {T}^{N}}\right) }V\left( {T}^{N}\right) . \tag{13}
$$

Then $\parallel \varpi \left( t\right) \parallel \leq \mu {\left( t\right) }^{-1}{e}^{-\rho {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) \left( {t - {T}^{N}}\right) }\begin{Vmatrix}{\varpi \left( {T}^{N}\right) }\end{Vmatrix}$. Since $\mathop{\lim }\limits_{{t \rightarrow {T}^{ - }}}\mu {\left( t\right) }^{-1} = 0$, it yields $\mathop{\lim }\limits_{{t \rightarrow {T}^{ - }}}\parallel \varpi \left( t\right) \parallel = 0$. That is, when $t \rightarrow {T}^{ - }$, the condition ${x}_{F}\left( t\right) - \left( {-{\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{x}_{L}\left( t\right) }\right) = 0$ holds. Based on equation (46) of [19] and Definitions 2-3, $- {\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{x}_{L}\left( t\right)$ is the convex hull signal formed by the leaders; when $\varpi \left( t\right) = 0$, all followers converge within the convex hull formed by the leaders. Therefore, containment control of the networked agent system is achieved within the predefined time $T$. Since the finite time-varying transformation is only applied on the interval $\lbrack 0, T)$, the predefined-time containment problem reduces to the general case discussed in [11] for $t \in \lbrack T,\infty )$. For further details, interested readers can refer to Theorem 1 in [11].

(II) Privacy analysis: Consider a scenario where the dynamics $f\left( \cdot \right)$ of all agents are widely known and each agent has access to the hidden output states ${\mathrm{R}}_{i}^{m}\left( t\right)$ of its neighboring agents, while the true states ${x}_{i}\left( t\right)$ and the encode keys $\left\{ {{a}_{i},{b}_{i},{c}_{i}}\right\}$ are private information exclusive to each agent. For an honest-but-curious agent, the accessible information includes the unsigned graph $\mathcal{G}$, its own state and its set of neighboring agents, and the hidden states of itself and its neighbors. After the finite time-varying transformation is applied to conceal agent $i$'s initial state, the resulting hidden output ${\mathrm{R}}_{i}^{m}\left( t\right)$ bears no resemblance to the true initial value ${x}_{i}\left( 0\right)$. As a result, any information set acquired by an honest-but-curious agent is useless for determining agent $i$'s true initial state, and such an agent cannot reconstruct it even by employing the findings presented in [23]. By the same argument, external eavesdroppers are likewise unable to obtain the true initial state.

## IV. SIMULATION

In this section, several numerical simulations are conducted to verify the effectiveness of the theoretical analysis. The simulation involves a networked agent system comprising 12 agents: six followers and six leaders. Fig. 1 displays the communication topology among the agents. The numerical simulations are performed in 2-D space. The initial position states of all agents are set as ${x}^{1}\left( 0\right) = {\left\lbrack -{10},0,{10},{10},0, - {10}, - {30}, - 5,{20},{30},5, - {15}\right\rbrack }^{T}$ and ${x}^{2}\left( 0\right) = {\left\lbrack 5,5,5, - 5, - 5, - 5,5,{20},{25}, - {10}, - {15}, - {20}\right\rbrack }^{T}$. The parameter $\varepsilon$ is set to 0.5, and the predefined time is $T = {1.5}\mathrm{\;s}$. The encode keys are selected as

$$
A = {\left\lbrack -5, - 9, - 5,8, - 3,6, - 4,5,6, - 4,5, - 3\right\rbrack }^{T},
$$

$$
B = {\left\lbrack 2,4,3, - 4,1, - 3,2, - 1, - 3,2, - 1,1\right\rbrack }^{T},
$$

$$
C = {\left\lbrack 3,4,1, - 3,2, - 2,1, - 3, - 2,1, - 3,2\right\rbrack }^{T}.
$$
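To illustrate the containment target $-{\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{x}_{L}$ that the followers converge to, a toy example can be checked numerically. The 4-agent graph and the leader positions below are hypothetical, chosen only for illustration, not the twelve-agent topology of Fig. 1:

```python
import numpy as np

# Hypothetical graph: followers 1-2, static leaders 3-4.
# The Laplacian rows are ordered [followers; leaders]; the leader rows are
# zero because the leaders are static and receive no input.
L = np.array([
    [ 2., -1., -1.,  0.],
    [-1.,  2.,  0., -1.],
    [ 0.,  0.,  0.,  0.],
    [ 0.,  0.,  0.,  0.],
])
L_F = L[:2, :2]            # follower-follower sub-Laplacian
L_L = L[:2, 2:]            # follower-leader coupling block
x_L = np.array([0., 10.])  # illustrative 1-D leader positions

target = -np.linalg.inv(L_F) @ L_L @ x_L
# Each entry of `target` lies in the convex hull [0, 10] of the leader states.
```

For these values the targets come out as $10/3$ and $20/3$, both inside the interval spanned by the leader positions, consistent with Definition 3.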

Fig. 2. The true and masked states of all agents.

Fig. 3. The control input of the follower agents.

The simulation results are depicted in Figs. 2-5. The trajectories of the agents in the ${x}^{1}$ direction are illustrated in Fig. 2, with the subfigure highlighting the masked trajectories of all agents. This indicates that the proposed method effectively preserves the privacy of the agents' initial states and achieves predefined-time convergence within 1.5 s. Fig. 3 presents the control input trajectories of all follower agents, where the abrupt changes in the trajectories are attributed to the event-triggered mechanism. Fig. 4 demonstrates the fulfillment of the event-triggering condition: when the designed boundary threshold is exceeded, the agents' states are sampled and updated. Fig. 5 shows that all followers successfully move from their initial positions into the convex hull formed by the fixed leaders, achieving privacy-preserving event-triggered predefined-time containment control for the networked agent system.

Fig. 4. The trajectory of the state measurement error and boundary threshold.

Fig. 5. The trajectory of all agents in the 2-D plane under the designed containment control input. (Square markers represent the followers, and circular markers represent the leaders. The leaders form a rectangular convex hull.)

## V. CONCLUSION

This paper has addressed the privacy-preserving event-triggered predefined-time containment control problem for networked agent systems. A novel containment control scheme has been developed that integrates privacy protection with an event-triggered mechanism. This integration optimizes network efficiency by minimizing unnecessary data transmission while ensuring robust containment within a specified time frame. The proposed control scheme ensures the confidentiality of agents' information through output masking, thereby maintaining both privacy and control accuracy. Its effectiveness has been verified through simulation results. Note that this study has focused on static leaders; future research will extend the investigation to containment control under dynamic leaders.

## REFERENCES

[1] Z. Wang, H. Li, J. Liu, T. Zhang, X. Ma, S. Xie, and J. Luo, "Static group-bipartite consensus in networked robot systems with integral action," International Journal of Advanced Robotic Systems, vol. 20, no. 3, p. 17298806231177148, 2023.

[2] C. Feng, Z. Xu, X. Zhu, P. V. Klaine, and L. Zhang, "Wireless distributed consensus in vehicle to vehicle networks for autonomous driving," IEEE Transactions on Vehicular Technology, vol. 72, no. 6, pp. 8061-8073, 2023.

[3] E. Arabi, T. Yucelen, and W. M. Haddad, "Mitigating the effects of sensor uncertainties in networked multi-agent systems," Journal of Dynamic Systems, Measurement, and Control, vol. 139, no. 4, p. 041003, 2017.

[4] H. Zhou and S. Tong, "Adaptive neural network event-triggered output-feedback containment control for nonlinear mass with input quantization," IEEE Transactions on Cybernetics, vol. 53, no. 11, pp. 7406-7416, 2023.

[5] X. Wang, N. Pang, Y. Xu, T. Huang, and J. Kurths, "On state-constrained containment control for nonlinear multiagent systems using event-triggered input," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2024, doi: 10.1109/TSMC.2023.3345365.

[6] X. Shao, H. Liu, W. Zhang, J. Zhao, and Q. Zhang, "Path driven formation-containment control of multiple UAVs: A path-following framework," Aerospace Science and Technology, vol. 135, p. 108168, 2023.

[7] X. Wang, R. Xu, T. Huang, and J. Kurths, "Event-triggered adaptive containment control for heterogeneous stochastic nonlinear multiagent systems," IEEE Transactions on Neural Networks and Learning Systems, 2023, doi: 10.1109/TNNLS.2022.3230508.

[8] S. Tong and H. Zhou, "Finite-time adaptive fuzzy event-triggered output-feedback containment control for nonlinear multiagent systems with input saturation," IEEE Transactions on Fuzzy Systems, vol. 31, no. 9, pp. 3135-3147, 2023.

[9] Z. Zhu, Y. Yin, F. Wang, Z. Liu, and Z. Chen, "Practical robust fixed-time containment control for multi-agent systems under actuator faults," Expert Systems with Applications, vol. 245, p. 123152, 2024.

[10] T. Yang and J. Dong, "Funnel-based predefined-time containment control of heterogeneous multiagent systems with sensor and actuator faults," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2023, doi: 10.1109/TSMC.2023.3330942.

[11] Y. Wang, Y. Song, D. J. Hill, and M. Krstic, "Prescribed-time consensus and containment control of networked multiagent systems," IEEE Transactions on Cybernetics, vol. 49, no. 4, pp. 1138-1147, 2018.

[12] X. Gong and X. Li, "Fault-tolerant practical prescribed-time formation-containment control of multi-agent systems on directed graphs," IEEE Transactions on Network Science and Engineering, 2023, doi: 10.1109/TNSE.2023.3298719.

[13] S. Chang, C. Wang, and X. Luo, "Predefined-time bipartite containment control of multi-agent systems with novel super-twisting extended state observer," Information Sciences, p. 120952, 2024.

[14] X. Chen, L. Huang, L. He, S. Dey, and L. Shi, "A differentially private method for distributed optimization in directed networks via state decomposition," IEEE Transactions on Control of Network Systems, vol. 10, no. 4, pp. 2165-2177, 2023.

[15] C. Gao, D. Zhao, J. Li, and H. Lin, "Private bipartite consensus control for multi-agent systems: A hierarchical differential privacy scheme," Information Fusion, vol. 105, p. 102259, 2024.

[16] L. Liang, R. Ding, S. Liu, and R. Su, "Event-triggered privacy preserving consensus control with edge-based additive noise," IEEE Transactions on Automatic Control, 2024, doi: 10.1109/TAC.2024.3390574.

[17] Y. Gong, L. Cao, Y. Pan, and Q. Lu, "Adaptive containment control of nonlinear multi-agent systems about privacy preservation with multiple attacks," International Journal of Robust and Nonlinear Control, vol. 33, no. 11, pp. 6103-6120, 2023.

[18] M. Zhang, Y. Sun, H. Liu, X. Yi, and D. Ding, "Event-triggered formation-containment control for multi-agent systems based on sliding mode control approaches," Neurocomputing, vol. 562, p. 126905, 2023.

[19] Y. Liu, X. Xie, J. Sun, and D. Yang, "Event-triggered privacy preservation consensus control and containment control for nonlinear mass: An output mask approach," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2024, doi: 10.1109/TSMC.2024.3379375.

[20] J. Zhang, J. Lu, and J. Lou, "Privacy-preserving average consensus via finite time-varying transformation," IEEE Transactions on Network Science and Engineering, vol. 9, no. 3, pp. 1756-1764, 2022.

[21] A. Berman and R. J. Plemmons, Nonnegative matrices in the mathematical sciences. SIAM, 1994.

[22] R. T. Rockafellar, Convex analysis. Princeton University Press, 1970.

[23] J. Yue, K. Qin, M. Shi, B. Jiang, W. Li, and L. Shi, "Event-trigger-based finite-time privacy-preserving formation control for multi-UAV system," Drones, vol. 7, no. 4, p. 235, 2023.

papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/3dNL0Q0j8f/Initial_manuscript_tex/Initial_manuscript.tex
| 1 |
+
§ PRIVACY-PRESERVING EVENT-TRIGGERED PREDEFINED TIME CONTAINMENT CONTROL FOR NETWORKED AGENT SYSTEMS
|
| 2 |
+
|
| 3 |
+
Weihao ${\mathrm{{Li}}}^{1,2,3, \dagger }$ , Jiangfeng ${\mathrm{{Yue}}}^{1,2,3, \dagger }$ , Mengji ${\mathrm{{Shi}}}^{1,2,3, * }$ , Boxian ${\mathrm{{Lin}}}^{1,2,3}$ , Kaiyu ${\mathrm{{Qin}}}^{1,2,3}$
|
| 4 |
+
|
| 5 |
+
${}^{1}$ School of Aeronautics and Astronautics, University of Electronic Science and Technology of China, Chengdu, China.
|
| 6 |
+
|
| 7 |
+
${}^{2}$ Aircraft Swarm Intelligent Sensing and Cooperative Control Key Laboratory of Sichuan Province, Chengdu, China.
|
| 8 |
+
|
| 9 |
+
${}^{3}$ National Laboratory on Adaptive Optics, Chengdu,610209, China.
|
| 10 |
+
|
| 11 |
+
Email: maangat@126.com
|
| 12 |
+
|
| 13 |
+
Abstract-This paper addresses the privacy-preserving event-triggered predefined time containment control problem for networked agent systems. A novel containment control scheme is developed that integrates privacy protection with event-triggered mechanisms, optimizing network efficiency by minimizing unnecessary data transmission while ensuring robust containment within a specified time frame. The proposed control scheme ensures the confidentiality of agents' information through output masking, thereby maintaining both privacy and control accuracy. Furthermore, it provides a distinct advantage over traditional finite-time and fixed-time control methods by guaranteeing convergence to the desired state within a predefined time, regardless of initial conditions. Finally, some simulation results are given to verify the effectiveness of the proposed containment control scheme.
|
| 14 |
+
|
| 15 |
+
Index Terms-Containment Control; Privacy-preserving; Predefined Time; Event-triggered Control; Networked Agent Systems.
|
| 16 |
+
|
| 17 |
+
§ I. INTRODUCTION
|
| 18 |
+
|
| 19 |
+
Networked agent systems have garnered significant attention across various fields due to their broad range of applications, including robotics [1], autonomous vehicles [2], and distributed sensor networks [3]. The cooperative control of networked agent systems involves designing strategies that enable agents to work together effectively to achieve shared objectives. A prominent approach within cooperative control is containment control [4], [5], which aims to ensure that a group of agents (followers) remains within a specified region or adheres to a particular trajectory, while another group of agents (leaders) directs their behavior. Containment control is particularly crucial in scenarios requiring strict spatial or operational constraint adherence. For instance, in a formation flying scenario, containment control can ensure that a group of drones maintains a specific formation while another set of drones guides their collective movement [6].
|
| 20 |
+
|
| 21 |
+
Convergence speed is a critical performance metric in the containment control of networked agent systems. Current research explores several approaches to achieving convergence, including asymptotic convergence [7], finite-time convergence [8], and fixed-time convergence [9]. Asymptotic convergence guarantees that the system will eventually converge to the desired state over time, although the convergence rate may not be specified. Finite-time convergence ensures that the system reaches the desired state within a finite period, though the exact time depends on system parameters and states. Fixed-time convergence provides a guarantee of convergence within a predetermined time, irrespective of initial conditions, thereby offering more predictability in performance. However, the convergence time in both finite-time and fixed-time approaches is influenced by system parameters and states. To address this, researchers have developed predefined time control schemes that enable the specification of a desired convergence time [10], [11]. The primary advantages of predefined-time control include the ability to guarantee convergence within a specified time frame, thereby providing more predictable and controllable system behavior, and enhancing system performance by setting precise deadlines for achieving the desired state.
The existing literature [10]-[13] on predefined-time convergence in networked agent systems generally overlooks the issue of information privacy during transmission. However, privacy protection is of paramount importance in containment control, where safeguarding the confidentiality of agents' information is critical. Several methods for privacy protection have been proposed, including state decomposition [14], differential privacy [15], additive noise [16], and output masking [17]. Among these, output masking has received considerable attention due to its simplicity and ease of implementation. This method involves obscuring the output of agents to protect sensitive information while still allowing effective control. However, output masking relies on continuous information exchange, which can impose constraints on communication bandwidth. To address this limitation, it is necessary to develop privacy protection schemes under event-triggered mechanisms [18], [19], which can alleviate communication bandwidth constraints. In [19], the authors integrated both privacy preservation and event-triggered mechanisms into consensus and containment control but overlooked predefined-time performance. Zhang et al. [20] incorporated prescribed-time theory and privacy preservation into consensus control but neglected bandwidth constraints. In conclusion, to the best of the authors' knowledge, no existing solution simultaneously addresses the challenges of communication bandwidth, convergence time, and privacy protection in containment control, making this an area of significant research opportunity.
$\dagger$: These authors contributed equally to this paper.
This work was supported by the Natural Science Foundation of Sichuan Province (2022NSFSC0037), the Sichuan Science and Technology Programs (2022JDR0107, 2021YFG0130, MZGC20230069, MZGC20240139), the Fundamental Research Funds for the Central Universities (ZYGX2020J020), the Wuhu Science and Technology Plan Project (2022yf23). (Corresponding author: Mengji Shi.)
According to the above discussion, this paper focuses on the privacy-preserving event-triggered predefined time containment control problem of networked agent systems. The main contributions of this paper are summarized as follows:
(1) A novel event-triggered predefined-time containment control scheme is developed to optimize network efficiency while ensuring robust containment performance within a specified time frame. By employing event-triggered control, the scheme significantly reduces unnecessary data transmission, ensuring that agents communicate only when necessary. This approach effectively balances communication efficiency and system performance.
(2) The proposed control scheme guarantees convergence within a predefined time, offering a distinct advantage over finite-time and fixed-time methods. Unlike these traditional methods, where convergence time is often influenced by initial conditions and system parameters, the predefined time control ensures that the desired state is consistently reached within the predetermined time frame, thereby enhancing the predictability and reliability of the system.
(3) Furthermore, a privacy-preserving containment control scheme is designed to safeguard the confidentiality of agents' information by masking their outputs while maintaining accurate control. Compared to alternative privacy protection methods such as differential privacy or state decomposition, this scheme provides a simpler and more efficient solution. It ensures both privacy and communication efficiency without compromising the overall system performance, making it particularly suitable for applications with stringent privacy and bandwidth requirements.
The remainder of this paper is organized as follows. Section II presents the preliminaries and formulates the problem. Section III designs the privacy-preserving event-triggered predefined-time containment controller and analyzes its convergence and privacy. Section IV provides numerical simulation examples, and Section V concludes the paper.
§ II. PRELIMINARY AND PROBLEM FORMULATION
§ A. PRELIMINARIES
The communication structure among agents in this study is represented by a graph topology denoted as $\mathcal{G} = \langle \mathcal{V},\mathcal{E},\mathcal{A}\rangle$ , where $\mathcal{V},\mathcal{E}$ , and $\mathcal{A}$ correspond to the set of nodes, the set of edges, and the adjacency matrix, respectively. The network consists of a total of $N = m + n$ agents, with $n$ being the number of follower agents and $m$ being the number of leader agents. The leader and follower agents are categorized into sets ${\mathcal{V}}_{L} = \{ 1,2,\ldots ,m\}$ and ${\mathcal{V}}_{F} = \{ m + 1,m + 2,\ldots ,m + n\}$ , respectively. Consequently, the overall set of nodes is formed by the union of these two sets, $\mathcal{V} = {\mathcal{V}}_{F} \cup {\mathcal{V}}_{L}$ . Following the definitions of the node sets, the adjacency matrix is represented as $\mathcal{A} = \left\lbrack {a}_{ij}\right\rbrack \in {\mathcal{R}}^{\left( {n + m}\right) \times \left( {n + m}\right) }$ , where the element ${a}_{ij}$ is positive if there exists an edge from node $j$ to node $i$ within the set $\mathcal{E}$ , and zero otherwise. It is assumed that leaders have no in-neighbors, i.e., leaders solely disseminate information to followers. The Laplacian matrix $\mathcal{L}$ for the network of agents is derived as $\mathcal{L} = \mathcal{D} - \mathcal{A}$ , where the degree matrix $\mathcal{D}$ is a diagonal matrix with diagonal elements ${d}_{i} = \mathop{\sum }\limits_{{k = 1}}^{{n + m}}{a}_{ik}$ , i.e., ${d}_{i}$ is the sum of the adjacency matrix elements in the $i$ -th row.
Based on the aforementioned definitions, the Laplacian matrix is constructed as follows:
$$
\mathcal{L} = \left\lbrack \begin{matrix} {\mathbf{0}}_{m \times m} & {\mathbf{0}}_{m \times n} \\ {\mathcal{L}}_{L} & {\mathcal{L}}_{F} \end{matrix}\right\rbrack , \tag{1}
$$
where the sub-Laplacian matrix specific to the follower agents is denoted as ${\mathcal{L}}_{F} \in {\mathcal{R}}^{n \times n}$ , and the sub-Laplacian matrix that captures the interactions between leader and follower agents is represented by ${\mathcal{L}}_{L} \in {\mathcal{R}}^{n \times m}$ . The elements of ${\mathcal{L}}_{F}$ , denoted as $\left\lbrack {l}_{ij}\right\rbrack$ , are defined such that when indices match, ${l}_{ij}$ equals the sum of the adjacency matrix entries ${a}_{ip}$ for all $p$ in the set of nodes $\mathcal{V}$ , and when indices differ, ${l}_{ij}$ is the negation of the corresponding adjacency entry ${a}_{ij}$ . Mathematically, this is expressed as:
$$
{l}_{ij} = \left\{ \begin{array}{ll} \mathop{\sum }\limits_{{p = 1}}^{{m + n}}{a}_{ip}, & \text{ if }i = j, \\ - {a}_{ij}, & \text{ otherwise. } \end{array}\right.
$$
The subsequent assumption about the communication framework is established to guarantee the feasibility of containment control within the networked agent systems.
Assumption 1: For each follower, there exists at least one leader from which there is a directed path to that follower.
Definition 1 ([21]): Let ${Z}_{n}$ be the collection of all $n \times n$ square matrices with non-positive off-diagonal elements, denoted as ${Z}_{n} \subset {\mathcal{R}}^{n \times n}$ . A matrix $Y$ is classified as a nonsingular M-matrix if it belongs to ${Z}_{n}$ and all its eigenvalues possess positive real parts.
Lemma 1 ([4]): Under Assumption 1, it is established that the matrix ${\mathcal{L}}_{F}$ qualifies as a nonsingular M-matrix. Furthermore, it holds that $- {\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{\mathbf{1}}_{m} = {\mathbf{1}}_{n}$ , and every component of $- {\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}$ is nonnegative.
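As a quick sanity check, Lemma 1 can be verified numerically on a small example. The topology, weights, and node ordering below are hypothetical (they are not the paper's Fig. 1); the check itself is just the statement of the lemma:

```python
import numpy as np

# Hypothetical topology: leaders are nodes 0,1 and followers are nodes 2,3,4
# (leaders first, matching the ordering V_L = {1,...,m} of the paper).
# a_ij > 0 iff agent i receives information from agent j; leaders have no
# in-edges, so their Laplacian rows are zero.
A = np.zeros((5, 5))
A[2, 0] = A[2, 3] = 1.0   # follower 2 hears leader 0 and follower 3
A[3, 1] = A[3, 2] = 1.0   # follower 3 hears leader 1 and follower 2
A[4, 2] = A[4, 3] = 1.0   # follower 4 hears followers 2 and 3

L = np.diag(A.sum(axis=1)) - A    # L = D - A
L_L = L[2:, :2]                   # leader-follower coupling block (n x m)
L_F = L[2:, 2:]                   # follower sub-Laplacian (n x n)

# Lemma 1: L_F is a nonsingular M-matrix, and -L_F^{-1} L_L is a
# nonnegative matrix whose rows sum to one.
W = -np.linalg.solve(L_F, L_L)
assert np.all(np.real(np.linalg.eigvals(L_F)) > 0)   # eigenvalues in the open right half-plane
assert np.all(W >= -1e-12)                           # nonnegative entries
assert np.allclose(W.sum(axis=1), 1.0)               # -L_F^{-1} L_L 1_m = 1_n
```

Note that Assumption 1 holds for this graph: follower 4 has no direct leader, but both leaders reach it through followers 2 and 3.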
Definition 2 ([22]): Let $\Lambda$ be a subset of ${\mathcal{R}}^{n}$ . If for any ${z}_{1},{z}_{2} \in \Lambda$ and a scalar $0 < \gamma < 1$ , the linear combination $\left( {1 - \gamma }\right) {z}_{1} + \gamma {z}_{2}$ also belongs to $\Lambda$ , then $\Lambda$ is deemed a convex set. Given a vector $\chi$ with elements ${\chi }_{i}$ , the convex hull of $\chi$ , denoted as $\operatorname{Co}\left( \chi \right)$ , is the set of all vectors that can be expressed as $\mathop{\sum }\limits_{{i = 1}}^{n}{\gamma }_{i}{\chi }_{i}$ , where each ${\gamma }_{i} \geq 0$ and the sum $\mathop{\sum }\limits_{{i = 1}}^{n}{\gamma }_{i} = 1$ .
§ B. TIME-VARYING TRANSFORMATION
The objective of privacy-preserving containment control is to guide the followers into the convex hull spanned by the leaders without revealing the initial states of the participating agents. To this end, this paper integrates a dynamic, time-varying transformation into the traditional containment control paradigm. This transformation enables each agent to modify its state according to the evolving function before sharing information with its neighbors. The transformation function is applied uniformly by all agents and evolves continuously in time, characterized as
$$
p : {\mathcal{R}}^{ + } \times {\mathcal{R}}^{h} \times {\mathcal{R}}^{d} \rightarrow {\mathcal{R}}^{h} \tag{2}
$$

$$
\left( {t,x,m}\right) \mapsto y\left( t\right) = \Lambda \left( {t,x\left( t\right) ,m}\right) ,
$$
where $x = {\left\lbrack {x}_{1},\ldots ,{x}_{h}\right\rbrack }^{T} \in {\mathcal{R}}^{h}$ is the agent's true state, $y = {\left\lbrack {y}_{1},\ldots ,{y}_{h}\right\rbrack }^{T} \in {\mathcal{R}}^{h}$ is the hidden state output after the time-varying transformation (both states have equal dimensions), and the parameter set $m \in {\mathcal{R}}^{d}$ is the key of the time-varying transformation. The state output after the time-varying transformation is uniformly referred to as the hidden state in this paper. It is postulated that there exists a common system $\dot{x} = f\left( x\right)$ ; after applying the time-varying transformation, the dynamics can be expressed as $\dot{x} = f\left( y\right)$ with $y = \Lambda \left( {t,x,m}\right)$ . If $\left| {\Lambda \left( {t,x,m}\right) - x\left( t\right) }\right|$ approaches zero in finite time under the given key $m$ , the mapping is referred to as a finite time-varying transformation, and the following condition holds
$$
\left\{ \begin{array}{l} \mathop{\lim }\limits_{{t \rightarrow \Omega }}\Lambda \left( {t,x\left( t\right) ,m}\right) = x\left( t\right) , \\ \Lambda \left( {t,x\left( t\right) ,m}\right) = x\left( t\right) ,\;t \in \lbrack \Omega ,\infty ), \end{array}\right.
$$
where $\Omega$ denotes a finite time constant, indicating that the hidden state ultimately converges to the true state. The value of $\Omega$ is primarily determined by the parameters in the key $m$ .
§ C. CONTAINMENT CONTROL PROBLEM DESCRIPTION
In this paper, we focus on a single-integrator networked agent system. The dynamics of the follower agents are characterized by the following equation:
$$
{\dot{x}}_{i}\left( t\right) = {u}_{i}\left( t\right) ,\;i \in {\mathcal{V}}_{F}, \tag{3}
$$
where ${x}_{i}\left( t\right)$ and ${u}_{i}\left( t\right)$ denote the position and control input of the $i$ -th follower agent, respectively.
Additionally, the dynamics of the leader agents are governed by the following equation:
$$
{\dot{x}}_{i}\left( t\right) = 0,\;i \in {\mathcal{V}}_{L}, \tag{4}
$$
where ${x}_{i}\left( t\right)$ denotes the position of the $i$ -th leader agent. The above dynamics mean that the leader agents' positions are stationary.
Definition 3: Consider a single-integrator networked agent system comprising $m$ leader agents and $n$ follower agents. Predefined-time containment control requires that the position states of the followers converge to the convex hull defined by the leaders within a specified time $T$ . Specifically, for any given initial condition, the convergence is characterized by the satisfaction of the following equation:
$$
\mathop{\lim }\limits_{{t \rightarrow T}}\left| {{x}_{i}\left( t\right) - \mathop{\sum }\limits_{{k = 1}}^{m}{\varepsilon }_{ik}{x}_{k}\left( t\right) }\right| = 0, \tag{5}
$$
where ${\varepsilon }_{ik} \in \mathcal{R},{\varepsilon }_{ik} \geq 0$ and $\mathop{\sum }\limits_{{k = 1}}^{m}{\varepsilon }_{ik} = 1,i \in {\mathcal{V}}_{F},k \in {\mathcal{V}}_{L}$ .
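By Lemma 1, the convergence target in Definition 3 is exactly $-\mathcal{L}_F^{-1}\mathcal{L}_L x_L$, whose rows supply the weights $\varepsilon_{ik}$. A minimal numerical sketch, with a hypothetical graph and hypothetical positions:

```python
import numpy as np

# Follower sub-Laplacian and leader coupling of a hypothetical
# 2-leader, 3-follower graph; the numbers are illustrative only.
L_F = np.array([[ 2.0, -1.0, 0.0],
                [-1.0,  2.0, 0.0],
                [-1.0, -1.0, 2.0]])
L_L = np.array([[-1.0, 0.0],
                [ 0.0, -1.0],
                [ 0.0, 0.0]])

x_L = np.array([0.0, 10.0])                  # stationary leader positions (1-D)
target = -np.linalg.solve(L_F, L_L) @ x_L    # epsilon-weighted leader combinations

x_F = np.array([-5.0, 20.0, 3.0])            # current follower positions
varpi = x_F - target                         # containment error of Definition 3

# Every target is a convex combination of leader positions, hence it
# lies inside the interval [min(x_L), max(x_L)].
assert np.all(target >= x_L.min()) and np.all(target <= x_L.max())
```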
§ III. MAIN RESULTS
This section designs a decentralized finite time-varying transformation function to serve as a privacy mask and incorporates the event-triggered mechanism and predefined-time theory to enhance the performance of networked agent systems. The proposed containment controller jointly accounts for privacy preservation, communication bandwidth constraints, and convergence speed.
To safeguard the confidentiality of agents' initial state information, we introduce mutually independent functions into the process of information exchange among agents. Furthermore, the aforementioned time-varying function can be implemented as
$$
\left\{ \begin{array}{l} \mathop{\lim }\limits_{{t \rightarrow {T}_{i}}}{\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) , \\ {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) ,\;t \in \lbrack {T}_{i},\infty ). \end{array}\right. \tag{6}
$$
According to the requirements of the finite-time varying function, the received information of follower agent $j$ from agent $i$ can be designed as
$$
\left\{ \begin{array}{ll} {\mathrm{R}}_{i}^{m}\left( t\right) = {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) , & \\ {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) + {a}_{i}{t}^{2} + {b}_{i}t + {c}_{i}, & t \in \left\lbrack {0,{\Omega }_{i}}\right) , \\ {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) , & t \in \left\lbrack {{\Omega }_{i},\infty }\right) , \end{array}\right.
$$
where ${\Omega }_{i}$ satisfies
$$
\left\{ \begin{array}{l} {\Omega }_{i} = \frac{-{b}_{i} - \sqrt{{b}_{i}^{2} - 4{a}_{i}{c}_{i}}}{2{a}_{i}},\;{b}_{i} \geq 0,{c}_{i} \geq 0,\text{ if }{a}_{i} \in \left( {0,\infty }\right) , \\ {\Omega }_{i} = \frac{-{b}_{i} + \sqrt{{b}_{i}^{2} - 4{a}_{i}{c}_{i}}}{2{a}_{i}},\;{b}_{i} < 0,{c}_{i} < 0,\text{ if }{a}_{i} \in \left( {-\infty ,0}\right) , \end{array}\right.
$$
and ${a}_{i},{b}_{i},{c}_{i} \in \mathcal{R}$ . Each agent has its own distinctive encoding key, denoted as ${m}_{i} = \left\{ {{a}_{i},{b}_{i},{c}_{i}}\right\}$ , and individual encoding keys remain undisclosed to other agents.
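The masking rule above can be sketched in a few lines. The key below is hypothetical, and $\Omega_i$ is computed numerically as the positive root of the offset polynomial rather than through the closed-form branches given in the text:

```python
import numpy as np

def mask(t, x, key):
    """Finite time-varying transformation Lambda_i: the broadcast value is
    x + a*t^2 + b*t + c until the offset polynomial's positive root Omega,
    and the true state x afterwards."""
    a, b, c = key
    roots = np.roots([a, b, c])
    omega = min(r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0)
    return x + a * t**2 + b * t + c if t < omega else x

key = (-1.0, 0.0, 4.0)   # hypothetical encoding key; the offset vanishes at Omega = 2
x_true = 5.0             # the true state is frozen here purely for illustration

assert mask(0.0, x_true, key) == 9.0   # the initial state is hidden (5 + 4)
assert mask(2.0, x_true, key) == 5.0   # hidden output equals the true state at Omega
assert mask(3.0, x_true, key) == 5.0   # and stays equal afterwards
```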
Building upon the previously devised time-varying function and the acquired hidden information from neighboring agents, the predefined time containment control input for the $i$ th agent can be expressed as follows
$$
\left\{ \begin{array}{l} {u}_{i}\left( t\right) = - \left( {\rho + \delta \frac{\dot{\mu }}{\mu }}\right) \mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{\mathrm{R}}_{i}^{m}\left( t\right) - {\mathrm{R}}_{j}^{m}\left( t\right) }\right) , \\ {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) + {a}_{i}{t}^{2} + {b}_{i}t + {c}_{i},\;t \in \left\lbrack {0,{\Omega }_{i}}\right) , \\ {\Lambda }_{i}\left( {t,{x}_{i}\left( t\right) ,{m}_{i}}\right) = {x}_{i}\left( t\right) ,\;t \in \left\lbrack {{\Omega }_{i},\infty }\right) , \end{array}\right. \tag{7}
$$
where $\rho > 0$ represents the control gain, and $\mu$ denotes a time-varying scaling function, which takes the form of
$$
\mu \left( t\right) = \left\{ \begin{matrix} {\left( \frac{T}{T - t}\right) }^{h}, & t \in \lbrack 0,T), \\ 1, & t \in \lbrack T,\infty ), \end{matrix}\right.
$$
where the real number $h$ satisfies $h > 2$ .
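The scaling function and the gain term $\dot{\mu}/\mu$ it induces can be sketched as follows. Holding $\mu$ constant after $T$ (so that $\dot{\mu}/\mu = 0$) and capping the gain near $T$ are practical assumptions of this sketch, since the ideal gain is unbounded as $t \rightarrow T^{-}$:

```python
# Predefined-time scaling gain: mu blows up as t -> T^-, which is what
# enforces convergence by the deadline T.
T, h = 1.5, 3.0   # predefined time and exponent, h > 2

def mu(t):
    # (T/(T - t))^h on [0, T); held constant afterwards in this sketch.
    return (T / (T - t)) ** h if t < T else 1.0

def mu_ratio(t, cap=1e6):
    # mu_dot/mu = h/(T - t) on [0, T), zero afterwards; capped near t = T.
    return min(h / (T - t), cap) if t < T else 0.0

assert mu(0.0) == 1.0
assert mu_ratio(0.0) == h / T
assert mu(2.0) == 1.0 and mu_ratio(2.0) == 0.0
```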
Considering the practical challenges encountered in networked agent systems, which frequently involve communication limitations, the incorporation of an event-triggered mechanism can considerably reduce the utilization of communication resources. In this paper, we integrate the event-triggered mechanism into the aforementioned controller.
Assumption 2: When employing an event-triggered mechanism, it is presupposed that every agent is able to monitor its own state information in real time. Furthermore, agents disseminate state updates only when the designed event-triggering condition is fulfilled.
To ensure synchronization among all agents, we establish a triggering sequence denoted as $\left\{ {{t}_{1},{t}_{2},\ldots ,{t}_{k}}\right\}$ . This sequential arrangement guarantees that all agents update their controllers simultaneously at a unified triggering time. As a result, the control input (7) can be reformulated as
$$
{u}_{i}\left( t\right) = - \left( {\rho + \delta \frac{\dot{\mu }}{\mu }}\right) \mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{\mathrm{R}}_{i}^{m}\left( {t}_{k}\right) - {\mathrm{R}}_{j}^{m}\left( {t}_{k}\right) }\right) . \tag{8}
$$
For each agent, the state measurement error between the last triggered state and the current state is
$$
{e}_{i}^{m}\left( t\right) = {\mathrm{R}}_{i}^{m}\left( {t}_{k}\right) - {\mathrm{R}}_{i}^{m}\left( t\right) ,\;t \in \left\lbrack {{t}_{k},{t}_{k + 1}}\right) . \tag{9}
$$
Substituting the state measurement error and the controller into the agent's dynamics yields
$$
\begin{aligned} {\dot{x}}_{i}\left( t\right) &= - {\mathrm{K}}_{\rho }\mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{\mathrm{R}}_{i}^{m}\left( {t}_{k}\right) - {\mathrm{R}}_{j}^{m}\left( {t}_{k}\right) }\right) \\ &= - {\mathrm{K}}_{\rho }\mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{e}_{i}^{m}\left( t\right) + {\mathrm{R}}_{i}^{m}\left( t\right) - \left( {{e}_{j}^{m}\left( t\right) + {\mathrm{R}}_{j}^{m}\left( t\right) }\right) }\right) \\ &= - {\mathrm{K}}_{\rho }\mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{e}_{i}^{m}\left( t\right) - {e}_{j}^{m}\left( t\right) }\right) - {\mathrm{K}}_{\rho }\mathop{\sum }\limits_{{j \in {\mathcal{V}}_{L} \cup {\mathcal{V}}_{F}}}{a}_{ij}\left( {{\mathrm{R}}_{i}^{m}\left( t\right) - {\mathrm{R}}_{j}^{m}\left( t\right) }\right) ,
\end{aligned}
$$
where ${\mathrm{K}}_{\rho } = \rho + \delta \frac{\dot{\mu }}{\mu }$ , and its corresponding compact form can be represented as
$$
\begin{aligned} \dot{x}\left( t\right) &= - {\mathrm{K}}_{\rho }\mathcal{L}{\mathrm{R}}^{m}\left( t\right) - {\mathrm{K}}_{\rho }\mathcal{L}{e}^{m}\left( t\right) \\ &= - {\mathrm{K}}_{\rho }\left( {{\mathcal{L}}_{F}\left( {{\mathrm{R}}_{F}^{m}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{L}\left( {{\mathrm{R}}_{L}^{m}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right) }\right) ,
\end{aligned}
$$
where $x\left( t\right) = {\mathbf{{col}}}_{i}^{n + m}\left\lbrack {{x}_{i}\left( t\right) }\right\rbrack ,{\mathrm{R}}_{F}^{m}\left( t\right) = {\mathbf{{col}}}_{i}^{n}\left\lbrack {{\mathrm{R}}_{Fi}^{m}\left( t\right) }\right\rbrack$ , ${\mathrm{R}}_{L}^{m}\left( t\right) = {\operatorname{col}}_{i}^{m}\left\lbrack {{\mathrm{R}}_{Li}^{m}\left( t\right) }\right\rbrack ,{e}_{L}^{m}\left( t\right) = {\operatorname{col}}_{i}^{m}\left\lbrack {{e}_{Li}^{m}\left( t\right) }\right\rbrack$ and ${e}_{F}^{m}\left( t\right) =$ ${\mathbf{{col}}}_{i}^{n}\left\lbrack {{e}_{Fi}^{m}\left( t\right) }\right\rbrack$ . Besides, let $A = {\mathbf{{col}}}_{i}^{n + m}\left\lbrack {a}_{i}\right\rbrack ,B = {\mathbf{{col}}}_{i}^{n + m}\left\lbrack {b}_{i}\right\rbrack$ and $C = {\mathbf{{col}}}_{i}^{n + m}\left\lbrack {c}_{i}\right\rbrack$ .
Accordingly, the whole closed-loop error system is
$$
\left\{ \begin{array}{l} \dot{x}\left( t\right) = - {\mathrm{K}}_{\rho }\mathcal{L}{\mathrm{R}}^{m}\left( t\right) - {\mathrm{K}}_{\rho }\mathcal{L}{e}^{m}\left( t\right) , \\ {\mathrm{R}}^{m}\left( t\right) = x\left( t\right) + m\left( t\right) , \end{array}\right. \tag{10}
$$
where
$$
m\left( t\right) = \left\{ \begin{array}{l} A{t}^{2} + Bt + C,\;t \in \left\lbrack {0,{T}^{1}}\right) , \\ {A}_{{m}_{1}}{t}^{2} + {B}_{{m}_{1}}t + C,\;t \in \left\lbrack {{T}^{1},{T}^{2}}\right) , \\ \vdots \\ {A}_{{m}_{1}\ldots {m}_{N - 1}}{t}^{2} + {B}_{{m}_{1}\ldots {m}_{N - 1}}t + C,\;t \in \left\lbrack {{T}^{N - 1},{T}^{N}}\right) , \\ 0,\;t \in \left\lbrack {{T}^{N},\infty }\right) . \end{array}\right.
$$
To address the predefined time privacy-preserving containment control under the event-triggered mechanism, we design the event-triggering condition (ETC) for the networked agent systems as
$$
{t}_{k + 1} = \inf \left\{ {t > {t}_{k} : \begin{Vmatrix}{{e}^{m}\left( t\right) }\end{Vmatrix} \geq \left( {1 - \varepsilon }\right) \frac{{\mathrm{K}}_{\rho }^{\lambda }}{{\mathrm{K}}_{\rho }}\frac{\parallel \varpi \left( t\right) \parallel }{\parallel \mathcal{L}\parallel }}\right\} , \tag{11}
$$
where ${\mathrm{K}}_{\rho } = \rho + \delta \frac{\dot{\mu }}{\mu }$ , ${\mathrm{K}}_{\rho }^{\lambda } = \rho {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) + \delta \frac{\dot{\mu }}{\mu }$ , $\varepsilon \in \left( {0,1}\right)$ , and ${\lambda }_{2}\left( {\mathcal{L}}_{F}\right)$ is the second smallest eigenvalue of the sub-Laplacian matrix ${\mathcal{L}}_{F}$ . Upon the occurrence of a triggering event, all agents discard their previously sampled state, sample their current state to update their controllers, and transmit the newly sampled state to their neighboring agents. Throughout the inter-event period, the control inputs remain constant until the next triggering instant, at which the event-triggering condition is violated.
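The rule (11) amounts to comparing the measurement error against a state-dependent threshold. A sketch of that comparison with hypothetical gains (a real implementation would evaluate it continuously along the closed-loop trajectory):

```python
import numpy as np

def should_trigger(e_m, varpi, L_norm, K_rho, K_rho_lam, eps):
    """Triggering test in the shape of (11): sample and broadcast once the
    measurement error reaches the state-dependent threshold."""
    threshold = (1.0 - eps) * (K_rho_lam / K_rho) * np.linalg.norm(varpi) / L_norm
    return np.linalg.norm(e_m) >= threshold

varpi = np.array([1.0, -2.0, 0.5])   # hypothetical containment error
# Small measurement error: hold the last sample; large error: trigger.
assert not should_trigger(np.zeros(3), varpi,
                          L_norm=3.0, K_rho=5.0, K_rho_lam=2.0, eps=0.5)
assert should_trigger(np.full(3, 5.0), varpi,
                      L_norm=3.0, K_rho=5.0, K_rho_lam=2.0, eps=0.5)
```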
Theorem 1: Under the event-triggering condition (11) and control input (8), with the ETC parameter satisfying $\varepsilon \in \left( {0,1}\right)$ , the predefined-time privacy-preserving containment control of the networked agent system with graph $\mathcal{G}$ is achieved.
Proof: The proof of Theorem 1 consists of two parts: convergence analysis and privacy analysis.
(I) Convergence analysis: The vector $x\left( t\right)$ can be divided into sub-vector ${x}_{F}\left( t\right)$ and ${x}_{L}\left( t\right)$ . Based on Definition 3, we define the containment error as $\varpi \left( t\right) = {x}_{F}\left( t\right) -$ $\left( {-{\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{x}_{L}\left( t\right) }\right)$ , and Lyapunov function is adopted as
$$
V\left( t\right) = \varpi {\left( t\right) }^{T}\varpi \left( t\right) . \tag{12}
$$
Noting the leader agents' dynamics (4), it follows that
$$
\dot{\varpi }\left( t\right) = {\dot{x}}_{F}\left( t\right) - \left( {-{\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{\dot{x}}_{L}\left( t\right) }\right) = {\dot{x}}_{F}\left( t\right) .
$$
Taking the derivative of the Lyapunov function $V\left( t\right)$ , one obtains the following expression
$$
\begin{aligned} \dot{V}\left( t\right) &= \varpi {\left( t\right) }^{T}\dot{\varpi }\left( t\right) = \varpi {\left( t\right) }^{T}{\dot{x}}_{F}\left( t\right) \\ &= \varpi {\left( t\right) }^{T}\left( {-{\mathrm{K}}_{\rho }\left( {{\mathcal{L}}_{F}\left( {{\mathrm{R}}_{F}^{m}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{L}\left( {{\mathrm{R}}_{L}^{m}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right) }\right) }\right) \\ &= - \rho \varpi {\left( t\right) }^{T}\left( {{\mathcal{L}}_{F}\left( {{\mathrm{R}}_{F}^{m}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{L}\left( {{\mathrm{R}}_{L}^{m}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right) }\right) \\ &\quad - \delta \frac{\dot{\mu }}{\mu }\varpi {\left( t\right) }^{T}\left( {{\mathcal{L}}_{F}\left( {{\mathrm{R}}_{F}^{m}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{L}\left( {{\mathrm{R}}_{L}^{m}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right) }\right) . \end{aligned}
$$
To satisfy the privacy-preserving requirement of the designed time-varying transformation function, it is essential to ensure that ${T}^{N}$ , the moment at which the last time-varying function converges to its corresponding true state, is less than $T$ . Notably, the value of $m\left( t\right)$ decreases monotonically as $t$ increases in the interval $t \in \left\lbrack {0,{T}^{N}}\right)$ , and it attains zero for $t \in \left\lbrack {{T}^{N},T}\right)$ . This further yields $\mathop{\lim }\limits_{{t \rightarrow {T}^{N}}}{\mathrm{R}}_{F}^{m}\left( t\right) = {x}_{F}\left( t\right)$ and $\mathop{\lim }\limits_{{t \rightarrow {T}^{N}}}{\mathrm{R}}_{L}^{m}\left( t\right) = {x}_{L}\left( t\right)$ . Based on Lemma 1 in [11], it follows that
$$
\begin{aligned} & {\mathcal{L}}_{F}\left( {{x}_{F}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{L}\left( {{x}_{L}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right) \\ &= {\mathcal{L}}_{F}\left( {\left( {{x}_{F}\left( t\right) + {e}_{F}^{m}\left( t\right) }\right) + {\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}\left( {{x}_{L}\left( t\right) + {e}_{L}^{m}\left( t\right) }\right) }\right) \\ &= {\mathcal{L}}_{F}\left( {{x}_{F}\left( t\right) + {\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{x}_{L}\left( t\right) }\right) + {\mathcal{L}}_{F}{e}_{F}^{m}\left( t\right) + {\mathcal{L}}_{L}{e}_{L}^{m}\left( t\right) \\ &= {\mathcal{L}}_{F}\varpi \left( t\right) + \mathcal{L}{e}^{m}\left( t\right) . \end{aligned}
$$
It is noted that ${\mathcal{L}}_{F} \in {\mathcal{R}}^{n \times n}$ denotes the sub-Laplacian matrix among the follower agents; we can obtain $\varpi {\left( t\right) }^{T}{\mathcal{L}}_{F}\varpi \left( t\right) \geq {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) \varpi {\left( t\right) }^{T}\varpi \left( t\right)$ , which yields
$$
\begin{aligned} \dot{V}\left( t\right) &\leq - {\mathrm{K}}_{\rho }^{\lambda }V\left( t\right) - {\mathrm{K}}_{\rho }\varpi {\left( t\right) }^{T}\left( {{\mathcal{L}}_{F}{e}_{F}^{m}\left( t\right) + {\mathcal{L}}_{L}{e}_{L}^{m}\left( t\right) }\right) \\ &= - \varepsilon {\mathrm{K}}_{\rho }^{\lambda }V\left( t\right) - \left( {1 - \varepsilon }\right) {\mathrm{K}}_{\rho }^{\lambda }V\left( t\right) - {\mathrm{K}}_{\rho }\varpi {\left( t\right) }^{T}\mathcal{L}{e}^{m}\left( t\right) \\ &\leq - \varepsilon {\mathrm{K}}_{\rho }^{\lambda }V\left( t\right) - \left( {1 - \varepsilon }\right) {\mathrm{K}}_{\rho }^{\lambda }\parallel \varpi {\parallel }^{2} + {\mathrm{K}}_{\rho }\parallel \varpi \parallel \begin{Vmatrix}{\mathcal{L}{e}^{m}}\end{Vmatrix}. \end{aligned}
$$
Considering the designed event-triggering condition (11) and the condition $\varepsilon \in \left( {0,1}\right)$ , it concludes
$$
{\mathrm{K}}_{\rho }\begin{Vmatrix}{\mathcal{L}{e}^{m}\left( t\right) }\end{Vmatrix} \leq \left( {1 - \varepsilon }\right) {\mathrm{K}}_{\rho }^{\lambda }\parallel \varpi \left( t\right) \parallel .
$$
Accordingly, since $\delta \geq 1$ , it yields
$$
\dot{V}\left( t\right) \leq - \left( {\rho {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) + \frac{\dot{\mu }}{\mu }}\right) \varpi {\left( t\right) }^{T}\varpi \left( t\right) = - \rho {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) V - \frac{\dot{\mu }}{\mu }V.
$$
Fig. 1. The communication topology among twelve agents.
According to Lemma 1 in [11], one has
$$
V\left( t\right) \leq \mu {\left( t\right) }^{-2}\exp \left( {-\rho {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) \left( {t - {T}^{N}}\right) }\right) V\left( {T}^{N}\right) . \tag{13}
$$
Then $\parallel \varpi \left( t\right) \parallel \leq \mu {\left( t\right) }^{-1}\exp \left( {-\rho {\lambda }_{2}\left( {\mathcal{L}}_{F}\right) \left( {t - {T}^{N}}\right) }\right) \begin{Vmatrix}{\varpi \left( {T}^{N}\right) }\end{Vmatrix}$ . Note that $\mathop{\lim }\limits_{{t \rightarrow {T}^{ - }}}\mu {\left( t\right) }^{-1} = 0$ , which yields $\mathop{\lim }\limits_{{t \rightarrow {T}^{ - }}}\parallel \varpi \left( t\right) \parallel = 0$ . That is, as $t \rightarrow {T}^{ - }$ , the condition ${x}_{F}\left( t\right) - \left( {-{\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{x}_{L}\left( t\right) }\right) = 0$ holds. Based on equation (46) of [19] and Definitions 2-3, $- {\mathcal{L}}_{F}^{-1}{\mathcal{L}}_{L}{x}_{L}\left( t\right)$ is the convex-hull signal formed by the leaders; hence $\varpi \left( t\right) = 0$ implies that all followers converge into the convex hull formed by the leaders. Therefore, containment control of the networked agent system is achieved within the predefined time $T$ . Since the finite time-varying transformation is only applied on the interval $\lbrack 0,T)$ , the predefined-time containment problem reduces to the general case discussed in [11] for $t \in \lbrack T,\infty )$ . Interested readers are referred to Theorem 1 in [11] for a detailed proof.
(II) Privacy analysis: Consider a scenario where the dynamics $f\left( \cdot \right)$ of all agents are widely known and each agent has access to the hidden output states ${\mathrm{R}}_{i}^{m}\left( t\right)$ of its neighboring agents, while the true states ${x}_{i}\left( t\right)$ and the encoding keys $\left\{ {{a}_{i},{b}_{i},{c}_{i}}\right\}$ are private information exclusive to each agent. For an honest-but-curious agent, the accessible information includes the unsigned graph $\mathcal{G}$ , its own state, its set of neighboring agents, and the hidden states of itself and its neighbors. After the finite time-varying transformation conceals agent $i$ 's initial state, the resulting hidden output ${\mathrm{R}}_{i}^{m}\left( t\right)$ bears no resemblance to the true initial value ${x}_{i}\left( 0\right)$ . As a result, any information set acquired by an honest-but-curious agent proves futile in determining agent $i$ 's true initial state, and the true initial state cannot be reconstructed by employing the findings presented in [23]. Importantly, by the same argument, even external eavesdroppers are unable to obtain the true initial state. Thus, the initial state remains inaccessible to all other parties involved.
§ IV. SIMULATION
In this section, several numerical simulations are conducted to verify the effectiveness of the theoretical analysis. The simulation considers a networked agent system comprising 12 agents: six followers and six leaders. Fig. 1 displays the communication topology among the agents. The numerical simulations are performed in 2-D space. The initial position states of all agents are set as ${x}^{1}\left( 0\right) = {\left\lbrack -{10},0,{10},{10},0, - {10}, - {30}, - 5,{20},{30},5, - {15}\right\rbrack }^{T}$ and ${x}^{2}\left( 0\right) = {\left\lbrack 5,5,5, - 5, - 5, - 5,5,{20},{25}, - {10}, - {15}, - {20}\right\rbrack }^{T}$ . The parameter $\varepsilon$ is set to 0.5, and the predefined time is $T = {1.5}\mathrm{\;s}$ . The encoding keys are selected as
$$
A = {\left\lbrack -5, - 9, - 5,8, - 3,6, - 4,5,6, - 4,5, - 3\right\rbrack }^{T},
$$

$$
B = {\left\lbrack 2,4,3, - 4,1, - 3,2, - 1, - 3,2, - 1,1\right\rbrack }^{T},
$$

$$
C = {\left\lbrack 3,4,1, - 3,2, - 2,1, - 3, - 2,1, - 3,2\right\rbrack }^{T}.
$$
Fig. 2. The true and masked states of all agents.
Fig. 3. The control input of the follower agents.
The simulation results are depicted in Figs. 2-5. The trajectory of the agents in the ${x}^{1}$ direction is illustrated in Fig. 2, with the subfigure highlighting the masked trajectories of all agents. This indicates that the proposed method effectively preserves the privacy of the agents' initial states and achieves convergence within the predefined time of 1.5 s. Fig. 3 presents the control input trajectories of all follower agents, where the abrupt changes are attributed to the event-triggered mechanism. Fig. 4 demonstrates the fulfillment of the event-triggering conditions: when the designed boundary threshold is exceeded, the agents' states are sampled and updated. Fig. 5 shows that all followers successfully move from their initial positions into the convex hull formed by the fixed leaders, achieving privacy-preserving event-triggered predefined-time containment control for the networked agent system.
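The effect shown in Fig. 4 can be mimicked with a scalar sketch: the broadcast sample is refreshed only when the state measurement error exceeds a boundary threshold, so far fewer updates occur than time steps. Both the dynamics and the exponentially decaying threshold below are stand-ins for illustration, not the paper's designed laws:

```python
import numpy as np

dt, T = 0.001, 1.5
x, x_hat = 10.0, 10.0          # true state and last-broadcast sample
triggers = 0
for k in range(int(T / dt)):
    t = k * dt
    x += dt * (-2.0 * x)                 # stand-in stable dynamics
    threshold = 0.5 * np.exp(-3.0 * t)   # assumed decaying boundary threshold
    if abs(x - x_hat) > threshold:       # event-triggering condition
        x_hat = x                        # sample and broadcast; error resets
        triggers += 1
print(triggers, int(T / dt))   # far fewer events than the 1500 time steps
```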
Fig. 4. The trajectory of the state measurement error and boundary threshold.
Fig. 5. The trajectory of all agents in the 2-D plane under designed containment control input. (Square markers represent the followers, and circular markers represent the leaders. Leaders form a rectangular convex hull.)
§ V. CONCLUSION
This paper has addressed the privacy-preserving event-triggered predefined-time containment control problem for networked agent systems. A novel containment control scheme has been developed, effectively integrating privacy protection with event-triggered mechanisms. This integration has optimized network efficiency by minimizing unnecessary data transmission while ensuring robust containment within a specified time frame. The proposed control scheme has successfully ensured the confidentiality of agents' information through output masking, thereby maintaining both privacy and control accuracy. The effectiveness of the proposed scheme has been verified through simulation results. It is important to note that this study has focused on static leaders, and future research will extend the investigation to address containment control problems under dynamic leaders.
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/4T963GENPI/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,181 @@
# Unsupervised Feature Fusion Model for Marine Raft Aquaculture Semantic Segmentation Based on SAR Images

${1}^{\text{st }}$ Mengmeng Li

School of Information Science and Engineering

Dalian Polytechnic University

Dalian, China

220520854000601@xy.dlpu.edu.cn

${2}^{\text{nd }}$ Xinzhe Wang

School of Information Science and Engineering

Dalian Polytechnic University

Dalian, China

wxzagm@dlpu.edu.cn

${3}^{\text{rd }}$ Jianchao Fan *

School of Control Science and Engineering

Dalian University of Technology

Dalian, China

fjchao@dlut.edu.cn
Abstract-Marine aquaculture semantic segmentation provides a scientific basis for marine regulation and plays an important role in marine ecological protection and management. Currently, most high-performance marine aquaculture segmentation networks are trained by supervised learning. This approach requires collecting a large number of accurate manually labelled samples for training, but such labelled samples are difficult to obtain. To solve this problem, this paper proposes an unsupervised feature fusion model (UFFM) for marine raft aquaculture semantic segmentation. Firstly, a pseudo-label generator is designed to label the training samples, and a coarse mask is generated using saliency feature clustering. The training samples with pseudo-labels are input into a multilevel feature fusion module, which further extracts and continuously refines the shapes and categories of the objects under the guidance of a cross-entropy loss. The pseudo-labels are optimised under continuous iteration to improve the model's segmentation performance. Comparison experiments on the GF-3 dataset demonstrate the effectiveness of UFFM.
Index Terms-unsupervised learning, pseudo-label, SAR images, semantic segmentation
## I. INTRODUCTION
China has witnessed rapid growth in the scale and benefits of marine aquaculture development in recent years [1]. However, while the marine aquaculture industry has made significant progress, it also faces problems such as pollution around aquaculture waters, irrational layout of aquaculture, and excessive density of offshore aquaculture [2]. Synthetic aperture radar (SAR) has the advantage of being all-weather and does not need to consider factors such as cloud cover, making it an essential tool for monitoring marine aquaculture. The backscattering of mariculture raft targets in SAR images is much stronger than that of the seawater surface, which gives the aquaculture rafts high contrast against the seawater background [3]. Researchers have adopted deep learning techniques to design various mariculture semantic segmentation methods to efficiently and accurately extract mariculture information [4].
However, existing neural network models usually rely on a large amount of manually labeled data for training to obtain high-accuracy results. This approach faces two main problems: 1) the cost of obtaining high-quality manually labeled data is extremely high in complex scenarios and when dealing with massive remote sensing data, leaving a large amount of remote sensing data underutilized; 2) the reliance on manual labeling as the only learning signal leads to limited feature learning. Several studies have proposed unsupervised methods for extracting information on marine aquaculture to address these challenges. Fan et al. [3] exploited the multi-source characteristics of floating rafts and combined neurodynamic optimization with a collective multi-kernel fuzzy C-means algorithm for unsupervised aquaculture classification. Wang et al. [5] designed an incremental dual unsupervised deep learning model based on the idea of alternately and iteratively optimizing pseudo-labels and segmentation results, which maintains and strengthens the edge semantic information of the pseudo-labels and effectively reduces the influence of coherent speckle noise in SAR images. Subsequently, Zhou et al. [6] constructed an unsupervised semantic segmentation network for mariculture based on mutual information theory and a superpixel algorithm, which improves the continuity and spatial consistency of mariculture target extraction through global feature learning, pseudo-label generation, and optimization with a mutual information loss. However, the above unsupervised deep learning models mainly rely on single-area training data, which makes them difficult to generalize to intelligent image interpretation in wide-area and complex scenes.
---
This work was supported in part by the National Natural Science Foundation of China under Grant 42076184, Grant 41876109, and Grant 41706195; in part by the National Key Research and Development Program of China under Grant 2021YFC2801000; in part by the National High Resolution Special Research under Grant 41-Y30F07-9001-20/22; in part by the Fundamental Research Funds for the Central Universities under Grant DUT23RC(3)050; and in part by the Dalian High Level Talent Innovation Support Plan under Grant 2021RD04. (Corresponding author: Jianchao Fan.)
---
With the emergence of the transformer [7], self-supervised representation learning models can exploit unlabeled remote sensing big data to address regional feature differences. A self-supervised transformer network can learn spatial features from a large amount of remote sensing data by constructing a pretext task and pre-training a vision transformer model, which is then applied to a variety of downstream tasks by fine-tuning, e.g., change detection [8], classification [9], target detection [10], and semantic segmentation [11]. Fan et al. [12] established a self-supervised feature fusion transformer model that obtains the essential features of mariculture from a large number of unlabeled samples; by introducing a contrastive loss and a mask loss, it attends to the global and local features of aquaculture simultaneously, mitigating the problems of mutual interference among multiple targets and inter-class data imbalance, and achieving accurate segmentation of mariculture. However, although the self-supervised transformer model can rely on a large amount of unlabeled floating raft aquaculture data for information extraction in a single sea area, it still needs high-quality labeled data for fine-tuning the downstream segmentation network.
To solve the above problems, this paper applies the saliency information obtained from self-supervised representation learning to the downstream segmentation network and combines it with a multi-stage feature fusion module to further enhance the network's semantic segmentation performance. Specifically, a pseudo-label generator is first designed to generate saliency pseudo-labels. Then, the semantic segmentation results output by the multilevel feature fusion module are compared against the pseudo-labels with a cross-entropy loss, which constrains the network and propagates parameter updates through it. The pseudo-labels are optimised through continuous iteration to further improve network segmentation performance.
## II. RELATED WORK
## A. Self-supervised feature learning
Self-supervised learning mainly utilizes auxiliary tasks to mine supervised information from large-scale unlabeled data and trains the network with this constructed supervision to learn representations valuable for downstream tasks. Common auxiliary tasks include contrastive learning, generative learning, and contrastive-generative methods that design learning paradigms based on data distribution characteristics to obtain better feature representations. However, these methods mainly focus on image classification tasks and are thus typically designed to generate a single global vector from each input image. This leads to poor downstream results on densely predicted segmentation tasks, requiring models fine-tuned with high-quality ground-truth labels. The emergence of the self-supervised transformer has made it possible to extract dense feature vectors that reveal hidden semantic relationships in images, without requiring specialized dense contrastive learning methods. In this paper, inspired by DINO [13], the saliency features of the upstream-trained model generate pseudo-labels for the training data to fine-tune the downstream segmentation network, constructing a fully unsupervised semantic segmentation model.
## B. Unsupervised semantic segmentation
Unsupervised semantic segmentation aims at class prediction for each pixel in an image without manual labels. Ji et al. [14] proposed invariant information clustering (IIC), which ensures cross-view consistency by maximising the mutual information between neighbouring pixels of different views. Cho et al. [15] constructed PiCIE to learn invariance to photometric variations and equivariance to geometric variations, using geometric consistency as an inductive bias; a limitation of this approach is that it only works on the MS COCO dataset, which does not distinguish between foreground and background classes. MaskContrast [16] first generates object masks using a DINO pre-trained ViT and then learns pixel-level embeddings with a contrastive loss; however, the method can only be applied to saliency datasets. For the multi-stage paradigm, researchers have tried to utilise class activation maps (CAM) [17] to obtain initial pixel-level pseudo-labels, which are then refined using a teacher-student network. However, this loses features during training and decreases segmentation accuracy. In this paper, to solve the above problems, Grad-CAM [18] is introduced in the multi-stage setting to generate pseudo-labels, and segmentation performance is improved by multi-scale feature fusion.
## III. METHOD
## A. Overall framework
In the upstream task, a large amount of unlabeled marine aquaculture data is used to pre-train the ViT [13] from scratch, yielding the pre-trained weights ${\theta }_{t}$ that initialize the downstream feature extraction network; using these pre-training weights accelerates the convergence of the downstream segmentation network and is crucial for the model's extraction of salient features. The overall architecture designed for the downstream segmentation task is shown in Fig. 1. An input unlabeled marine aquaculture image, augmented by linear stretching and random rotation, is fed into the network to obtain the aquaculture segmentation results. The designed network has two branches: a saliency pseudo-label generation branch, presented in III-B, and a multi-layer transformer feature fusion segmentation branch, presented in III-C. In network training, the supervisory loss ${\mathcal{L}}_{s}$ is the pixel-wise cross-entropy loss between the pseudo-label and the prediction:
$$
{\mathcal{L}}_{s} = \frac{1}{N}\mathop{\sum }\limits_{{i = 0}}^{{N - 1}}\text{CrossEntropy}\left( {{\widetilde{y}}_{i},{y}_{i}}\right) \tag{1}
$$

Fig. 1. Overview model of UFFM. (a) Obtaining saliency pseudo-label: Input the multi-head self-attention mechanism of the last layer feature map in the transformer block into Grad-CAM to obtain saliency patch features and generate saliency pseudo-label. (b) Obtaining segmentation results: The semantic information is enhanced using a multilayer transformer with PPM, and the semantic segmentation results with pseudo-labels are output by backpropagation after the loss computation. After continuous iterative updates, the network segmentation performance is improved.
where $N$ denotes the number of pixels in the image $x \in$ ${\mathbb{R}}^{H \times W \times 3}$ and ${y}_{i} \in {\mathbb{R}}^{C}$ is the network’s prediction probability for pixel $i$ , where $C$ is the number of predicted classes and ${\widetilde{y}}_{i} \in {\mathbb{R}}^{C}$ is the labelling class of pixel $i$ in the pseudo-label.
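Because non-salient pixels carry the value 255 in the pseudo-label, the loss of (1) is in practice a pixel-wise cross-entropy that skips that ignore value. A minimal NumPy sketch (a real implementation would use a framework loss with an ignore index; the array shapes here are illustrative):

```python
import numpy as np

def pseudo_label_ce(probs, pseudo, ignore=255):
    """probs: (H, W, C) softmax outputs; pseudo: (H, W) labels, 255 = ignored.
    Returns the mean cross-entropy over the non-ignored pixels."""
    mask = pseudo != ignore
    picked = probs[mask, pseudo[mask]]   # probability of each labelled class
    return float(-np.mean(np.log(picked + 1e-12)))

probs = np.full((2, 2, 2), 0.5)          # uniform two-class predictions
pseudo = np.array([[0, 255], [0, 0]])    # one pixel carries no label
print(pseudo_label_ce(probs, pseudo))    # ln 2 over the 3 labelled pixels
```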
During network training, the loss gradients are propagated back to the feature extraction network; in particular, the weights of the two branches are shared and updated simultaneously. Through continuous iteration of the network, the pseudo-labels are updated, thereby improving the segmentation performance of the network.
## B. Saliency pseudo-label generation
In unsupervised tasks, the design of pseudo-labels is crucial. A simple approach is to apply confidence thresholds and output the results directly as pseudo-labels. However, this approach is unsatisfactory on complex data and produces poor results. To solve this problem, a class-activation-map variant, Grad-CAM, is used in this paper to generate saliency-discriminative pseudo-labels by stepwise refinement of the target localisation. Given an image $x$ , a sequence of patch embeddings ${x}_{\text{patch }} \in {\mathbb{R}}^{P \times D}$ is generated, where $P$ is the number of patches and $D$ is the output dimension. Then, ${x}_{CLS} \in {\mathbb{R}}^{1 \times D}$ and the position embedding $\mathrm{P}$ are added to the concatenated inputs. Therefore, the input sequence ${z}_{0}$ of the ViT is described as:
$$
{z}_{0} = \left\lbrack {{x}_{\text{patch }},{x}_{CLS}}\right\rbrack + \mathrm{P} \tag{2}
$$
After that, the last layer of features is obtained through multiple transformer encoder layers, and the saliency feature map is computed using Grad-CAM. The $k$ patches whose embedded image patch features have the largest absolute gradient sums are selected as the salient patches, and finally a binarisation is performed, marking the $k$ salient patches as 0 and the rest as 255. The generated saliency pseudo-label $\widetilde{y}$ is written as:
$$
{g}_{k} = \operatorname{Sum}\left| \frac{\partial L\left( {f\left( x\right) , y}\right) }{\partial {x}_{\text{patch }}^{k}}\right| \tag{3}
$$
$$
\widetilde{y} = \left\{ \begin{array}{l} 0,\text{ if }{g}_{k}\text{ in topk }\mathrm{G} \\ {255},\text{ otherwise } \end{array}\right. \tag{4}
$$
where $\mathrm{G} = \left\{ {{g}_{1},{g}_{2},\ldots ,{g}_{K}}\right\}$ is the saliency map of the patches ${x}_{\text{patch }} = \left\{ {{x}_{\text{patch }}^{1},\ldots ,{x}_{\text{patch }}^{K}}\right\}$ and topk is the set of selected salient patches.
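Equations (3)-(4) amount to ranking patches by their summed gradient magnitudes and binarising the top $k$. A sketch with made-up gradients (in the model, ${g}_{k}$ comes from Grad-CAM on the last attention layer):

```python
import numpy as np

def saliency_pseudo_label(patch_grads, k):
    """patch_grads: (P, D) gradients of the loss w.r.t. patch embeddings.
    Marks the k patches with the largest summed |gradient| as salient (0)
    and the rest as background (255), following Eqs. (3)-(4)."""
    g = np.abs(patch_grads).sum(axis=1)    # Eq. (3): g_k per patch
    top = np.argsort(g)[-k:]               # indices of the top-k patches
    label = np.full(g.shape, 255, dtype=np.uint8)
    label[top] = 0                         # Eq. (4): binarise
    return label

grads = np.array([[0.1, 0.2], [2.0, 1.0], [0.0, 0.1], [1.5, 0.5]])
print(saliency_pseudo_label(grads, k=2))   # patches 1 and 3 marked salient
```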
## C. Multi-stage feature fusion
The segmentation decoder consists of a pyramid pooling module (PPM) and a multi-scale feature pyramid, enabling the network to better capture contextual semantic information. Firstly, three feature maps $\left\{ {{V}_{2},{V}_{3},{V}_{4}}\right\}$ are generated by the transformer encoder; since the chosen model is the base ViT, the output feature vectors are of the same size. The last lateral feature ${L}_{5}$ is generated from the last feature map ${V}_{5}$ through the PPM module. The FPN sub-network then follows a top-down pathway to obtain ${\mathrm{F}}_{i} = {\mathrm{L}}_{i} + {\mathrm{{UP}}}_{2}\left( {\mathrm{\;F}}_{i + 1}\right), i = \{ 2,3,4\}$ , where UP denotes bilinear upsampling. The FPN then applies the convolutional blocks ${h}_{i}$ to obtain the outputs ${\mathrm{P}}_{i}$ . The final feature fusion of the FPN outputs requires bilinear upsampling of each ${P}_{i}$ so that they share the same spatial size; they are then concatenated along the channel dimension and fused by the convolutional unit block $h$ :
$$
\mathrm{Z} = h\left( \left\lbrack {{P}_{2};{\mathrm{{UP}}}_{2}\left( {P}_{3}\right) ;{\mathrm{{UP}}}_{4}\left( {P}_{4}\right) ;{\mathrm{{UP}}}_{8}\left( {P}_{5}\right) }\right\rbrack \right) \tag{5}
$$

Fig. 2. Visual comparison of raft marine aquaculture segmentation on the GF-3 dataset. (a) original images. (b) ground-truth labels. (c) IIC. (d) PiCIE. (e) IDUDL. (f) UFFM.
The fused feature $\mathrm{Z}$ is then subjected to $1 \times 1$ convolution and $4 \times$ bilinear upsampling to obtain the final prediction $y$ .
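The top-down fusion ${\mathrm{F}}_{i} = {\mathrm{L}}_{i} + {\mathrm{{UP}}}_{2}({\mathrm{F}}_{i+1})$ and the concatenation of (5) can be sketched as follows, with nearest-neighbour upsampling standing in for the bilinear operator and identity maps for the convolutional blocks ${h}_{i}$ (toy single-channel feature maps, not the model's real tensors):

```python
import numpy as np

def up2(f):
    """2x nearest-neighbour upsampling (bilinear in the real decoder)."""
    return f.repeat(2, axis=0).repeat(2, axis=1)

# Lateral features L2..L5 at progressively coarser resolutions (toy sizes)
L = {i: np.random.rand(2 ** (5 - i) * 2, 2 ** (5 - i) * 2) for i in range(2, 6)}

F = {5: L[5]}
for i in (4, 3, 2):                      # top-down pathway: F_i = L_i + Up2(F_{i+1})
    F[i] = L[i] + up2(F[i + 1])

# Eq. (5): bring every level to the finest grid and concatenate along channels
Z = np.stack([F[2], up2(F[3]), up2(up2(F[4])), up2(up2(up2(F[5])))], axis=-1)
print(Z.shape)                           # one fused map with four channels
```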
## IV. EXPERIMENTAL RESULTS
## A. Experiment Setup and Datasets
All experiments are conducted in PyTorch 1.8.1, using an Intel Xeon Platinum 8255C with a clock speed of 2.5 GHz and an Nvidia GeForce RTX 3090. The data augmentation strategy is consistent with DINO [13]. A ViT-S/16 model [7] trained with the self-distillation loss is used to extract features from the patches. The learning rate is set to 0.05, and a stochastic gradient descent (SGD) optimiser with a momentum of 0.9 is used. The encoder uses ViT as the main network. The decoder uses the UPerHead architecture, receiving features from all levels of the encoder and generating the final prediction through pooling and upsampling operations, while the auxiliary head uses the FCNHead architecture, receiving features from specific encoder layers.
The study area is located in the seawater aquaculture zone of Changhai County, China. The remote sensing images were preprocessed with radiometric calibration and geographic correction, and images with the horizontal-horizontal (HH) polarisation mode are selected as the experimental data. The images are subsequently cropped to ${512} \times {512}$ pixels. The self-supervised pre-training set of the GF-3 dataset contains more than 13,000 images, the downstream training set contains 369 images, and the test set contains 160.
## B. Evaluation Metrics
In SAR images, coherent speckle noise strongly affects raft aquaculture targets, producing a large number of isolated noise points that hinder accurate extraction. Therefore, in this paper, multiple evaluation metrics are used to evaluate the segmentation results. The metrics follow IDUDL and comprise mean intersection over union (${mIoU}$), Kappa coefficient ($K$), overall accuracy (${OA}$), precision ($P$), recall ($R$), and F1 score (${F}_{1}$).
Here, ${mIoU}$ evaluates the average overlap between the predicted and ground-truth pixel categories, which enables a better evaluation of the semantic continuity and consistency of the model predictions. $K$ accounts for the effect of chance agreement when evaluating consistency. ${OA}$ evaluates the proportion of correctly predicted pixels among all pixels, reflecting global accuracy. $P$ denotes the proportion of samples predicted as rafts that are actually rafts. $R$ represents the ability of the model to find all positive samples. ${F}_{1}$ balances $P$ and $R$ .
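For the two-class (raft vs. sea) case, all six metrics follow from a single $2 \times 2$ confusion matrix; a sketch under that assumption, with a made-up matrix:

```python
import numpy as np

def metrics(cm):
    """cm[i, j]: pixels of true class i predicted as class j; class 1 = raft.
    Returns (mIoU, Kappa, OA, P, R, F1)."""
    tn, fp, fn, tp = cm[0, 0], cm[0, 1], cm[1, 0], cm[1, 1]
    n = cm.sum()
    oa = (tp + tn) / n                       # overall accuracy
    p = tp / (tp + fp)                       # precision
    r = tp / (tp + fn)                       # recall
    f1 = 2 * p * r / (p + r)
    iou = np.array([tn / (tn + fp + fn), tp / (tp + fp + fn)])
    miou = iou.mean()
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return miou, kappa, oa, p, r, f1

cm = np.array([[80.0, 5.0], [10.0, 40.0]])   # hypothetical pixel counts
print([round(v, 4) for v in metrics(cm)])
```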
TABLE I
QUANTITATIVE COMPARISON OF THE PROPOSED METHOD WITH OTHER UNSUPERVISED DEEP LEARNING METHODS ON THE SAME DATASET. THE BEST RESULTS ARE HIGHLIGHTED IN BOLD.
<table><tr><td>Methods</td><td>mIoU</td><td>Kappa</td><td>OA(%)</td><td>$P\left( \% \right)$</td><td>$R\left( \% \right)$</td><td>F1</td></tr><tr><td>IIC [14]</td><td>0.4613</td><td>0.2375</td><td>70.95</td><td>72.76</td><td>89.60</td><td>0.8063</td></tr><tr><td>PiCIE [15]</td><td>0.4905</td><td>0.3504</td><td>68.73</td><td>80.98</td><td>70.60</td><td>0.7198</td></tr><tr><td>IDUDL [5]</td><td>0.6102</td><td>0.5364</td><td>78.46</td><td>83.07</td><td>91.34</td><td>0.8130</td></tr><tr><td>UFFM</td><td>0.6371</td><td>0.5890</td><td>79.44</td><td>91.74</td><td>75.30</td><td>0.8371</td></tr></table>
## C. Comparison Results for Semantic Segmentation
Two classical unsupervised deep learning methods, IIC [14] and PiCIE [15], together with IDUDL [5], an unsupervised deep learning model specifically designed for marine aquaculture, are selected for comparison. The semantic segmentation results of the different methods are shown in Table I. The results show that the proposed method improves the ${mIoU}$ by 0.0269 compared to IDUDL, while $P$ increases by ${8.67}\%$ .
The visualisation results are shown in Fig. 2, where the proposed method performs better in continuity and reduces the interference of coherent speckle noise. Speckle noise in SAR images produces many bright noise points that affect the segmentation results. The mutual information approach of IIC can enhance the degree of correlation between similar samples; however, the noisy pixels remain strongly correlated with the target pixels, so a large number of noisy pixels cannot be removed from the segmentation results. PiCIE uses geometric and photometric invariance to maintain semantic consistency, but a large number of misclassifications occur. IDUDL can extract semantic features, overcome many noisy pixels, and delineate the raft boundaries better; however, the lack of global information leads to many missed detections. Sample (2) shows that the proposed method reduces the missed detections within rafts compared to IDUDL.
## V. CONCLUSION
This paper proposes a new unsupervised feature fusion model, UFFM, for marine raft aquaculture semantic segmentation based on SAR images. The saliency features obtained from representation learning are used to generate saliency pseudo-labels in the pseudo-label generator. During network training, multi-stage feature fusion is designed to enhance the semantic information, the extraction of raft aquaculture target boundaries, and semantic continuity. The experimental results show that UFFM can effectively reduce the omission and misjudgment of raft aquaculture targets.
## REFERENCES
[1] Junjie Wang, Arthur HW Beusen, Xiaochen Liu, and Alexander F Bouwman. Aquaculture production is a large, spatially concentrated source of nutrients in Chinese freshwater and coastal seas. Environmental Science & Technology, 54(3):1464-1474, 2019.
[2] Marco Ottinger, Kersten Clauss, and Claudia Kuenzer. Aquaculture: Relevance, distribution, impacts and spatial assessments-a review. Ocean & Coastal Management, 119:244-266, 2016.
[3] Jianchao Fan, Jianhua Zhao, Wentao An, and Yuanyuan Hu. Marine floating raft aquaculture detection of GF-3 PolSAR images based on collective multikernel fuzzy clustering. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(8):2741-2754, 2019.
[4] Wantai Chen and Xiaofeng Li. Deep-learning-based marine aquaculture zone extractions from Dual-Polarimetric SAR imagery. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 17:8043-8057, 2024.
[5] Xinzhe Wang, Jianlin Zhou, and Jianchao Fan. IDUDL: Incremental double unsupervised deep learning model for marine aquaculture SAR images segmentation. IEEE Transactions on Geoscience and Remote Sensing, 60:1-12, 2022.
[6] Jianlin Zhou, Mengmeng Li, Xinzhe Wang, and Jianchao Fan. Unsupervised mutual information and superpixel constraints in SAR marine aquaculture extraction. pages 1-5, 2023.
[7] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
[8] Yuxiang Zhang, Yang Zhao, Yanni Dong, and Bo Du. Self-supervised pretraining via multimodality images with transformer for change detection. IEEE Transactions on Geoscience and Remote Sensing, 61:1-11, 2023.
[9] Lilin Tu, Jiayi Li, Xin Huang, Jianya Gong, Xing Xie, and Leiguang Wang. S2hm2: A spectral-spatial hierarchical masked modeling framework for self-supervised feature learning and classification of large-scale hyperspectral images. IEEE Transactions on Geoscience and Remote Sensing, 62:1-19, 2024.
[10] Xi Chen, Yuxiang Zhang, Yanni Dong, and Bo Du. Generative self-supervised learning with spectral-spatial masking for hyperspectral target detection. IEEE Transactions on Geoscience and Remote Sensing, 62:1- 13, 2024.
[11] Zaiyi Hu, Junyu Gao, Yuan Yuan, and Xuelong Li. Contrastive tokens and label activation for remote sensing weakly supervised semantic segmentation. IEEE Transactions on Geoscience and Remote Sensing, 62:1-11, 2024.
[12] Jianchao Fan, Jianlin Zhou, Xinzhe Wang, and Jun Wang. A self-supervised transformer with feature fusion for SAR image semantic segmentation in marine aquaculture monitoring. IEEE Transactions on Geoscience and Remote Sensing, 61:1-15, 2023.
[13] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jegou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. 2021 IEEE/CVF International Conference on Computer Vision, pages 9630-9640, 2021.
[14] Xu Ji, Andrea Vedaldi, and Joao Henriques. Invariant information clustering for unsupervised image classification and segmentation. 2019 IEEE/CVF International Conference on Computer Vision, pages 9864- 9873, 2019.
[15] Jang Hyun Cho, Utkarsh Mall, Kavita Bala, and Bharath Hariharan. PiCIE: Unsupervised semantic segmentation using invariance and equiv-ariance in clustering. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16789-16799, 2021.
|
| 176 |
+
|
| 177 |
+
[16] Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, and Luc Van Gool. Unsupervised semantic segmentation by contrasting object mask proposals. 2021 IEEE/CVF International Conference on Computer Vision, pages 10032-10042, 2021.
|
| 178 |
+
|
| 179 |
+
[17] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2921-2929, 2016.
|
| 180 |
+
|
| 181 |
+
[18] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. Proceedings of the IEEE international conference on computer vision, pages 618- 626, 2017.
|
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/4T963GENPI/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,155 @@
§ UNSUPERVISED FEATURE FUSION MODEL FOR MARINE RAFT AQUACULTURE SEMANTIC SEGMENTATION BASED ON SAR IMAGES

$1^{\text{st}}$ Mengmeng Li
School of Information Science and Engineering
Dalian Polytechnic University
Dalian, China
220520854000601@xy.dlpu.edu.cn

$2^{\text{nd}}$ Xinzhe Wang
School of Information Science and Engineering
Dalian Polytechnic University
Dalian, China
wxzagm@dlpu.edu.cn

$3^{\text{rd}}$ Jianchao Fan*
School of Control Science and Engineering
Dalian University of Technology
Dalian, China
fjchao@dlut.edu.cn
Abstract-Marine aquaculture semantic segmentation provides a scientific basis for marine regulation and plays an important role in marine ecological protection and management. Currently, most high-performance marine aquaculture segmentation networks are trained with supervised learning, which requires a large number of accurate, manually labelled samples that are difficult to obtain. To solve this problem, this paper proposes an unsupervised feature fusion model (UFFM) for marine raft aquaculture semantic segmentation. First, a pseudo-label generator is designed to label the training samples, producing a coarse mask by saliency feature clustering. The pseudo-labelled training samples are then fed into a multilevel feature fusion module that progressively refines the shapes and categories of the objects under the guidance of a cross-entropy loss. The pseudo-labels are optimised over successive iterations to improve segmentation performance. Comparison experiments on the GF-3 dataset demonstrate the effectiveness of UFFM.
Index Terms-unsupervised learning, pseudo-label, SAR images, semantic segmentation
§ I. INTRODUCTION
China has witnessed rapid growth in the scale and benefits of marine aquaculture in recent years [1]. However, while the marine aquaculture industry has made significant progress, it also faces problems such as pollution around aquaculture waters, irrational aquaculture layout, and excessive density of offshore aquaculture [2]. Synthetic aperture radar (SAR) works in all weather conditions, unaffected by factors such as cloud cover, and has become an essential tool for monitoring marine aquaculture. Backscattering from mariculture raft targets in SAR images is much stronger than that from the sea surface, so the aquaculture rafts stand in high contrast to the seawater background [3]. Researchers have adopted deep learning techniques to design various mariculture semantic segmentation methods that extract mariculture information efficiently and accurately [4].
However, existing neural network models usually rely on a large amount of manually labeled data for training to obtain high-accuracy results. This approach faces two main problems: 1) the cost of obtaining high-quality manual labels is extremely high in complex scenarios and for massive remote sensing data, so a large amount of remote sensing data cannot be fully utilized; 2) relying on manual labels as the only learning signal limits feature learning. Several studies have proposed unsupervised methods for extracting marine aquaculture information to address these challenges. Fan et al. [3] exploited the multi-source characteristics of floating rafts and combined neurodynamic optimization with a collective multi-kernel fuzzy C-means algorithm for unsupervised aquaculture classification. Wang et al. [5] designed an incremental dual unsupervised deep learning model that alternately optimizes pseudo-labels and segmentation results, maintaining and strengthening the edge semantic information of the pseudo-labels and effectively reducing the influence of speckle noise in SAR images. Subsequently, Zhou et al. [6] constructed an unsupervised semantic segmentation network for mariculture based on mutual information theory and a superpixel algorithm, which improves the continuity and spatial consistency of mariculture target extraction through global feature learning, pseudo-label generation, and optimization with a mutual information loss. However, these unsupervised deep learning models mainly rely on training data from a single area and are difficult to generalize to intelligent image interpretation in wide-area and complex scenes.
This work was supported in part by the National Natural Science Foundation of China under Grant 42076184, Grant 41876109, and Grant 41706195; in part by the National Key Research and Development Program of China under Grant 2021YFC2801000; in part by the National High Resolution Special Research under Grant 41-Y30F07-9001-20/22; in part by the Fundamental Research Funds for the Central Universities under Grant DUT23RC(3)050; and in part by the Dalian High Level Talent Innovation Support Plan under Grant 2021RD04. (Corresponding author: Jianchao Fan.)
With the emergence of the transformer [7], self-supervised representation learning models can exploit unlabeled remote sensing big data to address regional feature differences. A self-supervised transformer network learns spatial features from large amounts of remote sensing data by constructing a pretext task and pre-training a vision transformer model, which can then be fine-tuned for a variety of downstream tasks, e.g., change detection [8], classification [9], target detection [10], and semantic segmentation [11]. Fan et al. [12] established a self-supervised feature fusion transformer model that learns the essential features of mariculture from a large number of unlabeled samples; by introducing a contrastive loss and a mask loss, it attends to global and local features of aquaculture simultaneously, mitigating mutual interference among multiple targets and class imbalance, and achieving accurate segmentation of mariculture. However, although the self-supervised transformer model can extract information for a single sea area from a large amount of unlabeled floating raft aquaculture data, it still needs high-quality labeled data to fine-tune the downstream segmentation network.
To solve the above problems, this paper applies the saliency information obtained from self-supervised representation learning to the downstream segmentation network and combines it with a multi-stage feature fusion module to further enhance the network's semantic segmentation performance. Specifically, a pseudo-label generator is first designed to generate saliency pseudo-labels. Then, a cross-entropy loss is computed between the semantic segmentation results output by the multilevel feature fusion module and the pseudo-labels, which constrains the network and drives the parameter updates. The pseudo-labels are optimised through continuous iteration to further improve network segmentation performance.
§ II. RELATED WORK

§ A. SELF-SUPERVISED FEATURE LEARNING
Self-supervised learning mainly uses auxiliary tasks to mine supervisory signals from large-scale unlabeled data, and trains the network with this constructed supervision to learn representations that are valuable for downstream tasks. Common auxiliary tasks include contrastive learning, generative learning, and contrastive-generative methods, which design learning paradigms based on the characteristics of the data distribution to obtain better feature representations. However, these methods mainly target image classification and are therefore typically designed to produce a single global vector per image. As a result, they perform poorly on downstream densely predicted segmentation tasks and require fine-tuning with high-quality ground-truth labels. The emergence of the self-supervised transformer has made it possible to extract dense feature vectors, which can reveal hidden semantic relationships in images, without specialized dense contrastive learning methods. In this paper, inspired by DINO [13], the image saliency features learned upstream generate pseudo-labels for the training data to fine-tune the downstream segmentation network, yielding a fully unsupervised semantic segmentation model.
§ B. UNSUPERVISED SEMANTIC SEGMENTATION
Unsupervised semantic segmentation aims to predict a class for each pixel in an image without human labels. Ji et al. [14] proposed invariant information clustering (IIC), which ensures cross-view consistency by maximising the mutual information between neighbouring pixels of different views. Cho et al. [15] constructed PiCIE, which learns invariance to photometric variations and equivariance to geometric variations by using them as an inductive bias; a limitation is that it is only evaluated on the MS COCO dataset and does not distinguish between foreground and background classes. MaskContrast [16] first generates object masks using a DINO pre-trained ViT and then learns pixel-level embeddings with a contrastive loss; however, the method only applies to saliency datasets. For the multi-stage paradigm, researchers have tried to use class activation maps (CAM) [17] to obtain initial pixel-level pseudo-labels, which are then refined with a teacher-student network, but this loses features during training and decreases segmentation accuracy. In this paper, to solve the above problems, Grad-CAM [18] is introduced in a multi-stage manner to generate pseudo-labels, and segmentation performance is improved by multi-scale feature fusion.
§ III. METHOD

§ A. OVERALL FRAMEWORK
In the upstream task, the ViT [13] is pre-trained from scratch on a large amount of unlabeled marine aquaculture data to obtain the weights ${\theta }_{t}$, which initialize the downstream feature extraction network; using these pre-trained weights accelerates the convergence of the downstream segmentation network and is crucial for the model's extraction of salient features. The overall architecture designed for the downstream segmentation task is shown in Fig. 1. An input unlabeled marine aquaculture image is first augmented by linear stretching and random rotation. The network has two branches: the saliency pseudo-label generation branch, presented in III-B, and the multi-layer transformer feature fusion segmentation branch, presented in III-C. In network training, the supervisory loss ${\mathcal{L}}_{s}$ is the pixel-wise cross-entropy between the pseudo-label and the prediction:
$$
{\mathcal{L}}_{s} = \frac{1}{N}\mathop{\sum }\limits_{{i = 0}}^{{N - 1}}\text{CrossEntropy}\left( {{\widetilde{y}}_{i},{y}_{i}}\right) \tag{1}
$$
Fig. 1. Overview model of UFFM. (a) Obtaining saliency pseudo-label: Input the multi-head self-attention mechanism of the last layer feature map in the transformer block into Grad-CAM to obtain saliency patch features and generate saliency pseudo-label. (b) Obtaining segmentation results: The semantic information is enhanced using a multilayer transformer with PPM, and the semantic segmentation results with pseudo-labels are output by backpropagation after the loss computation. After continuous iterative updates, the network segmentation performance is improved.
where $N$ denotes the number of pixels in the image $x \in {\mathbb{R}}^{H \times W \times 3}$, ${y}_{i} \in {\mathbb{R}}^{C}$ is the network's predicted probability for pixel $i$, $C$ is the number of predicted classes, and ${\widetilde{y}}_{i} \in {\mathbb{R}}^{C}$ is the label class of pixel $i$ in the pseudo-label.
During network training, the loss gradients are backpropagated to the feature extraction network; in particular, the weights of the two branches are shared and updated simultaneously. Through continuous iteration, the pseudo-labels are updated, improving the segmentation performance of the network.
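As a concrete illustration of the supervisory loss in Eq. (1), the sketch below computes pixel-wise cross-entropy between class logits and a pseudo-label map. It assumes (this is a common segmentation convention, not stated explicitly here) that pixels marked 255 in the pseudo-label are ignored; `supervisory_loss` is a hypothetical helper name.

```python
import numpy as np

def supervisory_loss(logits, pseudo, ignore=255):
    """Pixel-wise cross-entropy between predictions and pseudo-labels.
    logits: (C, H, W) class scores; pseudo: (H, W) integer labels."""
    e = np.exp(logits - logits.max(axis=0, keepdims=True))  # numerically stable softmax
    prob = e / e.sum(axis=0, keepdims=True)
    mask = pseudo != ignore                                  # drop ignored pixels
    cls = pseudo[mask]                                       # target class per kept pixel
    picked = prob[:, mask][cls, np.arange(cls.size)]         # probability of correct class
    return float(-np.log(picked + 1e-12).mean())

# Uniform logits over 2 classes give -log(1/2) per pixel:
loss = supervisory_loss(np.zeros((2, 4, 4)), np.zeros((4, 4), dtype=int))  # ≈ ln(2) ≈ 0.6931
```

In the full model the same loss would be backpropagated through the shared ViT encoder, so both branches benefit from each pseudo-label update.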
§ B. SALIENCY PSEUDO-LABEL GENERATION
In unsupervised tasks, the design of pseudo-labels is crucial. A simple approach is to apply a confidence threshold and output the results directly as pseudo-labels, but this is unsatisfactory on complex data and produces poor results. To solve this problem, this paper uses Grad-CAM, a variant of class activation mapping, to generate saliency-discriminative pseudo-labels by stepwise refinement of target localisation. Given an image $x$, a sequence of patch embeddings ${x}_{\text{patch}} \in {\mathbb{R}}^{P \times D}$ is generated, where $P$ is the number of patches and $D$ is the output dimension. A class token ${x}_{CLS} \in {\mathbb{R}}^{1 \times D}$ and a position embedding $\mathrm{P}$ are then added to the concatenated input. The input sequence ${z}_{0}$ of the ViT is therefore:
$$
{z}_{0} = \left\lbrack {{x}_{\text{patch}},{x}_{CLS}}\right\rbrack + \mathrm{P} \tag{2}
$$
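Eq. (2) can be sketched in a few lines; the sizes below are illustrative, and the concatenation order follows the equation as written (many ViT implementations instead prepend the class token).

```python
import numpy as np

P_patches, D = 16, 8                          # illustrative sizes: 16 patches, dim 8
x_patch = np.random.randn(P_patches, D)       # patch embeddings x_patch
x_cls = np.zeros((1, D))                      # class token x_CLS
pos = np.random.randn(P_patches + 1, D)       # position embedding P
z0 = np.concatenate([x_patch, x_cls], axis=0) + pos  # Eq. (2): z0 has P+1 tokens
```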
The last-layer features are then obtained through multiple transformer encoder layers, and the saliency feature map is computed using Grad-CAM. The $k$ patches whose embedded features have the largest absolute gradient are selected as the salient patches; finally, a binarisation step marks these $k$ salient patches as 0 and the rest as 255. The generated saliency pseudo-label $\widetilde{y}$ is written as:
$$
{g}_{k} = \operatorname{Sum}\left| \frac{\partial L\left( {f\left( x\right) ,y}\right) }{\partial {x}_{\text{patch}}^{k}}\right| \tag{3}
$$
$$
\widetilde{y} = \left\{ \begin{array}{l} 0,\ \text{if }{g}_{k}\text{ in topk }\mathrm{G} \\ {255},\ \text{otherwise} \end{array}\right. \tag{4}
$$
where $\mathrm{G} = \left\{ {{g}_{1},{g}_{2},\ldots ,{g}_{K}}\right\}$ is the saliency map of the patches ${x}_{\text{patch}} = \left\{ {{x}_{\text{patch}}^{1},\ldots ,{x}_{\text{patch}}^{K}}\right\}$, and topk is the set of selected salient patches.
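Eqs. (3)-(4) amount to a top-k selection over per-patch gradient magnitudes. A minimal sketch under that reading (the function name and shapes are illustrative, not from the paper):

```python
import numpy as np

def saliency_pseudo_label(patch_grads, k):
    """patch_grads: (P, D) gradient of the loss w.r.t. each patch embedding.
    Returns a (P,) pseudo-label: 0 for the k most salient patches, 255 otherwise."""
    g = np.abs(patch_grads).sum(axis=1)        # Eq. (3): g_k = Sum |dL/dx_patch^k|
    topk = np.argsort(g)[-k:]                  # indices of the k largest saliencies
    label = np.full(g.shape[0], 255, dtype=np.uint8)
    label[topk] = 0                            # Eq. (4): salient -> 0, rest -> 255
    return label
```

The patch-level labels can then be reshaped to the patch grid and upsampled to pixel resolution for use as a segmentation pseudo-label.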
§ C. MULTI-STAGE FEATURE FUSION
The segmentation decoder consists of a pyramid pooling module (PPM) and a multi-scale feature pyramid, enabling the network to better capture contextual semantic information. First, three feature maps $\left\{ {{V}_{2},{V}_{3},{V}_{4}}\right\}$ are taken from the transformer encoder; since the base ViT model is used, these output feature vectors all have the same size. The last lateral feature ${L}_{5}$ is generated from the final feature map ${V}_{5}$ through the PPM module. The FPN sub-network then follows a top-down path to obtain ${\mathrm{F}}_{i} = {\mathrm{L}}_{i} + {\mathrm{{UP}}}_{2}\left( {\mathrm{F}}_{i + 1}\right), i = \{ 2,3,4\}$, where UP denotes bilinear upsampling. The FPN then applies convolutional blocks ${h}_{i}$ to obtain the outputs ${\mathrm{P}}_{i}$. For the final feature fusion, each ${P}_{i}$ is bilinearly upsampled to the same spatial size, concatenated along the channel dimension, and fused by a convolutional block $h$:
$$
\mathrm{Z} = h\left( \left\lbrack {{P}_{2};{\mathrm{{UP}}}_{2}\left( {P}_{3}\right) ;{\mathrm{{UP}}}_{4}\left( {P}_{4}\right) ;{\mathrm{{UP}}}_{8}\left( {P}_{5}\right) }\right\rbrack \right) \tag{5}
$$
Fig. 2. Visual comparison of raft marine aquaculture segmentation on the GF-3 dataset. (a) original images. (b) ground-truth labels. (c) IIC. (d) PiCIE. (e) IDUDL. (f) UFFM.
The fused feature $\mathrm{Z}$ is then passed through a $1 \times 1$ convolution and $4\times$ bilinear upsampling to obtain the final prediction $y$.
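The fusion of Eq. (5) can be sketched as follows. Nearest-neighbour repetition stands in for bilinear upsampling, and the conv block $h$ is an arbitrary callable; both are simplifications for illustration only.

```python
import numpy as np

def up(x, s):
    """Stand-in for UP_s: nearest-neighbour upsampling of a (C, H, W) map by factor s."""
    return x.repeat(s, axis=1).repeat(s, axis=2)

def fuse(p2, p3, p4, p5, h):
    """Eq. (5): Z = h([P2; UP_2(P3); UP_4(P4); UP_8(P5)]), concatenated on channels."""
    z = np.concatenate([p2, up(p3, 2), up(p4, 4), up(p5, 8)], axis=0)
    return h(z)

# With an identity fusion block, four 1-channel maps give one (4, 8, 8) tensor:
z = fuse(np.ones((1, 8, 8)), np.ones((1, 4, 4)),
         np.ones((1, 2, 2)), np.ones((1, 1, 1)), h=lambda t: t)  # z.shape == (4, 8, 8)
```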
§ IV. EXPERIMENTAL RESULTS

§ A. EXPERIMENT SETUP AND DATASETS
All experiments are conducted in PyTorch 1.8.1 on an Intel Xeon Platinum 8255C (2.5 GHz) with an Nvidia GeForce RTX 3090. The data augmentation strategy is consistent with DINO [13]. A ViT-S/16 model [7] trained with the DINO self-distillation loss is used to extract patch features. The learning rate is set to 0.05, and a stochastic gradient descent (SGD) optimiser with a momentum of 0.9 is used. The encoder uses ViT as the backbone. The decoder uses the UPerHead architecture, which receives features from all levels of the encoder and generates the final prediction through pooling and upsampling; an auxiliary head with the FCNHead architecture receives features from specific encoder layers.
The study area is the seawater aquaculture zone of Changhai County, China. The remote sensing images were preprocessed with radiometric calibration and geographic correction, and images in horizontal-horizontal (HH) polarisation mode were selected as the experimental data. The images were then cropped to ${512} \times {512}$ pixels. The GF-3 self-supervised pre-training set contains more than 13,000 images, the downstream training set 369 images, and the test set 160 images.
§ B. EVALUATION METRICS
SAR images of raft aquaculture targets contain strong speckle noise, which produces many isolated noise points in the image and hinders accurate extraction of raft aquaculture targets. Therefore, multiple evaluation metrics are used to assess the segmentation results. Following IDUDL, the metrics are mean intersection over union (mIoU), Kappa coefficient ($K$), overall accuracy (OA), precision ($P$), recall ($R$), and F1 score (${F}_{1}$).
Here mIoU evaluates the average overlap between the predicted and ground-truth pixel categories, giving a good measure of the semantic continuity and consistency of the model predictions. $K$ accounts for chance agreement when evaluating consistency. OA is the proportion of all pixels whose class is predicted correctly, reflecting global accuracy. $P$ is the proportion of predicted raft pixels that are truly rafts, $R$ measures the model's ability to find all positive samples, and ${F}_{1}$ balances $P$ and $R$.
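For the binary raft/sea case, all six metrics follow directly from the confusion-matrix counts. A minimal sketch, assuming the standard definitions (the exact IDUDL formulas are not reproduced in this paper):

```python
import numpy as np

def evaluate(pred, gt):
    """pred, gt: binary arrays (1 = raft, 0 = sea). Returns the six reported metrics."""
    tp = int(np.sum((pred == 1) & (gt == 1)))
    tn = int(np.sum((pred == 0) & (gt == 0)))
    fp = int(np.sum((pred == 1) & (gt == 0)))
    fn = int(np.sum((pred == 0) & (gt == 1)))
    n = tp + tn + fp + fn
    oa = (tp + tn) / n                                   # overall accuracy
    p = tp / (tp + fp)                                   # precision
    r = tp / (tp + fn)                                   # recall
    f1 = 2 * p * r / (p + r)
    miou = (tp / (tp + fp + fn) + tn / (tn + fp + fn)) / 2
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return {"mIoU": miou, "K": kappa, "OA": oa, "P": p, "R": r, "F1": f1}
```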
TABLE I

QUANTITATIVE COMPARISON OF THE PROPOSED METHOD WITH OTHER UNSUPERVISED DEEP LEARNING METHODS ON THE SAME DATASET. THE BEST RESULTS ARE HIGHLIGHTED IN BOLD.

| Methods | mIoU | Kappa | OA(%) | P(%) | R(%) | F1 |
| --- | --- | --- | --- | --- | --- | --- |
| IIC [14] | 0.4613 | 0.2375 | 70.95 | 72.76 | 89.60 | 0.8063 |
| PiCIE [15] | 0.4905 | 0.3504 | 68.73 | 80.98 | 70.60 | 0.7198 |
| IDUDL [5] | 0.6102 | 0.5364 | 78.46 | 83.07 | **91.34** | 0.8130 |
| UFFM | **0.6371** | **0.5890** | **79.44** | **91.74** | 75.30 | **0.8371** |
§ C. COMPARISON RESULTS FOR SEMANTIC SEGMENTATION
Two classical unsupervised deep learning methods, IIC [14] and PiCIE [15], and IDUDL [5], an unsupervised deep learning model designed specifically for marine aquaculture, are selected for comparison. The semantic segmentation results of the different methods are shown in Table I. The proposed method improves mIoU by 0.0269 over IDUDL, while $P$ increases by 8.67%.
The visualisation results are shown in Fig. 2. The proposed method performs better in continuity and reduces the interference of speckle noise. Speckle in SAR images produces many bright noise points that degrade the segmentation results. IIC's use of mutual information can strengthen the correlation between similar samples, but the noisy pixels remain strongly correlated with the target pixels, so many noisy pixels cannot be removed from the segmentation results. PiCIE maintains semantic consistency through geometric and photometric invariance, but produces many misclassifications. IDUDL can extract semantic features, overcome many noisy pixels, and delineate the floating raft boundaries better, but its lack of global information leads to many missed detections. Sample (2) shows that, compared with IDUDL, the proposed method reduces missed detections within the rafts.
§ V. CONCLUSION
This paper proposes a new unsupervised feature fusion model, UFFM, for marine raft aquaculture semantic segmentation in SAR images. Saliency obtained from representation learning is used by the pseudo-label generator to produce saliency pseudo-labels. During network training, multi-stage feature fusion enhances the semantic information, the extraction of raft aquaculture target boundaries, and semantic continuity. The experimental results show that UFFM effectively reduces omissions and misjudgments of raft aquaculture targets.
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/7LL9KbT9ro/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,387 @@
# Dynamic Threshold Global Performance-Guaranteed Formation Control for Wheeled Mobile Robots with Smooth Extended State Observer

$1^{\text{st}}$ Minjing Wang
School of Information and Communication Engineering
Hainan University
Haikou, China
mjwang@hainanu.edu.cn

$2^{\text{nd}}$ Di Wu
School of Information and Communication Engineering
Hainan University
Haikou, China
hainuwudi@hainanu.edu.cn

$3^{\text{rd}}$ Yibo Zhang
Department of Automation
Shanghai Jiao Tong University
Shanghai, China
zhang297@sjtu.edu.cn

$4^{\text{th}}$ Wenlong Feng
School of Information and Communication Engineering
Hainan University
Haikou, China
fwlfwl@163.com
Abstract-In this paper, a dynamic threshold global performance-guaranteed formation control method is proposed for wheeled mobile robots (WMRs). Unlike existing prescribed performance formation control methods, which are constrained by initial values, we design a dynamic threshold global performance-guaranteed (DTGPG) function that removes the initial value constraints and allows the steady-state performance boundaries to be readjusted. Moreover, we design a smooth extended state observer (SESO) based on a sigmoid-like function to mitigate the chattering problem of existing event-triggered ESOs. A DTGPG-based guidance law and a SESO-based control law are then designed to implement the formation control. The total closed-loop system is proved to be input-to-state stable (ISS). Simulations confirm the benefits and validity of the proposed control method.
Index Terms-WMRs, dynamic threshold global performance-guaranteed function, formation control, SESO
## I. INTRODUCTION
Formation control of multiple wheeled mobile robots (WMRs) places extremely high demands on transient and steady-state performance. In the transient phase, small overshoot and fast convergence help avoid collisions between WMRs; in the steady-state phase, high-accuracy tracking significantly improves overall coordination and task execution efficiency. It is therefore crucial to prescribe the performance of the multi-WMR system. In [1], a collision-avoidance prescribed performance control (PPC) method is proposed for WMR formations, which guarantees the performance of the multi-WMR system by adding communication limits and collision limits to the prescribed performance function. In [2], a fixed-time performance-guaranteed formation control problem for multi-WMRs is investigated, achieving fixed-time convergence by introducing a piecewise time-varying function into the performance function. In [3], a field-of-view-constrained performance-guaranteed formation control method is proposed for multi-WMRs, with a guaranteed performance function that maintains the leader-follower distance to avoid collisions. Although the above work [1]-[3] effectively improves the performance of multi-WMRs, two points still need improvement: 1) they are all subject to initial conditions, which increases human intervention in practical applications, i.e., the starting positions of the WMRs must be calculated in advance; 2) the standard PPC cannot readjust the performance boundaries after reaching the steady state.
On the other hand, when performing tasks in complex environments, frozen and uneven road surfaces are often encountered, and these disturbances may affect the stability of WMR formations. It is therefore also crucial to estimate external disturbances quickly and accurately. In [4], a nonlinear extended state observer (ESO) is proposed that recovers the velocity and estimates the external disturbance from position and heading errors; a finite-time ESO is then designed to improve the estimation rate. In [5], an event-triggered ESO is designed to adjust the allocation of resources. Note that the event-triggered ESO [5] saves resources when estimating disturbances, but inevitably suffers from chattering.
Inspired by the aforementioned observations, we propose a dynamic threshold global performance-guaranteed (DTGPG) formation control method for WMRs with a smooth extended state observer (SESO). The key contributions of this work are: Unlike the standard PPC methods described in [6] and the TPP methods in [7]-[9], this paper proposes DTGPG capable of solving the initial value constraints problem and secondary adjustment of the steady state performance bounds. In contrast to event-triggered ESO [5], we design the SESO to mitigate chattering by introducing a sigmoid-like function to smooth the estimation error. The total closed-loop system is proved to be input-to-state stable (ISS). Some of the symbols in this paper are defined in Table I.
---

This work is supported in part by the "South China Sea Rising Star" Education Platform Foundation of Hainan Province (JYNHXX2023-17G) and the Natural Science Foundation of Hainan Province (624MS036). (Corresponding author: Di Wu.)

---

TABLE I

SYMBOL DEFINITION

<table><tr><td>Symbol</td><td>Definition</td></tr><tr><td>${\mathbb{R}}^{n}$</td><td>$n$ -dimensional Euclidean Space</td></tr><tr><td>${\mathbb{R}}^{ + }$</td><td>Positive real space</td></tr><tr><td>$\parallel \cdot \parallel$</td><td>Euclidean norm</td></tr><tr><td>diag $\{ \cdots \}$</td><td>Block-diagonal matrix</td></tr><tr><td>${\lambda }_{\max }\left( \cdot \right)$</td><td>Maximum eigenvalue of a matrix</td></tr><tr><td>${\lambda }_{\min }\left( \cdot \right)$</td><td>Minimum eigenvalue of a matrix</td></tr><tr><td>$\operatorname{sgn}\left( \cdot \right)$</td><td>Sign function</td></tr><tr><td>$\exp \left( \cdot \right)$</td><td>Exponential function</td></tr><tr><td>$\operatorname{col}\left( \cdot \right)$</td><td>Column vector</td></tr></table>
## II. PRELIMINARIES AND PROBLEM STATEMENT
## A. Graph Theory

To describe the communication among the virtual leaders and WMRs, a directed graph is defined as $\mathcal{G} = \{ \mathcal{V},\mathcal{M}\}$ . $\mathcal{V} = \left\{ {{n}_{1},\ldots ,{n}_{M}}\right\}$ and $\mathcal{M} = \left\{ {\left( {{n}_{i},{n}_{j}}\right) \in \mathcal{V} \times \mathcal{V}}\right\}$ represent a vertex set and an edge set, respectively. An adjacency matrix associated with $\mathcal{G}$ is defined as $\mathcal{A} = \left\lbrack {a}_{ij}\right\rbrack \in {\mathbb{R}}^{M \times M}$ . Correspondingly, a degree matrix connected with $\mathcal{G}$ is characterized as $\mathcal{D} = \operatorname{diag}\left\{ {d}_{i}\right\} \in {\mathbb{R}}^{M \times M}$ with ${d}_{i} = \mathop{\sum }\limits_{{j = 1}}^{M}{a}_{ij}$ . Additionally, a Laplacian matrix associated with $\mathcal{G}$ is defined as $\mathcal{L} = \mathcal{D} - \mathcal{A}$ . Note that here $i = 1,\ldots , M, j = 1,\ldots , M$ .
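As a concrete numerical illustration of these definitions (the 3-agent directed adjacency matrix below is hypothetical, not the paper's topology), the degree matrix and Laplacian follow directly:

```python
# Build the degree matrix D and Laplacian L = D - A from a directed
# adjacency matrix A. The 3-agent example is illustrative only.
def laplacian(A):
    M = len(A)
    d = [sum(A[i][j] for j in range(M)) for i in range(M)]  # d_i = sum_j a_ij
    D = [[d[i] if i == j else 0 for j in range(M)] for i in range(M)]
    return [[D[i][j] - A[i][j] for j in range(M)] for i in range(M)]

A = [[0, 1, 0],
     [1, 0, 1],
     [0, 0, 0]]
L = laplacian(A)
# By construction, every row of L sums to zero.
```

Each row of $\mathcal{L}$ summing to zero is the property exploited later when the distributed error (2) is formed from neighbor differences.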
## B. Problem Statement
Suppose that there exist $N$ followers, labeled as agents ${n}_{1}$ to ${n}_{N}$ , and $M - N$ leaders, labeled as agents ${n}_{N + 1}$ to ${n}_{M}$ , under a communication topology graph. A group of followers consisting of $N$ wheeled mobile robots is modelled as follows
$$
\begin{cases} {\dot{\mathbf{\eta }}}_{i} & = {\mathbf{R}}_{i}{\mathbf{\nu }}_{i} \\ {\dot{\mathbf{\nu }}}_{i} & = {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} + {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\mathcal{T}}}_{i} \\ & - {D}_{i\theta }{r}_{i}^{2}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{J}}_{i}{\mathbf{R}}_{i}^{-1}{\dot{\mathbf{\eta }}}_{i} - {\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\mathcal{F}}}_{i}{r}_{i}^{2} \end{cases} \tag{1}
$$
where $i = 1,\ldots , N$ . ${\mathbf{\eta }}_{i} = {\left\lbrack {x}_{i},{y}_{i},{\psi }_{i}\right\rbrack }^{T} \in {\mathbb{R}}^{3}$ denotes the position and yaw angle. ${\mathbf{\nu }}_{i} = {\left\lbrack {u}_{i},{v}_{i},{w}_{i}\right\rbrack }^{T} \in {\mathbb{R}}^{3}$ denotes the velocity vector. ${\mathbf{\tau }}_{i} = {\left\lbrack {\tau }_{i1},{\tau }_{i2},{\tau }_{i3},{\tau }_{i4}\right\rbrack }^{T} \in {\mathbb{R}}^{4}$ denotes the control input. ${\mathcal{T}}_{i} = {\left\lbrack {\mathcal{T}}_{i1},{\mathcal{T}}_{i2},{\mathcal{T}}_{i3},{\mathcal{T}}_{i4}\right\rbrack }^{T} \in {\mathbb{R}}^{4}$ denotes the external disturbance. The kinetic parameters and matrices of this WMR can be found in [10]. ${\mathbf{J}}_{i} \in {\mathbb{R}}^{4 \times 3}$ and ${\mathbf{J}}_{i}^{ + } \in {\mathbb{R}}^{3 \times 4}$ satisfy the relationship ${\mathbf{J}}_{i}^{ + }{\mathbf{J}}_{i} = {\mathbf{I}}_{3}$ .
Assumption 1: The graph $\mathcal{G}$ contains a spanning tree with the virtual leader as the root node.
## C. Dynamic Threshold Global Performance-Guaranteed and Barrier Function
We define the distributed error as follows
$$
{\mathbf{E}}_{i} = \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}\left( {{\mathbf{\eta }}_{i} - {\mathbf{\eta }}_{j}}\right) + \mathop{\sum }\limits_{{j = N + 1}}^{M}{a}_{ij}\left( {{\mathbf{\eta }}_{i} - {\mathbf{\eta }}_{jr}}\right) \tag{2}
$$
where ${\mathbf{\eta }}_{jr} = {\left\lbrack {\eta }_{jx},{\eta }_{jy},{\eta }_{j\psi }\right\rbrack }^{T} \in {\mathbb{R}}^{3}$ represents the trajectory of the virtual leader. The coefficient ${a}_{ij}$ is defined in [11]. To ensure that the developed control is free from the influence of initial conditions and can dynamically adjust prescribed thresholds, the error is constrained within the following prescribed regions
$$
{\mathcal{I}}_{ik}\left( {-{\mathcal{W}}_{ik}}\right) \leq {E}_{ik} \leq {\mathcal{I}}_{ik}\left( {\mathcal{W}}_{ik}\right) ,\;k = x, y,\psi \tag{3}
$$
where ${\mathcal{I}}_{ik}\left( {\mathcal{W}}_{ik}\right)$ is a dynamic threshold global performance-guaranteed (DTGPG) function similar to that in [12], defined as follows
$$
{\mathcal{I}}_{ik}\left( {\mathcal{W}}_{ik}\right) = \frac{\sqrt{{l}_{ik}}{\mathcal{W}}_{ik}}{\sqrt{1 - {\mathcal{W}}_{ik}^{2}}} \tag{4}
$$
with ${\mathcal{W}}_{ik} = 1/{\mathcal{P}}_{ik}$ , where ${\mathcal{P}}_{ik}$ is a dynamic threshold finite-time prescribed function similar to that in [13]
$$
{\mathcal{P}}_{ik}\left( t\right) = \left\{ \begin{array}{ll} \left( {1 - {\Theta }_{{ik},\infty }}\right) \exp \left( {-{\varrho }_{ik}\frac{{T}_{{ik}, a}}{{T}_{{ik}, a} - t}}\right) + {\Theta }_{{ik},\infty }, & 0 \leq t < {T}_{{ik}, a} \\ {\Theta }_{{ik},\infty }\left( {1 - \frac{{\omega }_{ik}}{2} + \frac{{\omega }_{ik}}{2}\cos \left( {\frac{\pi }{{c}_{ik}}\left( {t - {T}_{{ik}, a}}\right) }\right) }\right) , & {T}_{{ik}, a} \leq t < {T}_{{ik}, b} \\ {\Theta }_{{ik},\infty }\left( {1 - {\omega }_{ik}}\right) , & t \geq {T}_{{ik}, b} \end{array}\right. \tag{5}
$$
where ${l}_{ik}$ and ${\omega }_{ik}$ are positive constants. ${\Theta }_{{ik},\infty } = \mathop{\lim }\limits_{{t \rightarrow \infty }}{\Theta }_{ik}\left( t\right)$ is the steady-state value. ${\varrho }_{ik} > 0$ represents the convergence rate. ${T}_{{ik}, a}$ is the settling time to reach steady state. ${c}_{ik} = {T}_{{ik}, b} - {T}_{{ik}, a}$ is the duration of the dynamic adjustment.
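The three phases of (5) — finite-time convergence, cosine-shaped secondary adjustment, and tightened steady-state bound — can be sketched numerically. All parameter values below are illustrative, not the paper's simulation settings:

```python
import math

# Piecewise prescribed function P(t) from (5); parameter values are
# hypothetical and chosen only to visualize the three phases.
Theta_inf, rho, Ta, Tb, omega = 0.2, 1.0, 2.0, 4.0, 0.5
c = Tb - Ta  # duration of the dynamic adjustment

def P(t):
    if t < Ta:   # finite-time convergence phase
        return (1 - Theta_inf) * math.exp(-rho * Ta / (Ta - t)) + Theta_inf
    if t < Tb:   # cosine transition: secondary adjustment of the bound
        return Theta_inf * (1 - omega / 2 + (omega / 2) * math.cos(math.pi * (t - Ta) / c))
    return Theta_inf * (1 - omega)  # tightened steady-state bound
```

Note that the function is continuous at both switching instants: the exponential term vanishes as $t \rightarrow {T}_{{ik}, a}$ , and the cosine term reaches $-1$ at $t = {T}_{{ik}, b}$ , matching the final constant phase.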
Then, we employ the following barrier function to implement the error constraint in (3)
$$
{\mathcal{Z}}_{ik} = \frac{{\mathcal{J}}_{ik}}{1 - {\mathcal{J}}_{ik}^{2}} \tag{6}
$$
where ${\mathcal{J}}_{ik} = {\mathcal{P}}_{ik}{\mathcal{H}}_{ik}$ with ${\mathcal{H}}_{ik} = {E}_{ik}/\sqrt{{E}_{ik}^{2} + {l}_{ik}}$ . The properties of the barrier function are described in [12].
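A minimal sketch of the transform chain (4)-(6), with an illustrative value of $l$ : the normalized error $\mathcal{H}_{ik}$ stays strictly inside $(-1, 1)$ for any finite error, which is what frees the scheme from initial-condition restrictions, while the barrier state $\mathcal{Z}_{ik}$ grows without bound as $\left| {\mathcal{J}}_{ik}\right| \rightarrow 1$ :

```python
import math

# Barrier-function chain from (4) and (6); l and the test errors are
# illustrative, not the paper's parameters.
l = 1.0

def H(E):
    # Normalized error: |H(E)| < 1 for every finite E.
    return E / math.sqrt(E**2 + l)

def Z(E, P):
    # Barrier state (6) with composite variable J = P * H(E);
    # Z blows up as |J| approaches 1, enforcing the bound (3).
    J = P * H(E)
    return J / (1 - J**2)
```

Because $\left| {\mathcal{H}}_{ik}\right| < 1$ and ${\mathcal{P}}_{ik}\left( 0\right) < 1$ , the composite variable starts inside the barrier for any initial error, which is the "global" property claimed for the DTGPG design.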
## III. Controller Design and Analysis
## A. Smooth Extended State Observer
To facilitate the subsequent strategy design, define ${\mathbf{\Lambda }}_{i} =$ ${r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathcal{T}}_{i} - {D}_{i\theta }{r}_{i}^{2}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{J}}_{i}{\mathbf{R}}_{i}^{-1}{\dot{\mathbf{\eta }}}_{i} - {\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathcal{F}}_{i}{r}_{i}^{2}$ to denote internal uncertainty and external disturbances suffered by the $i$ th WMR. (1) can be reformulated as
$$
\left\{ \begin{array}{l} {\dot{\mathbf{\eta }}}_{i} = {\mathbf{R}}_{i}{\mathbf{\nu }}_{i} \\ {\dot{\mathbf{\nu }}}_{i} = {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} + {\mathbf{\Lambda }}_{i}. \end{array}\right. \tag{7}
$$
Assumption 2: For the multi-WMR system, the unknown total disturbance ${\mathbf{\Lambda }}_{i}$ is smooth and continuous.
Then, we regard the total disturbance ${\mathbf{\Lambda }}_{i}$ as an extended state, and, to avoid unnecessary waste of resources when approximating the disturbance, an ESO based on an event-triggered mechanism is designed as in [5]
$$
\left\{ \begin{array}{l} {\widetilde{\mathbf{\nu }}}_{i}^{s} = {\widehat{\mathbf{\nu }}}_{i} - {\mathbf{\nu }}_{i}^{ \star } \\ {\dot{\widehat{\mathbf{\nu }}}}_{i} = - {\varepsilon }_{i1}{\widetilde{\mathbf{\nu }}}_{i}^{s} + {\widehat{\mathbf{\Lambda }}}_{i} + {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} \\ {\dot{\widehat{\mathbf{\Lambda }}}}_{i} = - {\varepsilon }_{i2}{\widetilde{\mathbf{\nu }}}_{i}^{s} \end{array}\right. \tag{8}
$$
where ${\varepsilon }_{i1}$ and ${\varepsilon }_{i2} \in {\mathbb{R}}^{3 \times 3}$ denote positive diagonal matrices. The variables ${\widehat{\mathbf{\nu }}}_{i} = {\left\lbrack {\widehat{u}}_{i},{\widehat{v}}_{i},{\widehat{w}}_{i}\right\rbrack }^{T} \in {\mathbb{R}}^{3}$ and ${\widehat{\mathbf{\Lambda }}}_{i} = {\left\lbrack {\widehat{\Lambda }}_{iu},{\widehat{\Lambda }}_{iv},{\widehat{\Lambda }}_{iw}\right\rbrack }^{T} \in {\mathbb{R}}^{3}$ denote the estimates of ${\mathbf{\nu }}_{i}$ and ${\mathbf{\Lambda }}_{i}$ , respectively. ${\mathbf{\nu }}_{i}^{ \star } \in {\mathbb{R}}^{3}$ represents the aperiodic sampling of ${\mathbf{\nu }}_{i}$ . The event-triggered mechanism is defined as
$$
\left\{ \begin{array}{l} {\mathbf{\nu }}_{i}^{ \star }\left( t\right) = {\mathbf{\nu }}_{i}\left( {t}_{\varpi }^{{\nu }_{i}}\right) ,\forall t \in \left\lbrack {{t}_{\varpi }^{{\nu }_{i}},{t}_{\varpi + 1}^{{\nu }_{i}}}\right) ,{\widetilde{\mathbf{\nu }}}_{is}\left( t\right) = {\mathbf{\nu }}_{i}^{ \star }\left( t\right) - {\mathbf{\nu }}_{i}\left( t\right) \\ {t}_{\varpi + 1}^{{\nu }_{i}} = \inf \left\{ {t \in \mathbb{R} \mid \begin{Vmatrix}{{\widetilde{\mathbf{\nu }}}_{is}\left( t\right) }\end{Vmatrix} \geq {\mathcal{X}}_{i}}\right\} \end{array}\right. \tag{9}
$$
where ${\mathcal{X}}_{i} \in {\mathbb{R}}^{ + }$ denotes the event-triggering threshold, and ${\widetilde{\mathbf{\nu }}}_{is}\left( t\right)$ denotes the aperiodic sampling error. When $\begin{Vmatrix}{{\widetilde{\mathbf{\nu }}}_{is}\left( t\right) }\end{Vmatrix} \geq {\mathcal{X}}_{i}$ , ${\mathbf{\nu }}_{i}^{ \star }\left( t\right)$ is updated; otherwise, the last updated value is maintained.
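The rule (9) amounts to a hold-and-compare loop. A minimal scalar sketch with a hypothetical sampled signal and threshold:

```python
# Event-triggered sampler per (9): hold the last sample nu_star until the
# sampling error |nu_star - nu| reaches the threshold X. Signal values and
# threshold are hypothetical.
def sample_events(signal, X):
    nu_star = signal[0]              # initial sample at the first instant
    triggers = [0]                   # indices where nu_star is updated
    held = [nu_star]                 # value actually used by the observer
    for k, nu in enumerate(signal[1:], start=1):
        if abs(nu_star - nu) >= X:   # trigger condition in (9)
            nu_star = nu
            triggers.append(k)
        held.append(nu_star)
    return held, triggers

held, triggers = sample_events([0.0, 0.05, 0.12, 0.13, 0.30], X=0.1)
```

Between triggers the observer keeps working with the stale sample, which is exactly where the resource saving — and the chattering risk addressed by the SESO below — comes from.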
Remark 1: In addition to using an ESO to estimate external disturbances, neural networks [14] and neural predictors [15] can also achieve the same objective.
The existing event-triggered ESO [5] suffers from unavoidable chattering when approximating the disturbances. To solve the chattering problem, we design the SESO as follows
$$
\left\{ \begin{array}{l} {\dot{\widehat{\mathbf{\nu }}}}_{i} = - {\varepsilon }_{i1}{\widetilde{\mathbf{\nu }}}_{i}^{s} + {\widehat{\mathbf{\Lambda }}}_{i} + {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} \\ {\dot{\widehat{\mathbf{\Lambda }}}}_{i} = - {\varepsilon }_{i2}\mathcal{B}\left( {\widetilde{\mathbf{\nu }}}_{i}^{s}\right) \end{array}\right. \tag{10}
$$
where $\mathcal{B}\left( {\widetilde{\mathbf{\nu }}}_{i}^{s}\right) = \operatorname{col}\left( {\mathcal{B}\left( {\widetilde{\nu }}_{i\Xi }^{s}\right) }\right) \in {\mathbb{R}}^{3}$ with $\Xi = u, v, w$ is the sigmoid-like function vector, defined as follows
$$
\mathcal{B}\left( {\widetilde{\nu }}_{i\Xi }^{s}\right) = \left\{ \begin{array}{ll} \frac{1 - \exp \left( {-\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }\right) }{1 + \exp \left( {-\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }\right) }\frac{{\widetilde{\nu }}_{i\Xi }^{s}}{\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }, & {\widetilde{\nu }}_{i\Xi }^{s} \neq 0 \\ {\widetilde{\nu }}_{i\Xi }^{s}, & {\widetilde{\nu }}_{i\Xi }^{s} = 0. \end{array}\right. \tag{11}
$$
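For ${\widetilde{\nu }}_{i\Xi }^{s} \neq 0$ , the two factors in (11) combine to $\tanh \left( {\widetilde{\nu }}_{i\Xi }^{s}/2\right)$ : a smooth, odd function bounded in magnitude by 1, in contrast to the discontinuous $\operatorname{sgn}\left( \cdot \right)$ correction that causes chattering. A quick numerical check of this identity:

```python
import math

# Sigmoid-like function B from (11): equals tanh(x/2) for x != 0 and 0 at x = 0,
# so it is smooth at the origin, unlike sgn(x).
def B(x):
    if x == 0:
        return 0.0
    e = math.exp(-abs(x))
    return (1 - e) / (1 + e) * (x / abs(x))
```

The boundedness and smoothness of $\mathcal{B}$ are what let the SESO correction term in (10) stay continuous while still pushing the estimation error toward zero.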
Next, to facilitate the stability analysis of the SESO, define a positive-definite diagonal matrix ${\mathcal{V}}_{i} = \operatorname{diag}\left\{ {\mathcal{V}}_{i\Xi }\right\} \in {\mathbb{R}}^{3 \times 3}$ with
$$
{\mathcal{V}}_{i\Xi } = \left\{ \begin{array}{ll} \frac{1 - \exp \left( {-\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }\right) }{1 + \exp \left( {-\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }\right) }\frac{1}{\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }, & {\widetilde{\nu }}_{i\Xi }^{s} \neq 0 \\ 1, & {\widetilde{\nu }}_{i\Xi }^{s} = 0. \end{array}\right. \tag{12}
$$
Then, (10) can be rewritten as
$$
\left\{ \begin{array}{l} {\dot{\widehat{\mathbf{\nu }}}}_{i} = - {\mathbf{\varepsilon }}_{i1}{\widetilde{\mathbf{\nu }}}_{i} + {\mathbf{\varepsilon }}_{i1}{\widetilde{\mathbf{\nu }}}_{is} + {\widehat{\mathbf{\Lambda }}}_{i} + {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} \\ {\dot{\widehat{\mathbf{\Lambda }}}}_{i} = - {\mathbf{\varepsilon }}_{i2}{\mathbf{V}}_{i}{\widetilde{\mathbf{\nu }}}_{i} + {\mathbf{\varepsilon }}_{i2}{\mathbf{V}}_{i}{\widetilde{\mathbf{\nu }}}_{is} \end{array}\right. \tag{13}
$$
where ${\widetilde{\mathbf{\nu }}}_{i} = {\widehat{\mathbf{\nu }}}_{i} - {\mathbf{\nu }}_{i}$ and ${\widetilde{\mathbf{\Lambda }}}_{i} = {\widehat{\mathbf{\Lambda }}}_{i} - {\mathbf{\Lambda }}_{i}$ . Defining ${\mathcal{N}}_{i1} = {\left\lbrack {\widetilde{\mathbf{\nu }}}_{i}^{T},{\widetilde{\mathbf{\Lambda }}}_{i}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{6}$ , one has
$$
{\dot{\mathcal{N}}}_{i1} = {\mathbf{A}}_{i1}{\mathcal{N}}_{i1} + {\mathbf{B}}_{i1}{\widetilde{\mathbf{\nu }}}_{is} + {\mathbf{C}}_{i1}{\dot{\mathbf{\Lambda }}}_{i} \tag{14}
$$
where
$$
{\mathbf{A}}_{i1} = \left\lbrack \begin{matrix} - {\varepsilon }_{i1}{\mathbf{I}}_{3} & {\mathbf{I}}_{3} \\ - {\varepsilon }_{i2}{\mathbf{V}}_{i} & {\mathbf{O}}_{3} \end{matrix}\right\rbrack ,\;{\mathbf{B}}_{i1} = \left\lbrack \begin{matrix} {\varepsilon }_{i1}{\mathbf{I}}_{3} \\ {\varepsilon }_{i2}{\mathbf{V}}_{i} \end{matrix}\right\rbrack ,\;{\mathbf{C}}_{i1} = \left\lbrack \begin{matrix} {\mathbf{O}}_{3} \\ {\mathbf{I}}_{3} \end{matrix}\right\rbrack .
$$
Note that the matrix ${\mathbf{A}}_{i1}$ is Hurwitz. Hence, there exist a positive-definite matrix ${\mathbf{P}}_{i1}$ and a constant ${\jmath }_{i1} > 0$ satisfying the following inequality
$$
{\mathbf{A}}_{i1}^{T}{\mathbf{P}}_{i1} + {\mathbf{P}}_{i1}{\mathbf{A}}_{i1} \leq - {\jmath }_{i1}{\mathbf{I}}_{6}. \tag{15}
$$
Lemma 1: The system (14) is ISS.

Proof: Consider a Lyapunov function candidate as follows
$$
{V}_{1} = \frac{1}{2}\mathop{\sum }\limits_{{i = 1}}^{N}{\mathcal{N}}_{i1}^{T}{\mathbf{P}}_{i1}{\mathcal{N}}_{i1}. \tag{16}
$$
The time derivative of ${V}_{1}$ along (14), combined with (15), satisfies
$$
{\dot{V}}_{1} \leq - \frac{{\jmath }_{1}}{2}{\begin{Vmatrix}{\mathcal{N}}_{1}\end{Vmatrix}}^{2} + \begin{Vmatrix}{\mathcal{N}}_{1}\end{Vmatrix}\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{B}}_{1}}\end{Vmatrix}\begin{Vmatrix}{\widetilde{\mathbf{\nu }}}_{s}\end{Vmatrix} + \begin{Vmatrix}{\mathcal{N}}_{1}\end{Vmatrix}\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{C}}_{1}}\end{Vmatrix}\begin{Vmatrix}\dot{\mathbf{\Lambda }}\end{Vmatrix} \tag{17}
$$
where ${\jmath }_{1} = \mathop{\min }\limits_{{i = 1,\ldots , N}}\left( {\jmath }_{i1}\right)$ , ${\mathcal{N}}_{1} = {\left\lbrack {\mathcal{N}}_{11}^{T},\ldots ,{\mathcal{N}}_{N1}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{6N}$ , ${\widetilde{\mathbf{\nu }}}_{s} = {\left\lbrack {\widetilde{\mathbf{\nu }}}_{1s}^{T},\ldots ,{\widetilde{\mathbf{\nu }}}_{Ns}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$ , $\dot{\mathbf{\Lambda }} = {\left\lbrack {\dot{\mathbf{\Lambda }}}_{1}^{T},\ldots ,{\dot{\mathbf{\Lambda }}}_{N}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$ , ${\mathbf{P}}_{1} = \operatorname{diag}\left\{ {{\mathbf{P}}_{11},\ldots ,{\mathbf{P}}_{N1}}\right\} \in {\mathbb{R}}^{{6N} \times {6N}}$ , ${\mathbf{B}}_{1} = \operatorname{diag}\left\{ {{\mathbf{B}}_{11},\ldots ,{\mathbf{B}}_{N1}}\right\} \in {\mathbb{R}}^{{6N} \times {3N}}$ , and ${\mathbf{C}}_{1} = \operatorname{diag}\left\{ {{\mathbf{C}}_{11},\ldots ,{\mathbf{C}}_{N1}}\right\} \in {\mathbb{R}}^{{6N} \times {3N}}$ . Since $\begin{Vmatrix}{\mathcal{N}}_{1}\end{Vmatrix} \geq 2\left( {\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{B}}_{1}}\end{Vmatrix}\begin{Vmatrix}{\widetilde{\mathbf{\nu }}}_{s}\end{Vmatrix} + \begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{C}}_{1}}\end{Vmatrix}\begin{Vmatrix}\dot{\mathbf{\Lambda }}\end{Vmatrix}}\right) /{\jmath }_{1}{\sigma }_{1}$ , one has ${\dot{V}}_{1} \leq - {\jmath }_{1}\left( {1 - {\sigma }_{1}}\right) {\begin{Vmatrix}{\mathcal{N}}_{1}\end{Vmatrix}}^{2}/2$ , where $0 < {\sigma }_{1} < 1$ . It follows that the subsystem (14) is ISS. There exist a $\mathcal{K}\mathcal{L}$ function ${\mathcal{Y}}_{1}\left( \cdot \right)$ and ${\mathcal{K}}_{\infty }$ functions ${\mathcal{C}}^{{\widetilde{\mathbf{\nu }}}_{s}}\left( \cdot \right)$ and ${\mathcal{C}}^{\dot{\mathbf{\Lambda }}}\left( \cdot \right)$ satisfying $\begin{Vmatrix}{{\mathcal{N}}_{1}\left( t\right) }\end{Vmatrix} \leq {\mathcal{Y}}_{1}\left( {\begin{Vmatrix}{{\mathcal{N}}_{1}\left( 0\right) }\end{Vmatrix}, t}\right) + {\mathcal{C}}^{{\widetilde{\mathbf{\nu }}}_{s}}\left( \begin{Vmatrix}{\widetilde{\mathbf{\nu }}}_{s}\end{Vmatrix}\right) + {\mathcal{C}}^{\dot{\mathbf{\Lambda }}}\left( \begin{Vmatrix}\dot{\mathbf{\Lambda }}\end{Vmatrix}\right)$ , where ${\mathcal{C}}^{{\widetilde{\mathbf{\nu }}}_{s}}\left( s\right) = \left( {{2s}\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{B}}_{1}}\end{Vmatrix}\sqrt{{\lambda }_{\max }\left( {\mathbf{P}}_{1}\right) }}\right) /\left( {{\jmath }_{1}{\sigma }_{1}\sqrt{{\lambda }_{\min }\left( {\mathbf{P}}_{1}\right) }}\right)$ and ${\mathcal{C}}^{\dot{\mathbf{\Lambda }}}\left( s\right) = \left( {{2s}\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{C}}_{1}}\end{Vmatrix}\sqrt{{\lambda }_{\max }\left( {\mathbf{P}}_{1}\right) }}\right) /\left( {{\jmath }_{1}{\sigma }_{1}\sqrt{{\lambda }_{\min }\left( {\mathbf{P}}_{1}\right) }}\right)$ .
## B. Design of Guidance Law and Control Law
In this section, we design the DTGPG-based guidance law and the SESO-based control law. First, we design the guidance law. The time derivative of (6) is represented by
$$
{\dot{\mathcal{Z}}}_{ik} = {\mu }_{ik}{\mathcal{P}}_{ik}{\rho }_{ik}{\dot{E}}_{ik} + {\mu }_{ik}{\dot{\mathcal{P}}}_{ik}{\mathcal{H}}_{ik} \tag{18}
$$
where ${\mu }_{ik} = \left( {1 + {\mathcal{J}}_{ik}^{2}}\right) /{\left( 1 - {\mathcal{J}}_{ik}^{2}\right) }^{2}$ and ${\rho }_{ik} = {l}_{ik}/\left( {\sqrt{{E}_{ik}^{2} + {l}_{ik}}\left( {{E}_{ik}^{2} + {l}_{ik}}\right) }\right)$ .
Next, to simplify the design of the controller, we rewrite (18) in a vector form
$$
{\dot{\mathcal{Z}}}_{i} = {\mathbf{\mu }}_{i1}{\dot{\mathbf{E}}}_{i} + {\mathbf{\mu }}_{i2} \tag{19}
$$
where ${\mathcal{Z}}_{i} = {\left\lbrack {\mathcal{Z}}_{ix},{\mathcal{Z}}_{iy},{\mathcal{Z}}_{i\psi }\right\rbrack }^{T} \in {\mathbb{R}}^{3}$ , ${\mathbf{E}}_{i} = {\left\lbrack {E}_{ix},{E}_{iy},{E}_{i\psi }\right\rbrack }^{T} \in {\mathbb{R}}^{3}$ , ${\mathbf{\mu }}_{i1} = \operatorname{diag}\left\{ {{\mu }_{ix}{\mathcal{P}}_{ix}{\rho }_{ix},{\mu }_{iy}{\mathcal{P}}_{iy}{\rho }_{iy},{\mu }_{i\psi }{\mathcal{P}}_{i\psi }{\rho }_{i\psi }}\right\} \in {\mathbb{R}}^{3 \times 3}$ , and ${\mathbf{\mu }}_{i2} = \operatorname{col}\left( {{\mu }_{ix}{\dot{\mathcal{P}}}_{ix}{\mathcal{H}}_{ix},{\mu }_{iy}{\dot{\mathcal{P}}}_{iy}{\mathcal{H}}_{iy},{\mu }_{i\psi }{\dot{\mathcal{P}}}_{i\psi }{\mathcal{H}}_{i\psi }}\right) \in {\mathbb{R}}^{3}$ .
Taking the time derivative of (2) along (1) yields
$$
{\dot{\mathbf{E}}}_{i} = {\iota }_{i}{\mathbf{R}}_{i}{\mathbf{\nu }}_{i} - \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\mathbf{R}}_{j}{\mathbf{\nu }}_{j} - \mathop{\sum }\limits_{{j = N + 1}}^{M}{a}_{ij}{\dot{\mathbf{\eta }}}_{jr} \tag{20}
$$
where ${\iota }_{i} = \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij} + \mathop{\sum }\limits_{{j = N + 1}}^{M}{a}_{ij}$ . Substituting (20) into (19) results in
$$
{\dot{\mathcal{Z}}}_{i} = {\mathbf{\mu }}_{i1}\left( {{\iota }_{i}{\mathbf{R}}_{i}{\mathbf{\nu }}_{i} - \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\mathbf{R}}_{j}{\mathbf{\nu }}_{j} - \mathop{\sum }\limits_{{j = N + 1}}^{M}{a}_{ij}{\dot{\mathbf{\eta }}}_{jr}}\right) + {\mathbf{\mu }}_{i2}. \tag{21}
$$
From (21), the DTGPG-based guidance law is chosen as
$$
{\mathbf{\alpha }}_{i} = \frac{{\mathbf{R}}_{i}^{-1}}{{\iota }_{i}}\left( {\mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\mathbf{R}}_{j}{\widehat{\mathbf{\nu }}}_{j} + \mathop{\sum }\limits_{{j = N + 1}}^{M}{a}_{ij}{\dot{\mathbf{\eta }}}_{jr} - {\mathbf{\mu }}_{i1}^{-1}\left( {{\mathbf{\kappa }}_{i1}{\mathcal{Z}}_{i} + {\mathbf{\mu }}_{i2}}\right) }\right) . \tag{22}
$$
We substitute (22) into (21), and it follows that
$$
{\dot{\mathcal{Z}}}_{i} = {\mathbf{\mu }}_{i1}\mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\mathbf{R}}_{j}{\widetilde{\mathbf{\nu }}}_{j} - {\mathbf{\kappa }}_{i1}{\mathcal{Z}}_{i} \tag{23}
$$
with ${\kappa }_{i1} \in {\mathbb{R}}^{3 \times 3}$ being a positive diagonal matrix.
Differing from the first-order low-pass filter in traditional dynamic surface control (DSC), a second-order linear tracking differentiator (LTD) with respect to ${\mathbf{\alpha }}_{i}$ is introduced
$$
\left\{ \begin{array}{l} {\dot{\mathbf{\alpha }}}_{if} = {\mathbf{\alpha }}_{if}^{ * } \\ {\dot{\mathbf{\alpha }}}_{if}^{ * } = - {\gamma }_{i}^{2}\left( {\left( {{\mathbf{\alpha }}_{if} - {\mathbf{\alpha }}_{i}}\right) + 2\left( {{\mathbf{\alpha }}_{if}^{ * }/{\gamma }_{i}}\right) }\right) \end{array}\right. \tag{24}
$$
where ${\mathbf{\alpha }}_{if}^{ * } \in {\mathbb{R}}^{3}$ is the filtered estimate of ${\dot{\mathbf{\alpha }}}_{i}$ , and ${\gamma }_{i} \in {\mathbb{R}}^{ + }$ is the differentiator gain.
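An Euler-discretized sketch of the LTD (24) tracking a constant command (gain and step size are illustrative): both states settle, with ${\mathbf{\alpha }}_{if} \rightarrow {\mathbf{\alpha }}_{i}$ and ${\mathbf{\alpha }}_{if}^{ * } \rightarrow {\dot{\mathbf{\alpha }}}_{i} = 0$ .

```python
# Euler simulation of the second-order LTD (24) for a constant scalar
# command alpha = 1.0; gamma and dt are hypothetical tuning values.
gamma, dt = 10.0, 1e-3
alpha = 1.0
af, af_star = 0.0, 0.0          # states: filtered alpha and its derivative
for _ in range(20000):          # 20 s of simulated time
    daf = af_star
    daf_star = -gamma**2 * ((af - alpha) + 2.0 * af_star / gamma)
    af += dt * daf
    af_star += dt * daf_star
# The continuous-time dynamics have a double pole at -gamma, so the
# filtering error decays without oscillation.
```

The critically damped pole placement at $-{\gamma }_{i}$ (double) is the design choice behind the $2{\mathbf{\alpha }}_{if}^{ * }/{\gamma }_{i}$ damping term in (24).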
Second, we design the control law. Defining the velocity error ${\mathcal{Z}}_{ie} = {\mathbf{\nu }}_{i} - {\mathbf{\alpha }}_{i} \in {\mathbb{R}}^{3}$ , the derivative ${\dot{\mathcal{Z}}}_{ie}$ along (7) satisfies
$$
{\dot{\mathcal{Z}}}_{ie} = {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} + {\mathbf{\Lambda }}_{i} - {\dot{\mathbf{\alpha }}}_{i}. \tag{25}
$$
Then, we design the SESO-based control law to stabilize (25)
$$
{\mathbf{\tau }}_{i} = \frac{{\mathbf{M}}_{i}{\mathbf{J}}_{i}}{{r}_{i}}\left( {{\mathbf{\alpha }}_{if}^{ * } - {\widehat{\mathbf{\Lambda }}}_{i} - {\mathbf{\kappa }}_{i2}{\mathbf{\mathcal{Z}}}_{ie}}\right) \tag{26}
$$
with ${\kappa }_{i2} \in {\mathbb{R}}^{3 \times 3}$ being a positive diagonal matrix.
The dynamics of ${\mathcal{Z}}_{ie}$ is further obtained by substituting (26) into (25)
$$
{\dot{\mathcal{Z}}}_{ie} = {\widetilde{\mathbf{\alpha }}}_{i}^{ * } - {\widetilde{\mathbf{\Lambda }}}_{i} - {\mathbf{\kappa }}_{i2}{\mathcal{Z}}_{ie} \tag{27}
$$
where ${\widetilde{\mathbf{\alpha }}}_{i}^{ * } = {\mathbf{\alpha }}_{if}^{ * } - {\dot{\mathbf{\alpha }}}_{i}$ .
From (23) and (27), we can obtain the following subsystems
$$
\left\{ \begin{array}{l} {\dot{\mathcal{Z}}}_{i} = {\mathbf{\mu }}_{i1}\mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\mathbf{R}}_{j}{\widetilde{\mathbf{\nu }}}_{j} - {\mathbf{\kappa }}_{i1}{\mathcal{Z}}_{i} \\ {\dot{\mathcal{Z}}}_{ie} = {\widetilde{\mathbf{\alpha }}}_{i}^{ * } - {\widetilde{\mathbf{\Lambda }}}_{i} - {\mathbf{\kappa }}_{i2}{\mathcal{Z}}_{ie}. \end{array}\right. \tag{28}
$$
Lemma 2: The system (28) is ISS.
Proof: Consider a Lyapunov function candidate as ${V}_{2} = \left( {1/2}\right) \mathop{\sum }\limits_{{i = 1}}^{N}\left( {{\mathcal{Z}}_{i}^{T}{\mathcal{Z}}_{i} + {\mathcal{Z}}_{ie}^{T}{\mathcal{Z}}_{ie}}\right)$ . The time derivative of ${V}_{2}$ based on (28) satisfies
$$
{\dot{V}}_{2} \leq - {n}_{1}{\begin{Vmatrix}\mathcal{Z}\end{Vmatrix}}^{2} - {n}_{2}{\begin{Vmatrix}{\mathcal{Z}}_{e}\end{Vmatrix}}^{2} + {n}_{3}{n}^{ * }\begin{Vmatrix}\mathcal{Z}\end{Vmatrix}\begin{Vmatrix}\widetilde{\mathbf{\nu }}\end{Vmatrix} + \begin{Vmatrix}{\mathcal{Z}}_{e}\end{Vmatrix}\begin{Vmatrix}{\widetilde{\mathbf{\alpha }}}^{ * }\end{Vmatrix} + \begin{Vmatrix}{\mathcal{Z}}_{e}\end{Vmatrix}\begin{Vmatrix}\widetilde{\mathbf{\Lambda }}\end{Vmatrix} \tag{29}
$$
where ${n}_{1} = {\lambda }_{\min }\left( {\mathbf{\kappa }}_{1}\right)$ with ${\mathbf{\kappa }}_{1} = \operatorname{diag}\left\{ {{\mathbf{\kappa }}_{11},\ldots ,{\mathbf{\kappa }}_{N1}}\right\} \in {\mathbb{R}}^{{3N} \times {3N}}$ , ${n}_{2} = {\lambda }_{\min }\left( {\mathbf{\kappa }}_{2}\right)$ with ${\mathbf{\kappa }}_{2} = \operatorname{diag}\left\{ {{\mathbf{\kappa }}_{12},\ldots ,{\mathbf{\kappa }}_{N2}}\right\} \in {\mathbb{R}}^{{3N} \times {3N}}$ , ${n}_{3} = \mathop{\max }\limits_{{i = 1,\ldots , N}}\left( {{\lambda }_{\max }\left( {\mathbf{\mu }}_{i1}\right) }\right)$ , and ${n}^{ * } = \mathop{\max }\limits_{{i = 1,\ldots , N}}\left( {n}_{i}^{ * }\right)$ with ${n}_{i}^{ * } = \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ji}$ . Here $\mathcal{Z} = {\left\lbrack {\mathcal{Z}}_{1}^{T},\ldots ,{\mathcal{Z}}_{N}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$ , ${\mathcal{Z}}_{e} = {\left\lbrack {\mathcal{Z}}_{1e}^{T},\ldots ,{\mathcal{Z}}_{Ne}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$ , $\widetilde{\mathbf{\nu }} = {\left\lbrack {\widetilde{\mathbf{\nu }}}_{1}^{T},\ldots ,{\widetilde{\mathbf{\nu }}}_{N}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$ , ${\widetilde{\mathbf{\alpha }}}^{ * } = {\left\lbrack {\widetilde{\mathbf{\alpha }}}_{1}^{*T},\ldots ,{\widetilde{\mathbf{\alpha }}}_{N}^{*T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$ , and $\widetilde{\mathbf{\Lambda }} = {\left\lbrack {\widetilde{\mathbf{\Lambda }}}_{1}^{T},\ldots ,{\widetilde{\mathbf{\Lambda }}}_{N}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$ .
Define $n = \min \left( {{n}_{1},{n}_{2}}\right)$ and ${\mathcal{N}}_{2} = {\left\lbrack \begin{Vmatrix}\mathcal{Z}\end{Vmatrix},\begin{Vmatrix}{\mathcal{Z}}_{e}\end{Vmatrix}\right\rbrack }^{T} \in {\mathbb{R}}^{2}$ . Then, (29) can be further written as
$$
{\dot{V}}_{2} \leq - n{\begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix}}^{2} + {n}_{3}{n}^{ * }\begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix}\begin{Vmatrix}\widetilde{\mathbf{\nu }}\end{Vmatrix} + \begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix}\begin{Vmatrix}{\widetilde{\mathbf{\alpha }}}^{ * }\end{Vmatrix} + \begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix}\begin{Vmatrix}\widetilde{\mathbf{\Lambda }}\end{Vmatrix}. \tag{30}
$$
Since $\begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix} \geq 2\left( {{n}_{3}{n}^{ * }\begin{Vmatrix}\widetilde{\mathbf{\nu }}\end{Vmatrix} + \begin{Vmatrix}{\widetilde{\mathbf{\alpha }}}^{ * }\end{Vmatrix} + \begin{Vmatrix}\widetilde{\mathbf{\Lambda }}\end{Vmatrix}}\right) /n$ , one has ${\dot{V}}_{2} \leq - n{\begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix}}^{2}/2$ . It follows that the subsystem (28) is ISS. There exist a $\mathcal{K}\mathcal{L}$ function ${\mathcal{Y}}_{2}\left( \cdot \right)$ and ${\mathcal{K}}_{\infty }$ functions ${\mathcal{C}}^{\widetilde{\mathbf{\nu }}}\left( \cdot \right)$ , ${\mathcal{C}}^{{\widetilde{\mathbf{\alpha }}}^{ * }}\left( \cdot \right)$ , and ${\mathcal{C}}^{\widetilde{\mathbf{\Lambda }}}\left( \cdot \right)$ satisfying $\begin{Vmatrix}{{\mathcal{N}}_{2}\left( t\right) }\end{Vmatrix} \leq {\mathcal{Y}}_{2}\left( {\begin{Vmatrix}{{\mathcal{N}}_{2}\left( 0\right) }\end{Vmatrix}, t}\right) + {\mathcal{C}}^{\widetilde{\mathbf{\nu }}}\left( \begin{Vmatrix}\widetilde{\mathbf{\nu }}\end{Vmatrix}\right) + {\mathcal{C}}^{{\widetilde{\mathbf{\alpha }}}^{ * }}\left( \begin{Vmatrix}{\widetilde{\mathbf{\alpha }}}^{ * }\end{Vmatrix}\right) + {\mathcal{C}}^{\widetilde{\mathbf{\Lambda }}}\left( \begin{Vmatrix}\widetilde{\mathbf{\Lambda }}\end{Vmatrix}\right)$ , where ${\mathcal{C}}^{\widetilde{\mathbf{\nu }}}\left( s\right) = 2{n}_{3}{n}^{ * }s/n$ , ${\mathcal{C}}^{{\widetilde{\mathbf{\alpha }}}^{ * }}\left( s\right) = {2s}/n$ , and ${\mathcal{C}}^{\widetilde{\mathbf{\Lambda }}}\left( s\right) = {2s}/n$ .

Fig. 1. Circular formation using the proposed method.
Theorem 1: For the multi-WMR system (1) with arbitrary initial conditions, the closed-loop system consisting of the SESO (10), the DTGPG-based guidance law (22), and the SESO-based control law (26) is ISS. Moreover, Zeno behavior is avoided.
Proof: The ISS properties of subsystems (14) and (28) are proven in Lemma 1 and Lemma 2, respectively. The states of subsystem (14), $\widetilde{\mathbf{\nu }}$ and $\widetilde{\mathbf{\Lambda }}$ , are inputs of subsystem (28). Under Assumptions 1-2, according to the cascade stability theorem, the closed-loop system is ISS. This yields the ultimate boundedness of $\begin{Vmatrix}{{\mathcal{N}}_{2}\left( t\right) }\end{Vmatrix}$ as $t \rightarrow \infty$
$$
{\begin{Vmatrix}{\mathcal{N}}_{2}\left( t\right) \end{Vmatrix}}_{t \rightarrow \infty } \leq \frac{2\begin{Vmatrix}{\widetilde{\mathbf{\alpha }}}^{ * }\end{Vmatrix}}{n} + {\mathcal{H}}^{ * }\left( {\begin{Vmatrix}{\widetilde{\mathbf{\nu }}}_{s}\end{Vmatrix}\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{B}}_{1}}\end{Vmatrix} + \begin{Vmatrix}\dot{\mathbf{\Lambda }}\end{Vmatrix}\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{C}}_{1}}\end{Vmatrix}}\right) \tag{31}
$$
with ${\mathcal{H}}^{ * } = \left( {4\left( {{n}_{3}{n}^{ * } + 1}\right) \sqrt{{\lambda }_{\max }\left( {\mathbf{P}}_{1}\right) }}\right) /\left( {n{\jmath }_{1}{\sigma }_{1}\sqrt{{\lambda }_{\min }\left( {\mathbf{P}}_{1}\right) }}\right)$ . The detailed proof that Zeno behavior is avoided can be found in [5]. The proof of Theorem 1 is complete.
## IV. Simulation Results
As shown in Fig. 1, we consider a communication topology consisting of three followers ${n}_{1},{n}_{2}$ , and ${n}_{3}$ , as well as two virtual leaders ${n}_{4}$ and ${n}_{5}$ , to verify the effectiveness of the proposed controller. The physical parameters of the WMR can be found in [10], and the external disturbance is similar to that in [16]. The initial values of the three followers are chosen as ${\mathbf{\eta }}_{1}\left( 0\right) = {\left\lbrack 0,0,3\pi /2\right\rbrack }^{T}$ , ${\mathbf{\eta }}_{2}\left( 0\right) = {\left\lbrack 2, - {10},\pi /2\right\rbrack }^{T}$ , and ${\mathbf{\eta }}_{3}\left( 0\right) = {\left\lbrack 2, - {17},4\pi /3\right\rbrack }^{T}$ . The trajectories of the two virtual leaders are chosen as
$$
\left\{ \begin{array}{l} {\mathbf{\eta }}_{4r} = {\left\lbrack -5\sin \left( {0.2}t\right) , - 5\cos \left( {0.2}t\right) ,\operatorname{atan}2\left( {\dot{\eta }}_{4y},{\dot{\eta }}_{4x}\right) \right\rbrack }^{T} \\ {\mathbf{\eta }}_{5r} = {\left\lbrack -{15}\sin \left( {0.2}t\right) , - {15}\cos \left( {0.2}t\right) ,\operatorname{atan}2\left( {\dot{\eta }}_{5y},{\dot{\eta }}_{5x}\right) \right\rbrack }^{T}. \end{array}\right.
$$
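As a quick numerical check, the leader trajectories above can be evaluated directly; the sketch below (function and variable names are ours, not from the paper) computes a leader pose at a given time, with the heading obtained from the velocity direction via atan2:

```python
import math

def leader_pose(radius, t, omega=0.2):
    """Pose [x, y, psi] of a virtual leader on a circle of the given
    radius, matching eta_4r (radius = 5) and eta_5r (radius = 15)."""
    x = -radius * math.sin(omega * t)
    y = -radius * math.cos(omega * t)
    # Heading follows the velocity direction: psi = atan2(y_dot, x_dot).
    x_dot = -radius * omega * math.cos(omega * t)
    y_dot = radius * omega * math.sin(omega * t)
    return [x, y, math.atan2(y_dot, x_dot)]

eta4 = leader_pose(5.0, 0.0)   # eta_4r at t = 0
eta5 = leader_pose(15.0, 0.0)  # eta_5r at t = 0
```

Both leaders share the same angular rate, so their headings coincide and the followers are guided between two concentric circles.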
The main design parameters are set as ${\kappa }_{11} = \operatorname{diag}\{ {12},7,{10}\}$ , ${\kappa }_{21} = \operatorname{diag}\{ 7,7,{10}\}$ , ${\kappa }_{31} = \operatorname{diag}\{ {12},9,{10}\}$ , ${\kappa }_{i2} = \operatorname{diag}\{ {20},{20},{20}\}$ , ${\varepsilon }_{i1} = \operatorname{diag}\{ 2,2,2\}$ , ${\varepsilon }_{i2} = \operatorname{diag}\{ {40},{40},{40}\}$ , ${T}_{{1x}, a} = {T}_{{1\psi }, a} = {T}_{{2x}, a} = {T}_{{2\psi }, a} = {T}_{{3x}, a} = {T}_{{3\psi }, a} = {0.5}$ , ${T}_{{1y}, a} = {T}_{{2y}, a} = {T}_{{3y}, a} = 1$ , ${T}_{{1x}, b} = {T}_{{2x}, b} = {T}_{{3x}, b} = {0.7}$ , ${T}_{{1y}, b} = {T}_{{2y}, b} = {T}_{{3y}, b} = {1.2}$ , ${T}_{{1\psi }, b} = {T}_{{2\psi }, b} = {T}_{{3\psi }, b} = {1.5}$ , ${\omega }_{ik} = {0.7}$ , ${\Theta }_{{ik},\infty } = {0.9}$ , ${\varrho }_{ik} = 2$ , ${l}_{ik} = {10}$ , ${\mathcal{X}}_{1} = {\mathcal{X}}_{2} = {\mathcal{X}}_{3} = {0.06}$ .

Fig. 2. Tracking errors using the DTGPG.

Fig. 3. The estimated disturbances using the SESO.

Fig. 4. The number of triggering events.
Simulation results are depicted in Figs. 1-4. Fig. 1 demonstrates that the three vehicles form a circular formation guided by the two virtual leaders. Fig. 2 shows that, under the proposed DTGPG control scheme, the tracking profile is not constrained by the initial value and the performance boundaries can be adjusted dynamically. Fig. 3 shows that the SESO not only estimates internal uncertainties and external disturbances but also reduces chattering. Fig. 4 shows the number of triggering events: ${\nu }_{1}^{ \star }$ , ${\nu }_{2}^{ \star }$ , and ${\nu }_{3}^{ \star }$ are triggered 179, 213, and 211 times, respectively. Compared with periodic sampling, which would require 2800 updates, the event-triggered mechanism effectively saves resources.
## V. Conclusion
In this paper, the dynamic threshold global prescribed performance formation control problem was investigated for WMRs in the presence of unknown total disturbances. A dynamic threshold global performance-guaranteed formation control method based on the SESO was proposed, with three advantages: 1) it can adjust the steady-state performance boundary a second time, 2) it resolves the initial value constraints present in standard PPC, and 3) it mitigates the chattering problem of the event-triggered ESO. The cascade system consisting of the SESO, the DTGPG-based guidance law, and the SESO-based control law was proved to be ISS. The main results were demonstrated by simulation examples.
## REFERENCES
[1] S.-L. Dai, S. He, X. Chen, and X. Jin, "Adaptive leader-follower formation control of nonholonomic mobile robots with prescribed transient and steady-state performance," IEEE Transactions on Industrial Informatics, vol. 16, no. 6, pp. 3662-3671, 2019.

[2] S. Chang, Y. Wang, Z. Zuo, and H. Yang, "Fixed-time formation control for wheeled mobile robots with prescribed performance," IEEE Transactions on Control Systems Technology, vol. 30, no. 2, pp. 844-851, 2021.

[3] S.-L. Dai, K. Lu, and X. Jin, "Fixed-time formation control of unicycle-type mobile robots with visibility and performance constraints," IEEE Transactions on Industrial Electronics, vol. 68, no. 12, pp. 12615-12625, 2020.

[4] L. Liu, D. Wang, and Z. Peng, "State recovery and disturbance estimation of unmanned surface vehicles based on nonlinear extended state observers," Ocean Engineering, vol. 171, pp. 625-632, 2019.

[5] C. Wang, D. Wang, and Z. Peng, "Distributed output-feedback control of unmanned container transporter platooning with uncertainties and disturbances using event-triggered mechanism," IEEE Transactions on Vehicular Technology, vol. 71, no. 1, pp. 162-170, 2021.

[6] J. Li, J. Du, and C. P. Chen, "Command-filtered robust adaptive NN control with the prescribed performance for the 3-D trajectory tracking of underactuated AUVs," IEEE Transactions on Neural Networks and Learning Systems, vol. 33, no. 11, pp. 6545-6557, 2021.

[7] W. Wu, R. Ji, W. Zhang, and Y. Zhang, "Transient-reinforced tunnel coordinated control of underactuated marine surface vehicles with actuator faults," IEEE Transactions on Intelligent Transportation Systems, vol. 25, no. 2, pp. 1872-1881, 2024.

[8] D. Wu, Y. Zhang, W. Wu, E. Q. Wu, and W. Zhang, "Tunnel prescribed performance control for distributed path maneuvering of multi-UAV swarms via distributed neural predictor," IEEE Transactions on Circuits and Systems II: Express Briefs, 2024, doi:10.1109/TCSII.2024.3371981.

[9] W. Wu, D. Wu, Y. Zhang, S. Chen, and W. Zhang, "Safety-critical trajectory tracking for mobile robots with guaranteed performance," IEEE/CAA Journal of Automatica Sinica, 2024, doi:10.1109/JAS.2023.123864.

[10] D. Yu, C. P. Chen, and H. Xu, "Fuzzy swarm control based on sliding-mode strategy with self-organized omnidirectional mobile robots system," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 52, no. 4, pp. 2262-2274, 2021.

[11] Z. Peng, J. Wang, and D. Wang, "Distributed containment maneuvering of multiple marine vessels via neurodynamics-based output feedback," IEEE Transactions on Industrial Electronics, vol. 64, no. 5, pp. 3831-3839, 2017.

[12] K. Zhao, Y. Song, C. P. Chen, and L. Chen, "Adaptive asymptotic tracking with global performance for nonlinear systems with unknown control directions," IEEE Transactions on Automatic Control, vol. 67, no. 3, pp. 1566-1573, 2021.

[13] X. Liu, H. Zhang, J. Sun, and X. Guo, "Dynamic threshold finite-time prescribed performance control for nonlinear systems with dead-zone output," IEEE Transactions on Cybernetics, vol. 54, no. 1, pp. 655-664, 2023.

[14] T.-S. Li, D. Wang, G. Feng, and S.-C. Tong, "A DSC approach to robust adaptive NN tracking control for strict-feedback nonlinear systems," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 40, no. 3, pp. 915-927, 2009.

[15] Y. Zhang, W. Wu, W. Chen, H. Lu, and W. Zhang, "Output-feedback consensus maneuvering of uncertain MIMO strict-feedback multiagent systems based on a high-order neural observer," IEEE Transactions on Cybernetics, 2024, doi:10.1109/TCYB.2024.3351476.

[16] T. Zhao, X. Zou, and S. Dian, "Fixed-time observer-based adaptive fuzzy tracking control for Mecanum-wheel mobile robots with guaranteed transient performance," Nonlinear Dynamics, pp. 1-17, 2022.
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/7LL9KbT9ro/Initial_manuscript_tex/Initial_manuscript.tex
§ DYNAMIC THRESHOLD GLOBAL PERFORMANCE-GUARANTEED FORMATION CONTROL FOR WHEELED MOBILE ROBOTS WITH SMOOTH EXTENDED STATE OBSERVER

${1}^{\text{ st }}$ Minjing Wang, School of Information and Communication Engineering, Hainan University, Haikou, China, mjwang@hainanu.edu.cn

${2}^{\text{ nd }}$ Di Wu, School of Information and Communication Engineering, Hainan University, Haikou, China, hainuwudi@hainanu.edu.cn

${3}^{\text{ rd }}$ Yibo Zhang, Department of Automation, Shanghai Jiao Tong University, Shanghai, China, zhang297@sjtu.edu.cn

${4}^{\text{ th }}$ Wenlong Feng, School of Information and Communication Engineering, Hainan University, Haikou, China, fwlfwl@163.com
Abstract-In this paper, a dynamic threshold global performance-guaranteed formation control method is proposed for wheeled mobile robots (WMRs). Unlike existing prescribed performance formation control methods that are constrained by initial values, we design a dynamic threshold global performance-guaranteed (DTGPG) function that removes the initial value constraints while allowing a secondary adjustment of the steady-state performance boundaries. Moreover, we design a smooth extended state observer (SESO) based on a sigmoid-like function to mitigate the chattering problem of the existing event-triggered ESO. Then, a DTGPG-based guidance law and an SESO-based control law are designed to implement the formation control. The proof shows that the total closed-loop system is input-to-state stable (ISS). Simulations confirm the benefits and validity of the proposed control method.

Index Terms-WMRs, dynamic threshold global performance-guaranteed function, formation control, SESO

§ I. INTRODUCTION
Formation control of multiple wheeled mobile robots (WMRs) places extremely high demands on transient and steady-state performance. In the transient phase, small overshoots and fast convergence can avoid collisions between WMRs. In the steady-state phase, high-accuracy tracking can significantly improve overall coordination and task execution efficiency. Therefore, it is crucial to prescribe the performance of the multi-WMR system. In [1], a collision-avoidance prescribed performance control (PPC) method is proposed for WMR formations, which guarantees the performance of the multi-WMR system by adding communication limits and collision limits to the prescribed performance function. In [2], a fixed-time performance-guaranteed formation control problem for multi-WMRs is investigated, which achieves fixed-time convergence by introducing a segmented time-varying function into the performance function. In [3], a field-of-view-constrained performance-guaranteed formation control method is proposed for multi-WMRs, which designs a guaranteed performance function that considers leader-follower distance maintenance to avoid collisions. Although the above works [1]-[3] can effectively improve the performance of multi-WMRs, two points still need improvement: 1) they are all subject to initial conditions, which increases human intervention in practical applications, i.e., the starting positions of the WMRs must be calculated in advance; 2) the standard PPC cannot perform a secondary adjustment of the performance boundaries after reaching the steady state.
On the other hand, when performing tasks in complex environments, frozen and uneven road surfaces are usually encountered. These disturbances may affect the stability of WMR formations. Therefore, how to quickly and accurately estimate external disturbances is also crucial. In [4], a nonlinear extended state observer (ESO) is proposed to estimate the external disturbance, which recovers the velocity and estimates the disturbance from position and heading errors; a finite-time ESO is then designed to improve the estimation rate. In [5], an event-triggered ESO is designed to adjust the allocation of resources. Note that the event-triggered ESO [5] can save resources when estimating disturbances, but inevitably suffers from chattering.
Inspired by the aforementioned observations, we propose a dynamic threshold global performance-guaranteed (DTGPG) formation control method for WMRs with a smooth extended state observer (SESO). The key contributions of this work are as follows. 1) Unlike the standard PPC methods in [6] and the TPP methods in [7]-[9], the proposed DTGPG resolves the initial value constraint problem and allows a secondary adjustment of the steady-state performance bounds. 2) In contrast to the event-triggered ESO [5], we design the SESO to mitigate chattering by introducing a sigmoid-like function to smooth the estimation error. 3) The total closed-loop system is proved to be input-to-state stable (ISS). Some of the symbols used in this paper are defined in Table I.
This work is partly supported by the "South China Sea Rising Star" Education Platform Foundation of Hainan Province (JYNHXX2023-17G) and the Natural Science Foundation of Hainan Province (624MS036). (Corresponding author: Di Wu)

TABLE I

SYMBOL DEFINITION

| Symbol | Definition |
| --- | --- |
| ${\mathbb{R}}^{n}$ | $n$-dimensional Euclidean space |
| ${\mathbb{R}}^{ + }$ | Positive real space |
| $\parallel \cdot \parallel$ | Euclidean norm |
| $\operatorname{diag}\{ \cdots \}$ | Block-diagonal matrix |
| ${\lambda }_{\max }\left( \cdot \right)$ | Maximum eigenvalue of a matrix |
| ${\lambda }_{\min }\left( \cdot \right)$ | Minimum eigenvalue of a matrix |
| $\operatorname{sgn}\left( \cdot \right)$ | Sign function |
| $\exp \left( \cdot \right)$ | Exponential function |
| $\operatorname{col}\left( \cdot \right)$ | Column vector |
§ II. PRELIMINARIES AND PROBLEM STATEMENT
§ A. GRAPH THEORY
To describe the communication among the virtual leader and WMRs, a directed graph is described as $\mathcal{G} = \{ \mathcal{V},\mathcal{M}\}$ . $\mathcal{V} = \left\{ {{n}_{1},\ldots ,{n}_{M}}\right\}$ and $\mathcal{M} = \left\{ {\left( {{n}_{i},{n}_{j}}\right) \in \mathcal{V} \times \mathcal{V}}\right\}$ represent a vertex set and an edge set, respectively. An adjacency matrix associated with $\mathcal{G}$ is defined as $\mathcal{A} = \left\lbrack {a}_{ij}\right\rbrack \in {\mathbb{R}}^{M \times M}$ . Correspondingly, a degree matrix connected with $\mathcal{G}$ is characterized as $\mathcal{D} = \operatorname{diag}\left\{ {d}_{i}\right\} \in {\mathbb{R}}^{M \times M}$ with ${d}_{i} = \mathop{\sum }\limits_{{j = 1}}^{M}{a}_{ij}$ . Additionally, a Laplacian matrix associated with $\mathcal{G}$ is defined as $\mathcal{L} = \mathcal{D} - \mathcal{A}$ . Note that here $i = 1,\ldots ,M,j = 1,\ldots ,M$ .
§ B. PROBLEM STATEMENT
Suppose that there exist $N$ followers, labeled as agents ${n}_{1}$ to ${n}_{N}$ , and $M - N$ leaders, labeled as agents ${n}_{N + 1}$ to ${n}_{M}$ , under a communication topology graph. A group of followers consisting of $N$ wheeled mobile robots is modelled as follows
$$
\begin{cases} {\dot{\mathbf{\eta }}}_{i} & = {\mathbf{R}}_{i}{\mathbf{\nu }}_{i} \\ {\dot{\mathbf{\nu }}}_{i} & = {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} + {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\mathcal{T}}}_{i} \\ & - {D}_{i\theta }{r}_{i}^{2}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{J}}_{i}{\mathbf{R}}_{i}^{-1}{\dot{\mathbf{\eta }}}_{i} - {\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\mathcal{F}}}_{i}{r}_{i}^{2} \end{cases} \tag{1}
$$
where $i = 1,\ldots ,N$ . ${\mathbf{\eta }}_{i} = {\left\lbrack {x}_{i},{y}_{i},{\psi }_{i}\right\rbrack }^{T} \in {\mathbb{R}}^{3}$ denotes the position and yaw angle. ${\mathbf{\nu }}_{i} = {\left\lbrack {u}_{i},{v}_{i},{w}_{i}\right\rbrack }^{T} \in {\mathbb{R}}^{3}$ denotes the velocity vector. ${\mathbf{\tau }}_{i} = {\left\lbrack {\tau }_{i1},{\tau }_{i2},{\tau }_{i3},{\tau }_{i4}\right\rbrack }^{T} \in {\mathbb{R}}^{4}$ denotes the control input. ${\mathcal{T}}_{i} = {\left\lbrack {\mathcal{T}}_{i1},{\mathcal{T}}_{i2},{\mathcal{T}}_{i3},{\mathcal{T}}_{i4}\right\rbrack }^{T} \in {\mathbb{R}}^{4}$ denotes the external disturbance. The kinetic parameters and matrices of the WMR can be found in [10]. ${\mathbf{J}}_{i} \in {\mathbb{R}}^{4 \times 3}$ and ${\mathbf{J}}_{i}^{ + } \in {\mathbb{R}}^{3 \times 4}$ satisfy ${\mathbf{J}}_{i}^{ + }{\mathbf{J}}_{i} = {\mathbf{I}}_{3}$ .
Assumption 1: The graph $\mathcal{G}$ contains a spanning tree with the virtual leader as the root node.
§ C. DYNAMIC THRESHOLD GLOBAL PERFORMANCE-GUARANTEED AND BARRIER FUNCTION
We define the distributed error as follows
$$
{\mathbf{E}}_{i} = \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}\left( {{\mathbf{\eta }}_{i} - {\mathbf{\eta }}_{j}}\right) + \mathop{\sum }\limits_{{j = N + 1}}^{M}{a}_{ij}\left( {{\mathbf{\eta }}_{i} - {\mathbf{\eta }}_{jr}}\right) \tag{2}
$$
where ${\mathbf{\eta }}_{jr} = {\left\lbrack {\eta }_{jx},{\eta }_{jy},{\eta }_{j\psi }\right\rbrack }^{T} \in {\mathbb{R}}^{3}$ represents the trajectory of the virtual leader. The coefficient ${a}_{ij}$ is defined in [11]. To ensure that the developed control is free from the influence of initial conditions and can dynamically adjust prescribed thresholds, the error is constrained within the following prescribed regions
$$
{\mathcal{I}}_{ik}\left( {-{\mathcal{W}}_{ik}}\right) \leq {E}_{ik} \leq {\mathcal{I}}_{ik}\left( {\mathcal{W}}_{ik}\right) ,\;k = x,y,\psi \tag{3}
$$
where ${\mathcal{I}}_{ik}\left( {\mathcal{W}}_{ik}\right)$ is a dynamic threshold global performance-guaranteed (DTGPG) function similar to that in [12], defined as follows
$$
{\mathcal{I}}_{ik}\left( {\mathcal{W}}_{ik}\right) = \frac{\sqrt{{l}_{ik}}{\mathcal{W}}_{ik}}{\sqrt{1 - {\mathcal{W}}_{ik}^{2}}} \tag{4}
$$
with ${\mathcal{W}}_{ik} = 1/{\mathcal{P}}_{ik}$ , where ${\mathcal{P}}_{ik}$ is a dynamic threshold finite-time prescribed function similar to that in [13]
$$
{\mathcal{P}}_{ik}\left( t\right) = \left\{ \begin{array}{ll} \left( {1 - {\Theta }_{{ik},\infty }}\right) \exp \left( {-{\varrho }_{ik}\frac{{T}_{{ik},a}}{{T}_{{ik},a} - t}}\right) + {\Theta }_{{ik},\infty }, & 0 \leq t < {T}_{{ik},a} \\ {\Theta }_{{ik},\infty }\left( {1 - \frac{{\omega }_{ik}}{2} + \frac{{\omega }_{ik}}{2}\cos \left( {\frac{\pi }{{c}_{ik}}\left( {t - {T}_{{ik},a}}\right) }\right) }\right) , & {T}_{{ik},a} \leq t < {T}_{{ik},b} \\ {\Theta }_{{ik},\infty }\left( {1 - {\omega }_{ik}}\right) , & t \geq {T}_{{ik},b} \end{array}\right. \tag{5}
$$
where ${l}_{ik}$ and ${\omega }_{ik}$ are positive constants. ${\Theta }_{{ik},\infty } =$ $\mathop{\lim }\limits_{{t \rightarrow \infty }}{\Theta }_{ik}\left( t\right)$ is the steady-state value. ${\varrho }_{ik} > 0$ represents the convergence rate. ${T}_{{ik},a}$ is the settling time to reach steady state. ${c}_{ik} = {T}_{{ik},b} - {T}_{{ik},a}$ is the duration of the dynamic adjustment.
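The piecewise function (5) can be sketched directly in code. The following is a minimal illustration (function and parameter names are ours), using the parameter values from the simulation section; note that the three branches join continuously at ${T}_{{ik},a}$ and ${T}_{{ik},b}$:

```python
import math

def P(t, theta_inf=0.9, rho=2.0, T_a=0.5, T_b=0.7, omega=0.7):
    """Dynamic threshold prescribed function P_ik(t) of Eq. (5).
    c = T_b - T_a is the duration of the secondary adjustment."""
    c = T_b - T_a
    if t < T_a:
        # Finite-time decay toward theta_inf, reached exactly at t = T_a.
        return (1.0 - theta_inf) * math.exp(-rho * T_a / (T_a - t)) + theta_inf
    elif t < T_b:
        # Cosine blend: secondary adjustment of the steady-state bound.
        return theta_inf * (1.0 - omega / 2.0
                            + (omega / 2.0) * math.cos(math.pi / c * (t - T_a)))
    else:
        # Final, tightened steady-state threshold.
        return theta_inf * (1.0 - omega)
```

At $t = {T}_{{ik},a}$ the exponential term vanishes and the cosine branch starts at ${\Theta }_{{ik},\infty }$; at $t = {T}_{{ik},b}$ the cosine branch ends at ${\Theta }_{{ik},\infty }(1 - {\omega }_{ik})$, so no jump occurs.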
Then, we employ the following barrier function to implement the error constraint in (3)
$$
{\mathcal{Z}}_{ik} = \frac{{\mathcal{J}}_{ik}}{1 - {\mathcal{J}}_{ik}^{2}} \tag{6}
$$
where ${\mathcal{J}}_{ik} = {\mathcal{P}}_{ik}{\mathcal{H}}_{ik}$ with ${\mathcal{H}}_{ik} = {E}_{ik}/\sqrt{{E}_{ik}^{2} + {l}_{ik}}$ . The properties of the barrier function are described in [12].
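The barrier transformation (6) can likewise be sketched. In this hypothetical helper (names are ours), the normalization ${\mathcal{H}}_{ik} = {E}_{ik}/\sqrt{{E}_{ik}^{2} + {l}_{ik}}$ keeps $|{\mathcal{H}}_{ik}| < 1$ for any error, so the transform is defined globally, and ${\mathcal{Z}}_{ik}$ blows up as ${\mathcal{J}}_{ik}$ approaches $\pm 1$, which is what enforces the constraint (3):

```python
import math

def barrier_error(E, P_val, l=10.0):
    """Barrier-transformed error Z_ik of Eq. (6).
    E: distributed error E_ik; P_val: current value of P_ik(t); l = l_ik."""
    H = E / math.sqrt(E * E + l)   # normalized error, |H| < 1 for any E
    J = P_val * H                  # scaled by the prescribed function
    return J / (1.0 - J * J)       # barrier function Z_ik
```

The transform is odd in $E$ and strictly increasing, so driving ${\mathcal{Z}}_{ik}$ to zero drives ${E}_{ik}$ to zero.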
§ III. CONTROLLER DESIGN AND ANALYSIS
§ A. SMOOTH EXTENDED STATE OBSERVER
To facilitate the subsequent strategy design, define ${\mathbf{\Lambda }}_{i} =$ ${r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathcal{T}}_{i} - {D}_{i\theta }{r}_{i}^{2}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{J}}_{i}{\mathbf{R}}_{i}^{-1}{\dot{\mathbf{\eta }}}_{i} - {\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathcal{F}}_{i}{r}_{i}^{2}$ to denote internal uncertainty and external disturbances suffered by the $i$ th WMR. (1) can be reformulated as
$$
\left\{ \begin{array}{l} {\dot{\mathbf{\eta }}}_{i} = {\mathbf{R}}_{i}{\mathbf{\nu }}_{i} \\ {\dot{\mathbf{\nu }}}_{i} = {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} + {\mathbf{\Lambda }}_{i}. \end{array}\right. \tag{7}
$$
Assumption 2: For the multi-WMR system, the unknown total disturbance ${\mathbf{\Lambda }}_{i}$ is smooth and continuous.
Then, we regard the total disturbance ${\mathbf{\Lambda }}_{i}$ as an extended state. To avoid unnecessary waste of resources when approximating the disturbance, an ESO based on an event-triggered mechanism is designed as in [5]
$$
\left\{ \begin{array}{l} {\widetilde{\mathbf{\nu }}}_{i}^{s} = {\widehat{\mathbf{\nu }}}_{i} - {\mathbf{\nu }}_{i}^{ \star } \\ {\dot{\widehat{\mathbf{\nu }}}}_{i} = - {\varepsilon }_{i1}{\widetilde{\mathbf{\nu }}}_{i}^{s} + {\widehat{\mathbf{\Lambda }}}_{i} + {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} \\ {\dot{\widehat{\mathbf{\Lambda }}}}_{i} = - {\varepsilon }_{i2}{\widetilde{\mathbf{\nu }}}_{i}^{s} \end{array}\right. \tag{8}
$$
where ${\varepsilon }_{i1}$ and ${\varepsilon }_{i2} \in {\mathbb{R}}^{3 \times 3}$ denote positive diagonal matrices. The variables ${\widehat{\mathbf{\nu }}}_{i} = {\left\lbrack {\widehat{u}}_{i},{\widehat{v}}_{i},{\widehat{w}}_{i}\right\rbrack }^{T} \in {\mathbb{R}}^{3}$ and ${\widehat{\mathbf{\Lambda }}}_{i} = {\left\lbrack {\widehat{\Lambda }}_{iu},{\widehat{\Lambda }}_{iv},{\widehat{\Lambda }}_{iw}\right\rbrack }^{T} \in$ ${\mathbb{R}}^{3}$ denote the estimates of ${\mathbf{\nu }}_{i}$ and ${\mathbf{\Lambda }}_{i}$ , respectively. ${\mathbf{\nu }}_{i}^{ \star } \in {\mathbb{R}}^{3}$ represents the aperiodic sampling of ${\mathbf{\nu }}_{i}$ . The event-triggered mechanism is defined as
$$
\left\{ \begin{array}{l} {\mathbf{\nu }}_{i}^{ \star }\left( t\right) = {\mathbf{\nu }}_{i}\left( {t}_{\varpi }^{{\nu }_{i}}\right) ,\forall t \in \left\lbrack {{t}_{\varpi }^{{\nu }_{i}},{t}_{\varpi + 1}^{{\nu }_{i}}}\right) ,{\widetilde{\mathbf{\nu }}}_{is}\left( t\right) = {\mathbf{\nu }}_{i}^{ \star }\left( t\right) - {\mathbf{\nu }}_{i}\left( t\right) \\ {t}_{\varpi + 1}^{{\nu }_{i}} = \inf \left\{ {t \in \mathbb{R} \mid \begin{Vmatrix}{{\widetilde{\mathbf{\nu }}}_{is}\left( t\right) }\end{Vmatrix} \geq {\mathcal{X}}_{i}}\right\} \end{array}\right. \tag{9}
$$
where ${\mathcal{X}}_{i} \in {\mathbb{R}}^{ + }$ denotes the event triggering threshold, and ${\widetilde{\mathbf{\nu }}}_{is}\left( t\right)$ denotes the aperiodic sampling error. When $\begin{Vmatrix}{{\widetilde{\mathbf{\nu }}}_{is}\left( t\right) }\end{Vmatrix} \geq$ ${\mathcal{X}}_{i}$ , update ${\nu }_{i}^{ \star }\left( t\right)$ ; otherwise, maintain the last updated value.
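The triggering rule (9) amounts to a hold-and-refresh loop. A minimal scalar sketch (names are ours, assuming a pre-sampled velocity sequence):

```python
def event_triggered_sample(nu_seq, chi=0.06):
    """Aperiodic sampling of Eq. (9) for a scalar velocity signal.
    nu_star is refreshed only when the sampling error reaches the
    threshold chi (X_i); otherwise the last sample is held."""
    nu_star = nu_seq[0]          # initial sample
    held, events = [], 0
    for nu in nu_seq:
        if abs(nu_star - nu) >= chi:   # ||nu_star - nu|| >= X_i: trigger
            nu_star = nu
            events += 1
        held.append(nu_star)
    return held, events

held, events = event_triggered_sample([0.0, 0.01, 0.05, 0.07, 0.08])
# held = [0.0, 0.0, 0.0, 0.07, 0.07], events = 1
```

Only one refresh occurs in this short sequence, which is exactly how the mechanism cuts the update count relative to periodic sampling.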
Remark 1: In addition to using ESO to estimate the external disturbances, the neural network [14] and the neural predictor [15] also achieve the same objective.
Existing ESO based on event-triggered mechanism [5] suffers from unavoidable chattering when approximating the disturbances. To solve the chattering problem, we design the SESO as follows
$$
\left\{ \begin{array}{l} {\dot{\widehat{\mathbf{\nu }}}}_{i} = - {\varepsilon }_{i1}{\widetilde{\mathbf{\nu }}}_{i}^{s} + {\widehat{\mathbf{\Lambda }}}_{i} + {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} \\ {\dot{\widehat{\mathbf{\Lambda }}}}_{i} = - {\varepsilon }_{i2}\mathcal{B}\left( {\widetilde{\mathbf{\nu }}}_{i}^{s}\right) \end{array}\right. \tag{10}
$$
where $\mathcal{B}\left( {\widetilde{\mathbf{\nu }}}_{i}^{s}\right) = \operatorname{col}\left( {\mathcal{B}\left( {\widetilde{\nu }}_{i\Xi }^{s}\right) }\right) \in {\mathbb{R}}^{3}$ , $\Xi = u,v,w$ , is the sigmoid-like function vector, defined as follows
$$
\mathcal{B}\left( {\widetilde{\nu }}_{i\Xi }^{s}\right) = \left\{ \begin{array}{ll} \frac{1 - \exp \left( {-\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }\right) }{1 + \exp \left( {-\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }\right) }\frac{{\widetilde{\nu }}_{i\Xi }^{s}}{\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }, & {\widetilde{\nu }}_{i\Xi }^{s} \neq 0 \\ {\widetilde{\nu }}_{i\Xi }^{s}, & {\widetilde{\nu }}_{i\Xi }^{s} = 0. \end{array}\right. \tag{11}
$$
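For intuition, the sigmoid-like function (11) is algebraically equal to $\tanh(x/2)$: a smooth, odd saturation that replaces the discontinuous sign function and thereby suppresses chattering. A minimal sketch (the function name is ours):

```python
import math

def B(x):
    """Sigmoid-like function of Eq. (11): smooth, odd, and bounded in
    (-1, 1); B(x) ~ x/2 near zero and B(x) -> sign(x) as |x| grows."""
    if x == 0.0:
        return 0.0
    mag = (1.0 - math.exp(-abs(x))) / (1.0 + math.exp(-abs(x)))
    return mag * (x / abs(x))   # magnitude in (0, 1) times sign(x)
```

Unlike $\operatorname{sgn}(\cdot)$, this map is continuous at zero, which is the source of the smoothing effect in the SESO update.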
Next, to facilitate the stability analysis of the SESO, define a positive-definite diagonal matrix ${\mathcal{V}}_{i} = \operatorname{diag}\left\{ {\mathcal{V}}_{i\Xi }\right\} \in {\mathbb{R}}^{3 \times 3}$ with
$$
{\mathcal{V}}_{i\Xi } = \left\{ \begin{array}{ll} \frac{1 - \exp \left( {-\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }\right) }{1 + \exp \left( {-\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }\right) }\frac{1}{\left| {\widetilde{\nu }}_{i\Xi }^{s}\right| }, & {\widetilde{\nu }}_{i\Xi }^{s} \neq 0 \\ 1, & {\widetilde{\nu }}_{i\Xi }^{s} = 0. \end{array}\right. \tag{12}
$$
Then, (10) can be rewritten as
$$
\left\{ \begin{array}{l} {\dot{\widehat{\mathbf{\nu }}}}_{i} = - {\mathbf{\varepsilon }}_{i1}{\widetilde{\mathbf{\nu }}}_{i} + {\mathbf{\varepsilon }}_{i1}{\widetilde{\mathbf{\nu }}}_{is} + {\widehat{\mathbf{\Lambda }}}_{i} + {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} \\ {\dot{\widehat{\mathbf{\Lambda }}}}_{i} = - {\mathbf{\varepsilon }}_{i2}{\mathbf{V}}_{i}{\widetilde{\mathbf{\nu }}}_{i} + {\mathbf{\varepsilon }}_{i2}{\mathbf{V}}_{i}{\widetilde{\mathbf{\nu }}}_{is} \end{array}\right. \tag{13}
$$
where ${\widetilde{\mathbf{\nu }}}_{i} = {\widehat{\mathbf{\nu }}}_{i} - {\mathbf{\nu }}_{i}$ and ${\widetilde{\mathbf{\Lambda }}}_{i} = {\widehat{\mathbf{\Lambda }}}_{i} - {\mathbf{\Lambda }}_{i}$ . Defining ${\mathcal{N}}_{i1} = {\left\lbrack {\widetilde{\mathbf{\nu }}}_{i}^{T},{\widetilde{\mathbf{\Lambda }}}_{i}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{6}$ , one has
$$
{\dot{\mathcal{N}}}_{i1} = {\mathbf{A}}_{i1}{\mathcal{N}}_{i1} + {\mathbf{B}}_{i1}{\widetilde{\mathbf{\nu }}}_{is} + {\mathbf{C}}_{i1}{\dot{\mathbf{\Lambda }}}_{i} \tag{14}
$$
where
$$
{\mathbf{A}}_{i1} = \left\lbrack \begin{matrix} - {\varepsilon }_{i1}{\mathbf{I}}_{3} & {\mathbf{I}}_{3} \\ - {\varepsilon }_{i2}{\mathbf{V}}_{i} & {\mathbf{O}}_{3} \end{matrix}\right\rbrack ,\;{\mathbf{B}}_{i1} = \left\lbrack \begin{matrix} {\varepsilon }_{i1}{\mathbf{I}}_{3} \\ {\varepsilon }_{i2}{\mathbf{V}}_{i} \end{matrix}\right\rbrack ,\;{\mathbf{C}}_{i1} = \left\lbrack \begin{matrix} {\mathbf{O}}_{3} \\ {\mathbf{I}}_{3} \end{matrix}\right\rbrack .
$$
Note that the matrix ${\mathbf{A}}_{i1}$ is a Hurwitz matrix. There exists a positive-definite matrix ${\mathbf{P}}_{i1}$ satisfying the following inequality
$$
{\mathbf{A}}_{i1}^{T}{\mathbf{P}}_{i1} + {\mathbf{P}}_{i1}{\mathbf{A}}_{i1} \leq - {\jmath }_{i1}{\mathbf{I}}_{6}. \tag{15}
$$
Lemma 1: The system (14) is ISS.
Proof: Consider a Lyapunov function candidate as follows
$$
{V}_{1} = \frac{1}{2}\mathop{\sum }\limits_{{i = 1}}^{N}{\mathcal{N}}_{i1}^{T}{\mathbf{P}}_{i1}{\mathcal{N}}_{i1}. \tag{16}
|
| 224 |
+
$$
|
| 225 |
+
|
| 226 |
+
The time derivative of ${V}_{1}$ along (14), combined with (15), satisfies
$$
{\dot{V}}_{1} \leq - \frac{{\jmath }_{1}}{2}{\begin{Vmatrix}{\mathcal{N}}_{1}\end{Vmatrix}}^{2} + \begin{Vmatrix}{\mathcal{N}}_{1}\end{Vmatrix}\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{B}}_{1}}\end{Vmatrix}\begin{Vmatrix}{\widetilde{\mathbf{\nu }}}_{s}\end{Vmatrix} + \begin{Vmatrix}{\mathcal{N}}_{1}\end{Vmatrix}\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{C}}_{1}}\end{Vmatrix}\parallel \dot{\mathbf{\Lambda }}\parallel \tag{17}
$$
where ${\jmath }_{1} = \mathop{\min }\limits_{{i = 1,\ldots ,N}}\left( {\jmath }_{i1}\right) ,{\mathcal{N}}_{1} = {\left\lbrack {\mathcal{N}}_{11}^{T},\ldots ,{\mathcal{N}}_{N1}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{6N},{\widetilde{\mathbf{\nu }}}_{s} =$ ${\left\lbrack {\widetilde{\mathbf{\nu }}}_{1s}^{T},\ldots ,{\widetilde{\mathbf{\nu }}}_{Ns}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N},\dot{\mathbf{\Lambda }} = {\left\lbrack {\dot{\mathbf{\Lambda }}}_{1}^{T},\ldots ,{\dot{\mathbf{\Lambda }}}_{N}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N},{\mathbf{P}}_{1} =$ $\operatorname{diag}\left\{ {{\mathbf{P}}_{11},\ldots ,{\mathbf{P}}_{N1}}\right\} \in {\mathbb{R}}^{{6N} \times {6N}},{\mathbf{B}}_{1} = \operatorname{diag}\left\{ {{\mathbf{B}}_{11},\ldots ,{\mathbf{B}}_{N1}}\right\} \in$ ${\mathbb{R}}^{{6N} \times {3N}}$ , and ${\mathbf{C}}_{1} = \operatorname{diag}\left\{ {{\mathbf{C}}_{11},\ldots ,{\mathbf{C}}_{N1}}\right\} \in {\mathbb{R}}^{{6N} \times {3N}}$ . Since $\begin{Vmatrix}{\mathcal{N}}_{1}\end{Vmatrix} \geq 2\left( {\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{B}}_{1}}\end{Vmatrix}\begin{Vmatrix}{\widetilde{\mathbf{\nu }}}_{s}\end{Vmatrix} + \begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{C}}_{1}}\end{Vmatrix}\parallel \dot{\mathbf{\Lambda }}\parallel }\right) /{\jmath }_{1}{\sigma }_{1}$ , one has ${\dot{V}}_{1} \leq - {\jmath }_{1}\left( {1 - {\sigma }_{1}}\right) {\begin{Vmatrix}{\mathcal{N}}_{1}\end{Vmatrix}}^{2}/2$ , where $0 < {\sigma }_{1} < 1$ . It follows that the subsystem (14) is ISS. 
There exist a $\mathcal{K}\mathcal{L}$ function ${\mathcal{Y}}_{1}\left( \cdot \right)$ and ${\mathcal{K}}_{\infty }$ functions ${\mathcal{C}}^{{\widetilde{\mathbf{\nu }}}_{s}}\left( \cdot \right)$ and ${\mathcal{C}}^{\dot{\mathbf{\Lambda }}}\left( \cdot \right)$ satisfying $\begin{Vmatrix}{{\mathcal{N}}_{1}\left( t\right) }\end{Vmatrix} \leq$ ${\mathcal{Y}}_{1}\left( {\begin{Vmatrix}{{\mathcal{N}}_{1}\left( 0\right) }\end{Vmatrix},t}\right) + {\mathcal{C}}^{{\widetilde{\mathbf{\nu }}}_{s}}\left( \begin{Vmatrix}{\widetilde{\mathbf{\nu }}}_{s}\end{Vmatrix}\right) + {\mathcal{C}}^{\dot{\mathbf{\Lambda }}}\left( {\parallel \dot{\mathbf{\Lambda }}\parallel }\right)$ , where ${\mathcal{C}}^{{\widetilde{\mathbf{\nu }}}_{s}}\left( s\right) =$ $\left( {\left( {{2s}\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{B}}_{1}}\end{Vmatrix}\sqrt{{\lambda }_{\max }\left( {\mathbf{P}}_{1}\right) }}\right) /\left( {{\jmath }_{1}{\sigma }_{1}\sqrt{{\lambda }_{\min }\left( {\mathbf{P}}_{1}\right) }}\right) }\right)$ and ${\mathcal{C}}^{\dot{\mathbf{\Lambda }}}\left( s\right) =$ $\left( {\left( {{2s}\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{C}}_{1}}\end{Vmatrix}\sqrt{{\lambda }_{\max }\left( {\mathbf{P}}_{1}\right) }}\right) /\left( {{\jmath }_{1}{\sigma }_{1}\sqrt{{\lambda }_{\min }\left( {\mathbf{P}}_{1}\right) }}\right) }\right)$ .
§ B. DESIGN OF GUIDANCE LAW AND CONTROL LAW
In this section, we design the DTGPG-based guidance law and the SESO-based control law. First, we design the guidance law. The time derivative of (6) is represented by
$$
{\dot{\mathcal{Z}}}_{ik} = {\mu }_{ik}{\mathcal{P}}_{ik}{\rho }_{ik}{\dot{E}}_{ik} + {\mu }_{ik}{\dot{\mathcal{P}}}_{ik}{\mathcal{H}}_{ik} \tag{18}
$$
where ${\mu }_{ik} = \left( {1 + {\mathcal{J}}_{ik}^{2}}\right) /{\left( 1 - {\mathcal{J}}_{ik}^{2}\right) }^{2}$ and ${\rho }_{ik} =$ ${l}_{ik}/\left( {\sqrt{{E}_{ik}^{2} + {l}_{ik}}\left( {{E}_{ik}^{2} + {l}_{ik}}\right) }\right)$ .
Next, to simplify the design of the controller, we rewrite (18) in a vector form
$$
{\dot{\mathcal{Z}}}_{i} = {\mathbf{\mu }}_{i1}{\dot{\mathbf{E}}}_{i} + {\mathbf{\mu }}_{i2} \tag{19}
$$
where ${\mathcal{Z}}_{i} = {\left\lbrack {\mathcal{Z}}_{ix},{\mathcal{Z}}_{iy},{\mathcal{Z}}_{i\psi }\right\rbrack }^{T} \in {\mathbb{R}}^{3}$ , ${\mathbf{E}}_{i} = {\left\lbrack {E}_{ix},{E}_{iy},{E}_{i\psi }\right\rbrack }^{T} \in {\mathbb{R}}^{3}$ , ${\mathbf{\mu }}_{i1} = \operatorname{diag}\left\{ {{\mu }_{ix}{\mathcal{P}}_{ix}{\rho }_{ix},{\mu }_{iy}{\mathcal{P}}_{iy}{\rho }_{iy},{\mu }_{i\psi }{\mathcal{P}}_{i\psi }{\rho }_{i\psi }}\right\} \in {\mathbb{R}}^{3 \times 3}$ , and ${\mathbf{\mu }}_{i2} = {\left\lbrack {\mu }_{ix}{\dot{\mathcal{P}}}_{ix}{\mathcal{H}}_{ix},{\mu }_{iy}{\dot{\mathcal{P}}}_{iy}{\mathcal{H}}_{iy},{\mu }_{i\psi }{\dot{\mathcal{P}}}_{i\psi }{\mathcal{H}}_{i\psi }\right\rbrack }^{T} \in {\mathbb{R}}^{3}$ .
Taking the time derivative of (2) along (1) gives
$$
{\dot{\mathbf{E}}}_{i} = {\iota }_{i}{\mathbf{R}}_{i}{\mathbf{\nu }}_{i} - \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\mathbf{R}}_{j}{\mathbf{\nu }}_{j} - \mathop{\sum }\limits_{{j = N + 1}}^{M}{a}_{ij}{\dot{\mathbf{\eta }}}_{jr} \tag{20}
$$
where ${\iota }_{i} = \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij} + \mathop{\sum }\limits_{{j = N + 1}}^{M}{a}_{ij}$ . Substituting (20) into (19) results in
$$
{\dot{\mathcal{Z}}}_{i} = {\mathbf{\mu }}_{i1}\left( {{\iota }_{i}{\mathbf{R}}_{i}{\mathbf{\nu }}_{i} - \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\mathbf{R}}_{j}{\mathbf{\nu }}_{j} - \mathop{\sum }\limits_{{j = N + 1}}^{M}{a}_{ij}{\dot{\mathbf{\eta }}}_{jr}}\right) + {\mathbf{\mu }}_{i2}. \tag{21}
$$
From (21), the DTGPG-based guidance law is chosen as
$$
{\mathbf{\alpha }}_{i} = \frac{{\mathbf{R}}_{i}^{-1}}{{\iota }_{i}}\left( {\mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\mathbf{R}}_{j}{\widehat{\mathbf{\nu }}}_{j} + \mathop{\sum }\limits_{{j = N + 1}}^{M}{a}_{ij}{\dot{\mathbf{\eta }}}_{jr} - {\mathbf{\mu }}_{i1}^{-1}\left( {{\mathbf{\kappa }}_{i1}{\mathcal{Z}}_{i} + {\mathbf{\mu }}_{i2}}\right) }\right) . \tag{22}
$$
We substitute (22) into (21), and it follows that
$$
{\dot{\mathcal{Z}}}_{i} = {\mathbf{\mu }}_{i1}\mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\mathbf{R}}_{j}{\widetilde{\mathbf{\nu }}}_{j} - {\mathbf{\kappa }}_{i1}{\mathcal{Z}}_{i} \tag{23}
$$
with ${\kappa }_{i1} \in {\mathbb{R}}^{3 \times 3}$ being a positive diagonal matrix.
Differing from the first-order low-pass filtering method in the traditional DSC, a second-order linear tracking differentiator (LTD) with respect to ${\mathbf{\alpha }}_{i}$ is introduced
$$
\left\{ \begin{array}{l} {\dot{\mathbf{\alpha }}}_{if} = {\mathbf{\alpha }}_{if}^{ * } \\ {\dot{\mathbf{\alpha }}}_{if}^{ * } = - {\gamma }_{i}^{2}\left( {\left( {{\mathbf{\alpha }}_{if} - {\mathbf{\alpha }}_{i}}\right) + 2\left( {{\mathbf{\alpha }}_{if}^{ * }/{\gamma }_{i}}\right) }\right) \end{array}\right. \tag{24}
$$
where ${\mathbf{\alpha }}_{if}^{ * } \in {\mathbb{R}}^{3}$ is the filtered value of ${\dot{\mathbf{\alpha }}}_{i}$ , and ${\gamma }_{i} \in {\mathbb{R}}^{ + }$ .
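To see what (24) computes, the scalar sketch below runs the LTD on a test signal; the bandwidth $\gamma = {20}$, the sinusoidal input, and the forward-Euler integration are illustrative assumptions, not values from the paper. After a short transient, ${\mathbf{\alpha }}_{if}$ tracks the input and ${\mathbf{\alpha }}_{if}^{ * }$ approximates its derivative, which is why the LTD can replace numerical differentiation of the virtual control:

```python
import numpy as np

# Second-order LTD (24), scalar case: assumed gamma = 20, test signal alpha(t) = sin(t).
gamma, dt, T = 20.0, 1e-3, 10.0
alpha_f, alpha_f_star = 0.0, 0.0  # filter states

errs_f, errs_d = [], []
for k in range(int(T / dt)):
    t = k * dt
    alpha = np.sin(t)
    # LTD dynamics: a critically damped second-order filter with bandwidth gamma.
    d_alpha_f = alpha_f_star
    d_alpha_f_star = -gamma**2 * ((alpha_f - alpha) + 2.0 * alpha_f_star / gamma)
    alpha_f += dt * d_alpha_f
    alpha_f_star += dt * d_alpha_f_star
    if t > T - 1.0:  # record steady-state errors over the last second
        errs_f.append(abs(alpha_f - np.sin(t)))
        errs_d.append(abs(alpha_f_star - np.cos(t)))

# alpha_f tracks alpha and alpha_f_star approximates its derivative.
assert max(errs_f) < 0.2 and max(errs_d) < 0.25
```

The residual error scales with the ratio of the signal frequency to $\gamma$, so a larger $\gamma$ tightens the derivative estimate at the cost of more aggressive filter dynamics.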
Second, we design the control law. Defining the velocity error ${\mathcal{Z}}_{ie} = {\mathbf{\nu }}_{i} - {\mathbf{\alpha }}_{i} \in {\mathbb{R}}^{3}$ , the derivative ${\dot{\mathcal{Z}}}_{ie}$ along (7) satisfies
$$
{\dot{\mathcal{Z}}}_{ie} = {r}_{i}{\mathbf{J}}_{i}^{ + }{\mathbf{M}}_{i}^{-1}{\mathbf{\tau }}_{i} + {\mathbf{\Lambda }}_{i} - {\dot{\mathbf{\alpha }}}_{i}. \tag{25}
$$
Then, the SESO-based control law is designed to stabilize (25)
$$
{\mathbf{\tau }}_{i} = \frac{{\mathbf{M}}_{i}{\mathbf{J}}_{i}}{{r}_{i}}\left( {{\mathbf{\alpha }}_{if}^{ * } - {\widehat{\mathbf{\Lambda }}}_{i} - {\mathbf{\kappa }}_{i2}{\mathcal{Z}}_{ie}}\right) \tag{26}
$$
with ${\kappa }_{i2} \in {\mathbb{R}}^{3 \times 3}$ being a positive diagonal matrix.
The dynamics of ${\mathcal{Z}}_{ie}$ is further obtained by substituting (26) into (25)
$$
{\dot{\mathcal{Z}}}_{ie} = {\widetilde{\mathbf{\alpha }}}_{i}^{ * } - {\widetilde{\mathbf{\Lambda }}}_{i} - {\mathbf{\kappa }}_{i2}{\mathcal{Z}}_{ie} \tag{27}
$$
where ${\widetilde{\mathbf{\alpha }}}_{i}^{ * } = {\mathbf{\alpha }}_{if}^{ * } - {\dot{\mathbf{\alpha }}}_{i}$ .
From (23) and (27), we can obtain the following subsystems
$$
\left\{ \begin{array}{l} {\dot{\mathcal{Z}}}_{i} = {\mathbf{\mu }}_{i1}\mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\mathbf{R}}_{j}{\widetilde{\mathbf{\nu }}}_{j} - {\mathbf{\kappa }}_{i1}{\mathcal{Z}}_{i} \\ {\dot{\mathcal{Z}}}_{ie} = {\widetilde{\mathbf{\alpha }}}_{i}^{ * } - {\widetilde{\mathbf{\Lambda }}}_{i} - {\mathbf{\kappa }}_{i2}{\mathcal{Z}}_{ie}. \end{array}\right. \tag{28}
$$
Lemma 2: The system (28) is ISS.
Proof: Consider a Lyapunov function candidate as ${V}_{2} =$ $\left( {1/2}\right) \mathop{\sum }\limits_{{i = 1}}^{N}\left( {{\mathcal{Z}}_{i}^{T}{\mathcal{Z}}_{i} + {\mathcal{Z}}_{ie}^{T}{\mathcal{Z}}_{ie}}\right)$ . The time derivative of ${V}_{2}$ along (28) satisfies
$$
{\dot{V}}_{2} \leq - {n}_{1}\parallel \mathcal{Z}{\parallel }^{2} - {n}_{2}{\begin{Vmatrix}{\mathcal{Z}}_{e}\end{Vmatrix}}^{2} + {n}_{3}{n}^{ * }\parallel \mathcal{Z}\parallel \parallel \widetilde{\mathbf{\nu }}\parallel + \begin{Vmatrix}{\mathcal{Z}}_{e}\end{Vmatrix}\begin{Vmatrix}{\widetilde{\mathbf{\alpha }}}^{ * }\end{Vmatrix} + \begin{Vmatrix}{\mathcal{Z}}_{e}\end{Vmatrix}\parallel \widetilde{\mathbf{\Lambda }}\parallel \tag{29}
$$
where ${n}_{1} = {\lambda }_{\min }\left( {\mathbf{\kappa }}_{1}\right)$ with ${\mathbf{\kappa }}_{1} = \operatorname{diag}\left\{ {{\mathbf{\kappa }}_{11},\ldots ,{\mathbf{\kappa }}_{N1}}\right\} \in {\mathbb{R}}^{{3N} \times {3N}}$ , ${n}_{2} = {\lambda }_{\min }\left( {\mathbf{\kappa }}_{2}\right)$ with ${\mathbf{\kappa }}_{2} = \operatorname{diag}\left\{ {{\mathbf{\kappa }}_{12},\ldots ,{\mathbf{\kappa }}_{N2}}\right\} \in {\mathbb{R}}^{{3N} \times {3N}}$ , ${n}_{3} = \mathop{\max }\limits_{{i = 1,\ldots ,N}}\left( {{\lambda }_{\max }\left( {\mathbf{\mu }}_{i1}\right) }\right)$ , and ${n}^{ * } = \mathop{\max }\limits_{{i = 1,\ldots ,N}}\left( {n}_{i}^{ * }\right)$ with ${n}_{i}^{ * } = \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ji}$ . Here $\mathcal{Z} = {\left\lbrack {\mathcal{Z}}_{1}^{T},\ldots ,{\mathcal{Z}}_{N}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$ , ${\mathcal{Z}}_{e} = {\left\lbrack {\mathcal{Z}}_{1e}^{T},\ldots ,{\mathcal{Z}}_{Ne}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$ , $\widetilde{\mathbf{\nu }} = {\left\lbrack {\widetilde{\mathbf{\nu }}}_{1}^{T},\ldots ,{\widetilde{\mathbf{\nu }}}_{N}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$ , ${\widetilde{\mathbf{\alpha }}}^{ * } = {\left\lbrack {\widetilde{\mathbf{\alpha }}}_{1}^{*T},\ldots ,{\widetilde{\mathbf{\alpha }}}_{N}^{*T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$ , and $\widetilde{\mathbf{\Lambda }} = {\left\lbrack {\widetilde{\mathbf{\Lambda }}}_{1}^{T},\ldots ,{\widetilde{\mathbf{\Lambda }}}_{N}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{3N}$ .
Define $n = \min \left( {{n}_{1},{n}_{2}}\right)$ and ${\mathcal{N}}_{2} = {\left\lbrack \parallel \mathcal{Z}\parallel ,\begin{Vmatrix}{\mathcal{Z}}_{e}\end{Vmatrix}\right\rbrack }^{T} \in {\mathbb{R}}^{2}$ . Then, (29) can be rewritten as
$$
{\dot{V}}_{2} \leq - n{\begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix}}^{2} + {n}_{3}{n}^{ * }\begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix}\parallel \widetilde{\mathbf{\nu }}\parallel + \begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix}\begin{Vmatrix}{\widetilde{\mathbf{\alpha }}}^{ * }\end{Vmatrix} + \begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix}\parallel \widetilde{\mathbf{\Lambda }}\parallel . \tag{30}
$$
Since $\begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix} \geq 2\left( {{n}_{3}{n}^{ * }\parallel \widetilde{\mathbf{\nu }}\parallel + \begin{Vmatrix}{\widetilde{\mathbf{\alpha }}}^{ * }\end{Vmatrix} + \parallel \widetilde{\mathbf{\Lambda }}\parallel }\right) /n$ , one has ${\dot{V}}_{2} \leq$ $- n{\begin{Vmatrix}{\mathcal{N}}_{2}\end{Vmatrix}}^{2}/2$ . It follows that the subsystem (28) is ISS. There exist a $\mathcal{K}\mathcal{L}$ function ${\mathcal{Y}}_{2}\left( \cdot \right)$ and ${\mathcal{K}}_{\infty }$ functions ${\mathcal{C}}^{\widetilde{\mathbf{\nu }}}\left( \cdot \right)$ , ${\mathcal{C}}^{{\widetilde{\mathbf{\alpha }}}^{ * }}\left( \cdot \right)$ , and ${\mathcal{C}}^{\widetilde{\mathbf{\Lambda }}}\left( \cdot \right)$ satisfying $\begin{Vmatrix}{{\mathcal{N}}_{2}\left( t\right) }\end{Vmatrix} \leq {\mathcal{Y}}_{2}\left( {\begin{Vmatrix}{{\mathcal{N}}_{2}\left( 0\right) }\end{Vmatrix},t}\right) + {\mathcal{C}}^{\widetilde{\mathbf{\nu }}}\left( {\parallel \widetilde{\mathbf{\nu }}\parallel }\right) +$ ${\mathcal{C}}^{{\widetilde{\mathbf{\alpha }}}^{ * }}\left( \begin{Vmatrix}{\widetilde{\mathbf{\alpha }}}^{ * }\end{Vmatrix}\right) + {\mathcal{C}}^{\widetilde{\mathbf{\Lambda }}}\left( {\parallel \widetilde{\mathbf{\Lambda }}\parallel }\right)$ , where ${\mathcal{C}}^{\widetilde{\mathbf{\nu }}}\left( s\right) = 2{n}_{3}{n}^{ * }s/n$ , ${\mathcal{C}}^{{\widetilde{\mathbf{\alpha }}}^{ * }}\left( s\right) = {2s}/n$ , and ${\mathcal{C}}^{\widetilde{\mathbf{\Lambda }}}\left( s\right) = {2s}/n$ .
Fig. 1. Circular formation using the proposed method.
Theorem 1: For the multi-WMR system (1) with given initial conditions, the closed-loop system consisting of the SESO (10), the DTGPG-based guidance law (22), and the SESO-based control law (26) is ISS. Moreover, Zeno behavior is avoided.
Proof: The ISS properties of the subsystems (14) and (28) are proven in Lemma 1 and Lemma 2, respectively. The states $\widetilde{\mathbf{\nu }}$ and $\widetilde{\mathbf{\Lambda }}$ of the subsystem (14) are inputs of the subsystem (28). Under Assumptions 1-2, according to the cascade stability theorem, the closed-loop system is ISS. This yields the ultimate boundedness of $\begin{Vmatrix}{{\mathcal{N}}_{2}\left( t\right) }\end{Vmatrix}$ as $t \rightarrow \infty$
$$
{\begin{Vmatrix}{\mathcal{N}}_{2}\left( t\right) \end{Vmatrix}}_{t \rightarrow \infty } \leq \frac{2\begin{Vmatrix}{\widetilde{\mathbf{\alpha }}}^{ * }\end{Vmatrix}}{n} + {\mathcal{H}}^{ * }\left( {\begin{Vmatrix}{\widetilde{\mathbf{\nu }}}_{s}\end{Vmatrix}\begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{B}}_{1}}\end{Vmatrix} + \parallel \dot{\mathbf{\Lambda }}\parallel \begin{Vmatrix}{{\mathbf{P}}_{1}{\mathbf{C}}_{1}}\end{Vmatrix}}\right) \tag{31}
$$
with ${\mathcal{H}}^{ * } = \left( {4\left( {{n}_{3}{n}^{ * } + 1}\right) \sqrt{{\lambda }_{\max }\left( {\mathbf{P}}_{1}\right) }}\right) /\left( {n{\jmath }_{1}{\sigma }_{1}\sqrt{{\lambda }_{\min }\left( {\mathbf{P}}_{1}\right) }}\right)$ . The detailed proof that Zeno behavior is excluded can be found in [5]. The proof of Theorem 1 is complete.
§ IV. SIMULATION RESULTS
As shown in Fig. 1, we consider a communication topology consisting of three followers ${n}_{1}$ , ${n}_{2}$ , and ${n}_{3}$ and two virtual leaders ${n}_{4}$ and ${n}_{5}$ to verify the effectiveness of the proposed controller. The physical parameters of the WMR can be found in [10], and the external disturbance is chosen similarly to [16]. The initial values of the three followers are chosen as ${\mathbf{\eta }}_{1}\left( 0\right) = {\left\lbrack 0,0,3\pi /2\right\rbrack }^{T}$ , ${\mathbf{\eta }}_{2}\left( 0\right) = {\left\lbrack 2, - {10},\pi /2\right\rbrack }^{T}$ , and ${\mathbf{\eta }}_{3}\left( 0\right) = {\left\lbrack 2, - {17},4\pi /3\right\rbrack }^{T}$ . The trajectories of the two virtual leaders are chosen as
$$
\left\{ \begin{array}{l} {\mathbf{\eta }}_{4r} = {\left\lbrack -5\sin \left( {0.2}t\right) , - 5\cos \left( {0.2}t\right) ,\operatorname{atan2}\left( {\dot{\eta }}_{4y},{\dot{\eta }}_{4x}\right) \right\rbrack }^{T} \\ {\mathbf{\eta }}_{5r} = {\left\lbrack -{15}\sin \left( {0.2}t\right) , - {15}\cos \left( {0.2}t\right) ,\operatorname{atan2}\left( {\dot{\eta }}_{5y},{\dot{\eta }}_{5x}\right) \right\rbrack }^{T}. \end{array}\right.
$$
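Both leader trajectories are concentric circles of radii 5 and 15 traversed at angular rate 0.2 rad/s, with the heading taken tangent to the path. A small sanity-check sketch (the helper `leader` is introduced here purely for illustration):

```python
import numpy as np

def leader(t, R):
    """Reference trajectory: circle of radius R at angular rate 0.2 rad/s."""
    x, y = -R * np.sin(0.2 * t), -R * np.cos(0.2 * t)
    xdot, ydot = -0.2 * R * np.cos(0.2 * t), 0.2 * R * np.sin(0.2 * t)
    psi = np.arctan2(ydot, xdot)  # heading tangent to the circle
    return np.array([x, y, psi])

for t in np.linspace(0.0, 31.4, 50):
    eta4, eta5 = leader(t, 5.0), leader(t, 15.0)
    assert np.isclose(np.hypot(eta4[0], eta4[1]), 5.0)
    assert np.isclose(np.hypot(eta5[0], eta5[1]), 15.0)
    # Both leaders share the same heading at every instant (concentric formation).
    assert np.isclose(eta4[2], eta5[2])
```

Since the two leaders always share the same heading, the followers placed between them can form a rotating circular pattern, which matches the formation shown in Fig. 1.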
The main design parameters are set as ${\kappa }_{11} = \operatorname{diag}\{ {12},7,{10}\}$ , ${\kappa }_{21} = \operatorname{diag}\{ 7,7,{10}\}$ , ${\kappa }_{31} = \operatorname{diag}\{ {12},9,{10}\}$ , ${\kappa }_{i2} = \operatorname{diag}\{ {20},{20},{20}\}$ , ${\varepsilon }_{i1} = \operatorname{diag}\{ 2,2,2\}$ , ${\varepsilon }_{i2} = \operatorname{diag}\{ {40},{40},{40}\}$ , ${T}_{{1x},a} = {T}_{{1\psi },a} = {T}_{{2x},a} = {T}_{{2\psi },a} = {T}_{{3x},a} = {T}_{{3\psi },a} = {0.5}$ , ${T}_{{1y},a} = {T}_{{2y},a} = {T}_{{3y},a} = 1$ , ${T}_{{1x},b} = {T}_{{2x},b} = {T}_{{3x},b} = {0.7}$ , ${T}_{{1y},b} = {T}_{{2y},b} = {T}_{{3y},b} = {1.2}$ , ${T}_{{1\psi },b} = {T}_{{2\psi },b} = {T}_{{3\psi },b} = {1.5}$ , ${\omega }_{ik} = {0.7}$ , ${\Theta }_{{ik},\infty } = {0.9}$ , ${\varrho }_{ik} = 2$ , ${l}_{ik} = {10}$ , ${\mathcal{X}}_{1} = {\mathcal{X}}_{2} = {\mathcal{X}}_{3} = {0.06}$ .

Fig. 2. Tracking errors using the DTGPG.

Fig. 3. The estimated disturbances using the SESO.

Fig. 4. The number of triggering events.
Simulation results are depicted in Figs. 1-4. Fig. 1 demonstrates that the three vehicles form a circular formation guided by the two virtual leaders. Fig. 2 shows that the tracking profile is not constrained by the initial value and that the performance boundaries can be adjusted dynamically using the proposed DTGPG control scheme. Fig. 3 shows that the SESO not only estimates the internal uncertainties and external disturbances but also reduces chattering. Fig. 4 shows the number of triggering events: ${\nu }_{1}^{ \star }$ , ${\nu }_{2}^{ \star }$ , and ${\nu }_{3}^{ \star }$ are triggered 179, 213, and 211 times, respectively. Compared with the 2800 updates of time-triggered control, the event-triggered mechanism effectively saves resources.
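The kind of saving reported here can be reproduced in miniature with a generic static-threshold trigger. This is a simplified stand-in, not the paper's triggering condition (which is not restated in this excerpt); the sampling period, horizon, threshold, and test signal below are all illustrative:

```python
import numpy as np

# Simplified static-threshold event trigger (illustrative, not the paper's rule).
dt, T, threshold = 0.01, 28.0, 0.06
steps = int(T / dt)                  # 2800 time-triggered samples
t = np.arange(steps) * dt
signal = np.sin(0.5 * t) + 0.3 * np.cos(2.0 * t)   # stand-in control signal

last_sent = signal[0]
events = 1                            # initial transmission
for v in signal[1:]:
    if abs(v - last_sent) > threshold:  # transmit only on sufficient change
        last_sent = v
        events += 1

# Far fewer transmissions than the 2800 time-triggered updates.
assert events < steps // 4
```

The trigger count roughly tracks the total variation of the signal divided by the threshold, which is why slowly varying control signals yield the largest savings.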
§ V. CONCLUSION
In this paper, the dynamic threshold global prescribed performance formation control problem was investigated for WMRs in the presence of unknown total disturbances. A dynamic threshold global performance-guaranteed formation control method based on SESO was proposed, which had three advantages: 1) it could adjust the steady-state performance boundary twice, 2) it resolved the initial value constraints present in standard PPC, and 3) it mitigated the chattering problem in event-triggered ESO. This cascade system consisting of the SESO, the DTGPG-based guidance law, and the SESO-based control law was proved to be ISS. The main results were demonstrated by the simulation examples.
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/8haaEllsjL/Initial_manuscript_md/Initial_manuscript.md
# Event-Triggered Optimal Tracking Control for Uncertain Nonlinear System Based on Reinforcement Learning
Yuanhao Wang

Navigation College, Dalian Maritime University, Dalian, China

wangyuanhao2024@163.com

Weiwei Bai

Navigation College, Dalian Maritime University, Dalian, China

baiweiwei_dl@163.com
Abstract-In this paper, an event-triggered optimal tracking control problem is studied for uncertain nonlinear systems based on reinforcement learning (RL). First, a class of nonlinear dynamic systems with general uncertainty is considered, and an augmented system comprising the tracking error and the reference signal is constructed. Second, an improved adaptive dynamic programming (ADP) technique, involving an actor-critic algorithm and fuzzy logic systems, is developed to solve the Hamilton-Jacobi-Bellman (HJB) equation with respect to the nominal augmented system. Third, in order to reduce the mechanical wear of the actuator and the energy consumption, an event-triggered mechanism is employed in the controller updating. Finally, the stability analysis proves via Lyapunov theory that all signals in the closed-loop system are uniformly ultimately bounded (UUB). Simulation results verify the feasibility of the proposed scheme.
Index Terms-ADP, event-triggered, reinforcement learning, nonlinear, fuzzy logic systems, tracking control.
## I. INTRODUCTION
Reinforcement learning (RL), as an effective technique, is competent in facilitating adaptive optimization strategies [1], [2]. Generally, optimization is implemented by seeking the minimized or maximized cost function to solve the Hamilton-Jacobi-Bellman (HJB) equation [3]. However, acquiring the analytic solution of the HJB equation directly for nonlinear dynamic systems is challenging [4]. Therefore, many researchers have proposed numerical solutions of the HJB equation [5]. Adaptive dynamic programming (ADP), as an advanced numerical solving method, has been widely applied to achieve the optimal tracking control of nonlinear systems.
In contrast to traditional dynamic programming, ADP can be utilized to design the optimal controller forward in time, which effectively avoids the "curse of dimensionality" [6], [7]. In addition, an improved ADP framework consisting of an actor-critic algorithm and fuzzy logic systems is constructed. So far, many scholars have devoted themselves to developing ADP techniques [8]-[10]. In [11], an ADP method was implemented to solve a new neuro-optimal control problem of nonlinear dynamic systems by employing one critic and two actor networks. In [12], a neural-network-based ADP method was developed to solve the optimal tracking control problem of a class of nonlinear systems with unmatched uncertainties. In [13], a linear singularly perturbed system was studied by employing the ADP framework to achieve optimal control. These works concentrated on the application and development of ADP and RL, but they did not consider the mechanical wear of the actuator and the energy consumption. As a result, it is necessary to introduce an event-triggered mechanism into the control design to reduce mechanical wear and save energy in engineering practice [14].
The key of an event-triggered control algorithm is the triggering threshold [14]. When a signal exceeds the triggering threshold, the control policy is updated [15], [16]. In this paper, an event-triggered optimal tracking control scheme for uncertain nonlinear systems based on RL is developed. There are two main contributions:
(1) An improved ADP and RL algorithm involving an actor-critic structure and fuzzy logic systems is developed, which derives the optimal control strategy and effectively balances the tracking control performance and the control costs.
(2) An event-triggered mechanism is incorporated into the controller design, which avoids unnecessary control updates and thus reduces mechanical wear and saves energy in engineering practice.
The organization of this paper is as follows. The system dynamics and fuzzy logic systems are stated in Section II. The optimal controller and the event-triggered controller are designed in Sections III and IV, respectively. The stability analysis, simulation, and conclusion are given in Sections V, VI, and VII, respectively.
## II. Problem formulation and preliminaries
## A. System dynamic description
Consider a class of continuous-time nonlinear dynamic systems which can be described by
$$
\dot{x}\left( t\right) = f\left( {x\left( t\right) }\right) + g\left( {x\left( t\right) }\right) u\left( t\right) + \mathcal{D}\left( {x\left( t\right) }\right) \tag{1}
$$
where $x\left( t\right) \in {\mathbb{R}}^{n}$ is the state variable, $u\left( t\right) \in {\mathbb{R}}^{m}$ is the control input, $f\left( {x\left( t\right) }\right) \in {\mathbb{R}}^{n}$ and $g\left( {x\left( t\right) }\right) \in {\mathbb{R}}^{n \times m}$ are an unknown smooth function and an unknown smooth function matrix, respectively, and $\mathcal{D}\left( {x\left( t\right) }\right)$ is the unknown disturbance with $\parallel \mathcal{D}\left( {x\left( t\right) }\right) \parallel \leq {\lambda }_{\mathcal{D}}$ , where ${\lambda }_{\mathcal{D}}$ is a positive parameter.
To achieve tracking control, a reference signal is given by
$$
\dot{r}\left( t\right) = \delta \left( {r\left( t\right) }\right) \tag{2}
$$
where $r\left( t\right) \in {\mathbb{R}}^{n}$ is a bounded desired trajectory and $\delta \left( {r\left( t\right) }\right)$ is a Lipschitz continuous function. Let the tracking error be
$$
e\left( t\right) = x\left( t\right) - r\left( t\right) \tag{3}
$$
Combining equations (1), (2), and (3), one obtains the following tracking-error dynamics
$$
\dot{e}\left( t\right) = f\left( {x\left( t\right) }\right) + g\left( {x\left( t\right) }\right) u\left( t\right) + \mathcal{D}\left( {x\left( t\right) }\right) - \delta \left( {r\left( t\right) }\right) \tag{4}
$$
Noting that $x\left( t\right) = e\left( t\right) + r\left( t\right)$ , equation (4) can be rewritten as
$$
\dot{e}\left( t\right) = f\left( {e\left( t\right) + r\left( t\right) }\right) - \delta \left( {r\left( t\right) }\right) + g\left( {e\left( t\right) + r\left( t\right) }\right) u\left( t\right) + \mathcal{D}\left( {e\left( t\right) + r\left( t\right) }\right) \tag{5}
$$
For the sake of a compact description, define $\xi \left( t\right) =$ ${\left\lbrack {e}^{\mathrm{T}}\left( t\right) ,{r}^{\mathrm{T}}\left( t\right) \right\rbrack }^{\mathrm{T}} \in {\mathbb{R}}^{2n}$ ; then the dynamic systems (2) and (5) can be augmented into the concise form
$$
\dot{\xi }\left( t\right) = F\left( {\xi \left( t\right) }\right) + G\left( {\xi \left( t\right) }\right) u\left( t\right) + \Delta \mathbb{D}\left( {\xi \left( t\right) }\right) \tag{6}
$$
where $F\left( {\xi \left( t\right) }\right)$ and $G\left( {\xi \left( t\right) }\right)$ are new matrices and $\Delta \mathbb{D}\left( {\xi \left( t\right) }\right)$ can still be regarded as a new uncertain term. In particular, $F\left( {\xi \left( t\right) }\right) = \left\lbrack \begin{matrix} f\left( {e\left( t\right) + r\left( t\right) }\right) - \delta \left( {r\left( t\right) }\right) \\ \delta \left( {r\left( t\right) }\right) \end{matrix}\right\rbrack$ , $G\left( {\xi \left( t\right) }\right) = \left\lbrack \begin{matrix} g\left( {e\left( t\right) + r\left( t\right) }\right) \\ {0}_{n \times m} \end{matrix}\right\rbrack$ , and $\Delta \mathbb{D}\left( {\xi \left( t\right) }\right) = \left\lbrack \begin{matrix} \mathcal{D}\left( {e\left( t\right) + r\left( t\right) }\right) \\ {0}_{n \times 1} \end{matrix}\right\rbrack$ .
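The augmentation can be checked on a scalar example: stacking $\dot{e}$ from (5) on top of $\dot{r}$ from (2) must reproduce ${\left\lbrack \dot{x} - \dot{r},\dot{r}\right\rbrack }^{\mathrm{T}}$ . In the sketch below, the choices $f\left( x\right) = - x$ , $g\left( x\right) = 1$ , $\delta \left( r\right) = \cos \left( r\right)$ , and $\mathcal{D}\left( x\right) = {0.1}\sin \left( x\right)$ are illustrative assumptions, not the paper's system:

```python
import numpy as np

# Illustrative scalar dynamics (assumed): f(x) = -x, g(x) = 1,
# reference dynamics r' = cos(r), disturbance D(x) = 0.1*sin(x).
f = lambda x: -x
g = lambda x: 1.0
delta = lambda r: np.cos(r)
D = lambda x: 0.1 * np.sin(x)

x, r, u = 0.7, 0.2, -0.5
e = x - r                              # tracking error (3)

# Augmented state xi = [e, r] and its dynamics (6).
F = np.array([f(e + r) - delta(r), delta(r)])
G = np.array([g(e + r), 0.0])
DD = np.array([D(e + r), 0.0])
xi_dot = F + G * u + DD

# Consistency check against the original dynamics (1)-(2).
x_dot = f(x) + g(x) * u + D(x)
r_dot = delta(r)
assert np.allclose(xi_dot, [x_dot - r_dot, r_dot])
```

The check works because $e + r = x$, so every occurrence of $x$ in (1) can be rewritten in terms of the augmented state, which is exactly what (6) encodes.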
|
| 88 |
+
|
| 89 |
+
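The augmentation above can be sketched numerically. The following is a minimal pure-Python sketch with a scalar toy plant; the functions `f`, `g` and `delta` below are illustrative placeholders, not the paper's system:

```python
# Minimal sketch of the augmentation in (6): stack the tracking error e and
# the reference r into xi = [e, r] and build F(xi), G(xi) from f, g, delta.
# f, g, delta here are illustrative toy dynamics, not the paper's plant.

def make_augmented(f, g, delta, n):
    """Return F(xi) and G(xi) for the augmented state xi = [e; r] (length 2n)."""
    def F(xi):
        e, r = xi[:n], xi[n:]
        x = [ei + ri for ei, ri in zip(e, r)]           # x = e + r
        top = [fi - di for fi, di in zip(f(x), delta(r))]
        return top + delta(r)                            # [f(x) - delta(r); delta(r)]

    def G(xi):
        e, r = xi[:n], xi[n:]
        x = [ei + ri for ei, ri in zip(e, r)]
        return g(x) + [0.0] * n                          # [g(x); 0]
    return F, G

# Toy scalar example (n = 1): f(x) = -x, g(x) = 1, reference dynamics delta(r) = -r.
F, G = make_augmented(lambda x: [-x[0]], lambda x: [1.0], lambda r: [-r[0]], 1)
```

With $e = 1$, $r = 2$ this returns $F(\xi) = [f(3) - \delta(2),\ \delta(2)]$ and $G(\xi) = [g(3),\ 0]$, mirroring the block structure of (6).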
Undoubtedly, the new uncertain term $\Delta \mathbb{D}\left( {\xi \left( t\right) }\right)$ is still upper bounded, since

$$
\parallel \Delta \mathbb{D}\left( {\xi \left( t\right) }\right) \parallel = \parallel \mathcal{D}\left( {e\left( t\right) + r\left( t\right) }\right) \parallel = \parallel \mathcal{D}\left( {x\left( t\right) }\right) \parallel \leq {\lambda }_{\mathcal{D}} \tag{7}
$$
To accomplish tracking control of dynamic system (1) with respect to reference signal (2), the feedback controller $u\left( \xi \right)$ will be constructed. The closed-loop system is asymptotically stable under the controller $u\left( \xi \right)$ despite the uncertain and bounded term $\Delta \mathbb{D}\left( {\xi \left( t\right) }\right)$. Therefore, the optimal control policy can be obtained by considering an appropriate cost function for the subsequent nominal system, in the same manner as [5].
## B. Fuzzy logic systems
For a nonlinear continuous function $P\left( x\right)$ defined over a compact set $\mathbb{U}$ and any constant $\varepsilon > 0$, there exists a fuzzy logic system ${\omega }^{\mathrm{T}}\varphi \left( x\right)$ such that [17]

$$
\mathop{\sup }\limits_{{x \in \mathbb{U}}}\left| {P\left( x\right) - {\omega }^{\mathrm{T}}\varphi \left( x\right) }\right| \leq \varepsilon \tag{8}
$$
where $x = {\left\lbrack {x}_{1},\ldots ,{x}_{j}\right\rbrack }^{\mathrm{T}}$ is the input vector of the fuzzy logic system, $\omega = {\left\lbrack {\omega }_{1},{\omega }_{2},\ldots ,{\omega }_{L}\right\rbrack }^{\mathrm{T}} \in {\mathbb{R}}^{L}$ is the degree-of-membership vector, $L > 1$ is the number of fuzzy rules and $\varepsilon$ is the fuzzy minimum approximation error. $\varphi \left( x\right) = {\left\lbrack {\varphi }_{1}\left( x\right) ,{\varphi }_{2}\left( x\right) ,\ldots ,{\varphi }_{L}\left( x\right) \right\rbrack }^{\mathrm{T}}$ is the fuzzy basis function vector, and ${\varphi }_{l}\left( x\right)$ is selected as follows:

$$
{\varphi }_{l}\left( x\right) = \frac{\mathop{\prod }\limits_{{i = 1}}^{j}{\mu }_{{F}_{i}^{l}}\left( {x}_{i}\right) }{\mathop{\sum }\limits_{{l = 1}}^{N}\left( {\mathop{\prod }\limits_{{i = 1}}^{j}{\mu }_{{F}_{i}^{l}}\left( {x}_{i}\right) }\right) },\quad \left( {l = 1,\ldots ,L}\right) \tag{9}
$$

where ${F}_{i}^{l}\left( {i = 1,\ldots ,j;l = 1,\ldots ,N}\right)$ is the fuzzy set and ${\mu }_{{F}_{i}^{l}}\left( {x}_{i}\right)$ is the membership function.
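The normalized basis (9) can be sketched as follows, assuming Gaussian membership functions; the centers and width below are illustrative choices, not values from the paper:

```python
# A minimal sketch of the normalized fuzzy basis functions in (9), assuming
# Gaussian memberships mu_{F_i^l}(x_i) = exp(-((x_i - c_i^l)/width)^2).
# Centers and width are illustrative, not taken from the paper.
import math

def fuzzy_basis(x, centers, width=1.0):
    """centers[l][i] is the center of fuzzy set F_i^l; returns [phi_1, ..., phi_L]."""
    # Rule firing strength: product over inputs of the membership grades.
    strengths = [
        math.prod(math.exp(-((xi - c) / width) ** 2) for xi, c in zip(x, cs))
        for cs in centers
    ]
    total = sum(strengths)                 # denominator of (9)
    return [s / total for s in strengths]  # normalized basis, sums to 1

phi = fuzzy_basis([0.2, -0.1], centers=[[0.0, 0.0], [1.0, -1.0], [-1.0, 1.0]])
```

By construction the components of `phi` are positive and sum to one, and the rule whose centers lie closest to the input fires most strongly.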
## III. OPTIMAL CONTROL DESIGN
In this section, ADP comprising an actor-critic algorithm and fuzzy logic systems will be employed to design the value function ${L}^{ * }\left( \xi \right)$ and control policy ${u}^{ * }\left( \xi \right)$, as well as the degree-of-membership update laws.

In the actor-critic framework, the value function and the control policy are approximated by the critic and actor fuzzy systems, respectively. The optimal cost function (13) and the feedback controller (15) represent the value function and the control policy of the optimal tracking control problem, respectively.
Consider the nominal part of the augmented system (6), that is

$$
\dot{\xi }\left( t\right) = F\left( {\xi \left( t\right) }\right) + G\left( {\xi \left( t\right) }\right) u\left( t\right) \tag{10}
$$
For the nominal system (10), the following cost function is considered:

$$
L\left( \xi \right) = {\int }_{t}^{\infty }\left\lbrack {Q\left( {\xi \left( \tau \right) }\right) + {u}^{\mathrm{T}}\left( \tau \right) {Ru}\left( \tau \right) }\right\rbrack d\tau \tag{11}
$$

where $Q\left( \xi \right) = {\xi }^{\mathrm{T}}\mathcal{Q}\xi$, $R = {R}^{\mathrm{T}}$, and $\mathcal{Q}$ and $R$ are positive definite matrices.
Subsequently, one can define the Hamiltonian of the optimal control problem

$$
H\left( {\xi ,u\left( \xi \right) }\right) = Q\left( \xi \right) + {u}^{\mathrm{T}}\left( \xi \right) {Ru}\left( \xi \right) + {\nabla }^{\mathrm{T}}L\left( \xi \right) \left\lbrack {F\left( \xi \right) + G\left( \xi \right) u\left( \xi \right) }\right\rbrack \tag{12}
$$

where $\nabla L\left( \xi \right)$ denotes the partial derivative of $L\left( \xi \right)$ with respect to $\xi$.
Generally, the optimal controller can be derived only once the optimal cost function has been found. The optimal cost function is defined as

$$
{L}^{ * }\left( \xi \right) = \mathop{\min }\limits_{u}{\int }_{t}^{\infty }\left\lbrack {Q\left( {\xi \left( \tau \right) }\right) + {u}^{\mathrm{T}}\left( \tau \right) {Ru}\left( \tau \right) }\right\rbrack d\tau \tag{13}
$$
The optimal cost function is the solution of the HJB equation, which satisfies

$$
H\left( {\xi ,{u}^{ * }\left( \xi \right) ,{L}^{ * }\left( \xi \right) }\right) = Q\left( \xi \right) + {u}^{ * \mathrm{T}}\left( \xi \right) R{u}^{ * }\left( \xi \right) + {\nabla }^{\mathrm{T}}{L}^{ * }\left( \xi \right) \left\lbrack {F\left( \xi \right) + G\left( \xi \right) {u}^{ * }\left( \xi \right) }\right\rbrack = 0 \tag{14}
$$
Consequently, the optimal feedback controller is obtained as

$$
{u}^{ * }\left( \xi \right) = - \frac{1}{2}{R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) \nabla {L}^{ * }\left( \xi \right) \tag{15}
$$
One needs to solve the HJB equation (14) to obtain the optimal controller (15) for the nominal system (10). However, the solution of the HJB equation (14) is difficult to obtain directly. Therefore, fuzzy logic systems and an adaptive actor-critic structure will be utilized to find an estimated solution.
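As a sanity check on (14) and (15), consider the linear-quadratic special case (illustrative only; the matrices $A$, $B$ and $P$ are not objects from this paper). With $F(\xi) = A\xi$, $G(\xi) = B$ and the quadratic guess ${L}^{*}(\xi) = {\xi}^{\mathrm{T}}P\xi$ with $P = P^{\mathrm{T}} > 0$:

```latex
% LQ special case: F(\xi)=A\xi, G(\xi)=B, L^*(\xi)=\xi^{\mathrm{T}}P\xi
\nabla L^{*}(\xi) = 2P\xi
\quad\Longrightarrow\quad
u^{*}(\xi) = -\tfrac{1}{2}R^{-1}B^{\mathrm{T}}\,2P\xi = -R^{-1}B^{\mathrm{T}}P\xi
% Substituting into (14) and cancelling the common factor \xi^{\mathrm{T}}(\cdot)\xi:
A^{\mathrm{T}}P + PA - PBR^{-1}B^{\mathrm{T}}P + \mathcal{Q} = 0
```

That is, (14) collapses to the familiar algebraic Riccati equation and (15) to the LQR gain, which is why fuzzy approximation is only needed in the general nonlinear case.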
Fuzzy logic systems are employed to reconstruct the value function ${L}^{ * }\left( \xi \right)$:

$$
{L}^{ * }\left( \xi \right) = {\omega }^{\mathrm{T}}\varphi \left( \xi \right) + \varepsilon \left( \xi \right) \tag{16}
$$
where $\omega$ is the degree of membership of the fuzzy logic systems, $\varphi \left( \xi \right)$ is the fuzzy basis function vector and $\varepsilon \left( \xi \right)$ is the unknown fuzzy approximation error.
Combining (15) and (16) yields the optimal controller described by fuzzy logic systems as

$$
{u}^{ * }\left( \xi \right) = - \frac{1}{2}{R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) \left\lbrack {{\nabla }^{\mathrm{T}}\varphi \left( \xi \right) \omega + \nabla \varepsilon \left( \xi \right) }\right\rbrack \tag{17}
$$
For clarity of analysis, define the non-negative definite matrix

$$
A\left( \xi \right) = \nabla \varphi \left( \xi \right) G\left( \xi \right) {R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) {\nabla }^{\mathrm{T}}\varphi \left( \xi \right) \tag{18}
$$
Combining (16), (17) and (18), one can derive the HJB equation reconstructed by fuzzy logic systems:

$$
H\left( {\xi ,{u}^{ * }\left( \xi \right) ,{L}^{ * }\left( \xi \right) }\right) = Q\left( \xi \right) + {\omega }^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{4}{\omega }^{\mathrm{T}}A\left( \xi \right) \omega + {\varepsilon }_{HJB} = 0 \tag{19}
$$
and the residual error ${\varepsilon }_{HJB}$ is expressed as

$$
\begin{aligned}
{\varepsilon }_{HJB} = & {\nabla }^{\mathrm{T}}\varepsilon \left( \xi \right) \left( {F\left( \xi \right) + G\left( \xi \right) {u}^{ * }\left( \xi \right) }\right) \\
& + \frac{1}{4}{\nabla }^{\mathrm{T}}\varepsilon \left( \xi \right) G\left( \xi \right) {R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) \nabla \varepsilon \left( \xi \right) \\
& + \frac{1}{2}{\nabla }^{\mathrm{T}}\varepsilon \left( \xi \right) G\left( \xi \right) {R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) {\nabla }^{\mathrm{T}}\varphi \left( \xi \right) \omega
\end{aligned} \tag{20}
$$
The estimates of the value function ${L}^{ * }\left( \xi \right)$ and the control policy ${u}^{ * }\left( \xi \right)$ can be constructed by the critic and actor fuzzy logic systems, respectively:

$$
{\widehat{L}}^{ * }\left( \xi \right) = {\widehat{\omega }}_{c}^{\mathrm{T}}\varphi \left( \xi \right) \tag{21}
$$

$$
{\widehat{u}}^{ * }\left( \xi \right) = - \frac{1}{2}{R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) {\nabla }^{\mathrm{T}}\varphi \left( \xi \right) {\widehat{\omega }}_{a} \tag{22}
$$

where ${\widehat{\omega }}_{a}$ and ${\widehat{\omega }}_{c}$ are the estimated degrees of membership of the actor and the critic, respectively.
Noticing (21) and (22), one can derive the following estimated Hamiltonian:

$$
\widehat{H}\left( {\xi ,{\widehat{u}}^{ * }\left( \xi \right) ,{\widehat{L}}^{ * }\left( \xi \right) }\right) = Q\left( \xi \right) + \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a} + {\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a} \tag{23}
$$
To obtain the degree-of-membership update laws of the fuzzy logic systems, define the objective function ${E}_{c} = \frac{1}{2}{e}_{c}^{\mathrm{T}}{e}_{c}$, where ${e}_{c} = \widehat{H}\left( {\xi ,{\widehat{u}}^{ * }\left( \xi \right) ,{\widehat{L}}^{ * }\left( \xi \right) }\right) - H\left( {\xi ,{u}^{ * }\left( \xi \right) ,{L}^{ * }\left( \xi \right) }\right)$ is the Bellman error. To overcome the difficulty of searching for the controller and the adaptive laws, the following assumption is made and an additional term is constructed to improve the learning process.
Assumption 1: [5] Let ${L}_{s}\left( \xi \right)$ be a continuously differentiable Lyapunov function candidate satisfying

$$
{\dot{L}}_{s}\left( \xi \right) = {\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \left( {F\left( \xi \right) + G\left( \xi \right) {u}^{ * }\left( \xi \right) }\right) < 0 \tag{24}
$$

Then, there exists a positive definite matrix $\mathfrak{K} \in {\mathbb{R}}^{{2n} \times {2n}}$ ensuring that

$$
{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \left( {F\left( \xi \right) + G\left( \xi \right) {u}^{ * }\left( \xi \right) }\right) = - {\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \mathfrak{K}\nabla {L}_{s}\left( \xi \right) \leq - {\lambda }_{\min }\left( \mathfrak{K}\right) {\begin{Vmatrix}\nabla {L}_{s}\left( \xi \right) \end{Vmatrix}}^{2} \tag{25}
$$
Based on gradient descent, the degree-of-membership update laws of the fuzzy logic systems are designed by considering the two Hamiltonians $H\left( {\xi ,{u}^{ * }\left( \xi \right) ,{L}^{ * }\left( \xi \right) }\right)$ and $\widehat{H}\left( {\xi ,{\widehat{u}}^{ * }\left( \xi \right) ,{\widehat{L}}^{ * }\left( \xi \right) }\right)$, which gives

$$
\begin{aligned}
{\dot{\widehat{\omega }}}_{a} = & - {\alpha }_{a}\left( {\frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{a} - \frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{c}}\right) \left( {Q\left( \xi \right) + \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right. \\
& \left. {+{\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \\
& + \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right)
\end{aligned} \tag{26}
$$

$$
\begin{aligned}
{\dot{\widehat{\omega }}}_{c} = & - {\alpha }_{c}\left( {\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \left( {Q\left( \xi \right) + \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right. \\
& \left. {+{\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \\
& + \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right)
\end{aligned} \tag{27}
$$
where ${\alpha }_{a}$ and ${\alpha }_{c}$ are the basic learning parameters of the actor and critic systems, respectively, and ${\alpha }_{s}$ is the adjustable parameter of the additional term.
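A minimal numerical sketch of (26) and (27), assuming a single fuzzy rule so that every quantity is scalar, and a forward-Euler discretization with step `dt`. All inputs (`A`, `q`, `dphiF`, `extra`) are assumed precomputed at the current state, and the names are illustrative rather than from the paper:

```python
# One forward-Euler step of the update laws (26)-(27) in the scalar case
# (one fuzzy rule, so A(xi), Q(xi), etc. are numbers). 'extra' stands for the
# precomputed stabilizing term  grad_phi * G * R^{-1} * G^T * grad_Ls.

def actor_critic_step(wa, wc, A, q, dphiF, extra, alpha_a, alpha_c, alpha_s, dt):
    """Return updated (wa, wc) after one Euler step of (26) and (27)."""
    # Estimated Bellman residual: the common bracketed factor in (26)/(27).
    bellman = q + 0.25 * wa * A * wa + wc * dphiF - 0.5 * wc * A * wa
    dwa = -alpha_a * (0.5 * A * wa - 0.5 * A * wc) * bellman + 0.5 * alpha_s * extra
    dwc = -alpha_c * (dphiF - 0.5 * A * wa) * bellman + 0.5 * alpha_s * extra
    return wa + dt * dwa, wc + dt * dwc

wa, wc = actor_critic_step(wa=1.0, wc=0.5, A=2.0, q=1.0, dphiF=0.3,
                           extra=0.1, alpha_a=0.2, alpha_c=0.2, alpha_s=0.1, dt=0.01)
```

In practice both weights are vectors and the continuous-time laws would be integrated along the system trajectory; the scalar step above only illustrates the gradient structure shared by (26) and (27).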
## IV. EVENT-TRIGGERED CONTROL IMPLEMENTATION
The event-triggering mechanism is defined as

$$
{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) = {u}^{ * }\left( {\xi \left( {t}_{d}\right) }\right) ,\forall t \in \left\lbrack {{t}_{d},{t}_{d + 1}}\right) \tag{28}
$$

$$
{t}_{d + 1} = \inf \left\{ {t \in \mathbb{R} \mid \left| {\Gamma \left( t\right) }\right| \geq \Delta \left| {{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) }\right| + M}\right\} ,\quad {t}_{1} = 0 \tag{29}
$$
where the event-triggered error is $\Gamma \left( t\right) = {u}^{ * }\left( {\xi \left( t\right) }\right) - {u}_{e}^{ * }\left( {\xi \left( t\right) }\right)$ and the controller update times are ${t}_{d}$, $d \in {Z}^{ + }$. The design parameters satisfy $0 < \Delta < 1$ and $M > 0$.
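The triggering rule (29) can be sketched as a simple check, here for a scalar control; the `Delta` and `M` values are illustrative:

```python
# A minimal sketch of the triggering rule (29): an event fires when the gap
# |Gamma(t)| between the current optimal control and the last applied control
# exceeds Delta * |u_held| + M. Scalar control for simplicity; Delta and M
# are illustrative values, not from the paper.

def should_trigger(u_current, u_held, Delta=0.5, M=0.05):
    """True if |u*(xi(t)) - u_e*(xi(t))| >= Delta * |u_e*(xi(t))| + M."""
    gamma = u_current - u_held          # event-triggered error Gamma(t)
    return abs(gamma) >= Delta * abs(u_held) + M

# Between events the held control is reused; at a trigger it is refreshed.
assert should_trigger(1.0, 0.5) is True      # |0.5| >= 0.5*|0.5| + 0.05 = 0.30
assert should_trigger(0.55, 0.5) is False    # |0.05| < 0.30
```

The additive constant `M` keeps the inter-event threshold strictly positive, which is what rules out Zeno behavior in the analysis below.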
When no event is triggered, the control policy is held at ${u}^{ * }\left( {\xi \left( {t}_{d}\right) }\right)$. Otherwise, the control policy is updated and denoted ${u}_{e}^{ * }\left( {\xi \left( {t}_{d + 1}\right) }\right)$. Introduce two continuous, time-varying parameters ${\rho }_{1}\left( t\right)$ and ${\rho }_{2}\left( t\right)$ with $\left| {{\rho }_{1}\left( t\right) }\right| \leq 1$ and $\left| {{\rho }_{2}\left( t\right) }\right| \leq 1$, such that ${u}^{ * }\left( {\xi \left( t\right) }\right) = \left( {1 + {\rho }_{1}\left( t\right) \Delta }\right) {u}_{e}^{ * }\left( {\xi \left( t\right) }\right) + {\rho }_{2}\left( t\right) M$. The event-triggered controller can then be rewritten as

$$
{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) = \frac{{u}^{ * }\left( {\xi \left( t\right) }\right) - {\rho }_{2}\left( t\right) M}{1 + {\rho }_{1}\left( t\right) \Delta } \tag{30}
$$
Combining (17) and (30) yields

$$
{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) = - \frac{1}{2\rho }{R}^{-1}\left\lbrack {{G}^{\mathrm{T}}\left( {\xi \left( t\right) }\right) {\nabla }^{\mathrm{T}}\varphi \left( {\xi \left( t\right) }\right) \omega + {\varepsilon }_{e}\left( {\xi \left( t\right) }\right) }\right\rbrack \tag{31}
$$

where $\rho = 1 + {\rho }_{1}\left( t\right) \Delta$ and ${\varepsilon }_{e}\left( {\xi \left( t\right) }\right) = {G}^{\mathrm{T}}\left( {\xi \left( t\right) }\right) \nabla \varepsilon \left( {\xi \left( t\right) }\right) + 2{\rho }_{2}\left( t\right) {RM}$.
Similarly, based on the actor fuzzy logic system, the estimated event-triggered controller can be obtained as

$$
{\widehat{u}}_{e}^{ * }\left( {\xi \left( t\right) }\right) = - \frac{1}{2\rho }{R}^{-1}{G}^{\mathrm{T}}\left( {\xi \left( t\right) }\right) {\nabla }^{\mathrm{T}}\varphi \left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a} \tag{32}
$$
Considering the HJB equation (14), the value function (21) and the event-triggered controller (32), one can obtain the following Hamiltonian:

$$
\begin{aligned}
{\widehat{H}}_{e}\left( {\xi \left( t\right) ,{\widehat{u}}_{e}^{ * }\left( {\xi \left( t\right) }\right) ,{\widehat{L}}^{ * }\left( {\xi \left( t\right) }\right) }\right) = & Q\left( {\xi \left( t\right) }\right) + \frac{1}{4{\rho }^{2}}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a} \\
& + {\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( {\xi \left( t\right) }\right) F\left( {\xi \left( t\right) }\right) - \frac{1}{2\rho }{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}
\end{aligned} \tag{33}
$$
Subsequently, the degree-of-membership update laws under the event-triggered mechanism can be constructed as

$$
\begin{aligned}
{\dot{\widehat{\omega }}}_{ae} = & - {\alpha }_{a}\left( {\frac{1}{2{\rho }^{2}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a} - \frac{1}{2\rho }A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{c}}\right) \left( {Q\left( {\xi \left( t\right) }\right) + \frac{1}{4{\rho }^{2}}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}}\right. \\
& \left. {+{\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( {\xi \left( t\right) }\right) F\left( {\xi \left( t\right) }\right) - \frac{1}{2\rho }{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}}\right) \\
& + \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( {\xi \left( t\right) }\right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( {\xi \left( t\right) }\right)
\end{aligned} \tag{34}
$$

$$
\begin{aligned}
{\dot{\widehat{\omega }}}_{ce} = & - {\alpha }_{c}\left( {\nabla \varphi \left( {\xi \left( t\right) }\right) F\left( {\xi \left( t\right) }\right) - \frac{1}{2\rho }A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}}\right) \left( {Q\left( {\xi \left( t\right) }\right) + \frac{1}{4{\rho }^{2}}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}}\right. \\
& \left. {+{\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( {\xi \left( t\right) }\right) F\left( {\xi \left( t\right) }\right) - \frac{1}{2\rho }{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}}\right) \\
& + \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( {\xi \left( t\right) }\right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( {\xi \left( t\right) }\right)
\end{aligned} \tag{35}
$$
Theorem 1: Consider the dynamic system (1) with the optimal feedback controller (22), the event-triggered controller (32) and the degree-of-membership update laws (26), (27), (34) and (35). Then, based on Lyapunov theory, all signals of the closed-loop system are uniformly ultimately bounded (UUB).
For the sake of investigating the stability of the error dynamics and the closed-loop states, the following assumption is given.
Assumption 2: On a compact set $\Omega$, $G\left( \xi \right)$, $\nabla \varphi \left( \xi \right)$, $\nabla \varepsilon \left( \xi \right)$, ${\xi }^{ * }$ and ${\varepsilon }_{HJB}$ are bounded: $\parallel G\left( \xi \right) \parallel \leq {\lambda }_{g}$, $\parallel \nabla \varphi \left( \xi \right) \parallel \leq {\lambda }_{\varphi }$, $\parallel \nabla \varepsilon \left( \xi \right) \parallel \leq {\lambda }_{\varepsilon }$, $\begin{Vmatrix}{\xi }^{ * }\end{Vmatrix} \leq {\lambda }_{\xi }$ and $\begin{Vmatrix}{\varepsilon }_{HJB}\end{Vmatrix} \leq {\lambda }_{HJB}$, where ${\lambda }_{g}$, ${\lambda }_{\varphi }$, ${\lambda }_{\varepsilon }$, ${\lambda }_{\xi }$ and ${\lambda }_{HJB}$ are positive constants.
## V. STABILITY ANALYSIS
In this section, Lyapunov theory will be employed to prove Theorem 1.
Case 1: Events are not triggered. Consider the feedback controller (22) and the related degree-of-membership update laws (26) and (27). The HJB equation (19) can be rearranged as

$$
Q\left( \xi \right) = - {\omega }^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) + \frac{1}{4}{\omega }^{\mathrm{T}}A\left( \xi \right) \omega - {\varepsilon }_{HJB} \tag{36}
$$
Considering the degree-of-membership update laws (26) and (27), and defining the estimation errors ${\widetilde{\omega }}_{a} = \omega - {\widehat{\omega }}_{a}$ and ${\widetilde{\omega }}_{c} = \omega - {\widehat{\omega }}_{c}$, so that ${\dot{\widetilde{\omega }}}_{a} = - {\dot{\widehat{\omega }}}_{a}$ and ${\dot{\widetilde{\omega }}}_{c} = - {\dot{\widehat{\omega }}}_{c}$, one has
$$
\begin{aligned}
{\dot{\widetilde{\omega }}}_{a} = & - {\alpha }_{a}\left( {-\frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{a} + \frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{c}}\right) \left( {Q\left( \xi \right) + \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right. \\
& \left. {+{\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \\
& - \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right)
\end{aligned} \tag{37}
$$
$$
\begin{aligned}
{\dot{\widetilde{\omega }}}_{c} = & - {\alpha }_{c}\left( {-\nabla \varphi \left( \xi \right) F\left( \xi \right) + \frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \left( {Q\left( \xi \right) + \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right. \\
& \left. {+{\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \\
& - \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right)
\end{aligned} \tag{38}
$$
Then the following Lyapunov function can be chosen:

$$
S\left( t\right) = \frac{1}{2{\alpha }_{a}}{\widetilde{\omega }}_{a}^{\mathrm{T}}{\widetilde{\omega }}_{a} + \frac{1}{2{\alpha }_{c}}{\widetilde{\omega }}_{c}^{\mathrm{T}}{\widetilde{\omega }}_{c} + \frac{{\alpha }_{s}}{{\alpha }_{a}}{L}_{s}\left( \xi \right) + \frac{{\alpha }_{s}}{{\alpha }_{c}}{L}_{s}\left( \xi \right) \tag{39}
$$
Its derivative is

$$
\begin{aligned}
\dot{S}\left( t\right) = & \frac{1}{{\alpha }_{a}}{\widetilde{\omega }}_{a}^{\mathrm{T}}{\dot{\widetilde{\omega }}}_{a} + \frac{1}{{\alpha }_{c}}{\widetilde{\omega }}_{c}^{\mathrm{T}}{\dot{\widetilde{\omega }}}_{c} + \frac{{\alpha }_{s}}{{\alpha }_{a}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi } + \frac{{\alpha }_{s}}{{\alpha }_{c}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi } \\
= & \left( {{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{4}{\omega }^{\mathrm{T}}A\left( \xi \right) \omega - \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right. \\
& \left. {+{\varepsilon }_{HJB} + \frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \times \left( {-{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) }\right. \\
& \left. {+\frac{1}{2}{\widetilde{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{c} + \frac{1}{2}{\widetilde{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a} - \frac{1}{2}{\widetilde{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \\
& - \frac{{\alpha }_{s}}{2{\alpha }_{a}}{\widetilde{\omega }}_{a}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right) - \frac{{\alpha }_{s}}{2{\alpha }_{c}}{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right) \\
& + \frac{{\alpha }_{s}}{{\alpha }_{a}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi } + \frac{{\alpha }_{s}}{{\alpha }_{c}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi }
\end{aligned} \tag{40}
$$
Substituting (22) into (10) and observing the dynamic system ${\dot{\xi }}^{ * } = F\left( \xi \right) + G\left( \xi \right) {u}^{ * }\left( \xi \right)$ under the optimal controller ${u}^{ * }\left( \xi \right)$, one can obtain

$$
\nabla \varphi \left( \xi \right) F\left( \xi \right) = \nabla \varphi \left( \xi \right) \dot{\xi } + \frac{1}{2}\nabla \varphi \left( \xi \right) G\left( \xi \right) {R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) {\nabla }^{\mathrm{T}}\varphi \left( \xi \right) {\widehat{\omega }}_{a} \tag{41}
$$

$$
\dot{\xi } = {\dot{\xi }}^{ * } + \frac{1}{2}G{R}^{-1}{G}^{\mathrm{T}}\left( {{\nabla }^{\mathrm{T}}\varphi \left( \xi \right) {\widetilde{\omega }}_{a} + \nabla \varepsilon \left( \xi \right) }\right) \tag{42}
$$
Considering the above formulations, one can further derive

$$
\begin{aligned}
\dot{S}\left( t\right) = & \left( {{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) {\dot{\xi }}^{ * } + \frac{1}{2}{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla \varepsilon \left( \xi \right) }\right. \\
& \left. {+\frac{1}{2}{\widetilde{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widetilde{\omega }}_{a} - \frac{1}{2}{\widetilde{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) \omega + \frac{1}{4}{\widetilde{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widetilde{\omega }}_{a} + {\varepsilon }_{HJB}}\right) \\
& \times \left( {-{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) {\dot{\xi }}^{ * } - \frac{1}{2}{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla \varepsilon \left( \xi \right) }\right. \\
& \left. {-{\widetilde{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widetilde{\omega }}_{a} - \frac{1}{2}{\widetilde{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widetilde{\omega }}_{a}}\right) \\
& - \frac{{\alpha }_{s}}{2{\alpha }_{a}}{\widetilde{\omega }}_{a}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right) - \frac{{\alpha }_{s}}{2{\alpha }_{c}}{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right) \\
& + \frac{{\alpha }_{s}}{{\alpha }_{a}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi } + \frac{{\alpha }_{s}}{{\alpha }_{c}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi }
\end{aligned} \tag{43}
$$
Next, equation (43) can be expanded, and mathematical operations based on Assumption 2 yield
$$
\begin{aligned}
\dot{S}\left( t\right) \leq & - {\lambda }_{1}{\begin{Vmatrix}{\widetilde{\omega }}_{a}\end{Vmatrix}}^{4} - {\lambda }_{2}{\begin{Vmatrix}{\widetilde{\omega }}_{c}\end{Vmatrix}}^{2} + {\lambda }_{3} \\
& + \frac{{\alpha }_{s}}{2{\alpha }_{a}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla \varepsilon \left( \xi \right) + \frac{{\alpha }_{s}}{{\alpha }_{a}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \left( {F\left( \xi \right) + G{u}^{ * }\left( \xi \right) }\right) \\
& + \frac{{\alpha }_{s}}{2{\alpha }_{c}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla \varepsilon \left( \xi \right) + \frac{{\alpha }_{s}}{{\alpha }_{c}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \left( {F\left( \xi \right) + G{u}^{ * }\left( \xi \right) }\right)
\end{aligned} \tag{44}
$$

where ${\lambda }_{1}$, ${\lambda }_{2}$ and ${\lambda }_{3}$ are positive constants.
Considering Assumption 1 and equation (44), one can further derive

$$
\begin{aligned}
\dot{S}\left( t\right) \leq & - {\lambda }_{1}{\begin{Vmatrix}{\widetilde{\omega }}_{a}\end{Vmatrix}}^{4} - {\lambda }_{2}{\begin{Vmatrix}{\widetilde{\omega }}_{c}\end{Vmatrix}}^{2} + {\lambda }_{\partial } \\
& - {\lambda }_{\min }\left( \mathfrak{K}\right) {\alpha }_{s}\left( {\frac{1}{{\alpha }_{a}} + \frac{1}{{\alpha }_{c}}}\right) {\left( \begin{Vmatrix}\nabla {L}_{s}\left( \xi \right) \end{Vmatrix} - \frac{{\lambda }_{g}^{2}{\lambda }_{\varepsilon }^{2}{\begin{Vmatrix}{R}^{-1}\end{Vmatrix}}^{2}}{4{\lambda }_{\min }\left( \mathfrak{K}\right) }\right) }^{2}
\end{aligned} \tag{45}
$$

where ${\lambda }_{\partial } = {\lambda }_{3} + \frac{{\lambda }_{g}^{4}{\lambda }_{\varepsilon }^{4}{\begin{Vmatrix}{R}^{-1}\end{Vmatrix}}^{4}}{16{\lambda }_{\min }\left( \mathfrak{K}\right) }$.
As a result, if $\begin{Vmatrix}{\widetilde{\omega }}_{a}\end{Vmatrix} \geq \sqrt[4]{\frac{{\lambda }_{\partial }}{{\lambda }_{1}}}$ or $\begin{Vmatrix}{\widetilde{\omega }}_{c}\end{Vmatrix} \geq \sqrt{\frac{{\lambda }_{\partial }}{{\lambda }_{2}}}$ or
|
| 546 |
+
|
| 547 |
+
$\begin{Vmatrix}{\nabla {L}_{s}\left( \xi \right) }\end{Vmatrix} \geq \sqrt{\frac{{\lambda }_{\partial }}{{\lambda }_{\min }\left( \mathfrak{K}\right) {\alpha }_{s}\left( {\frac{1}{{\alpha }_{a}} + \frac{1}{{\alpha }_{c}}}\right) }} + \frac{{{\lambda }_{g}}^{2}{{\lambda }_{\varepsilon }}^{2}{\left( \begin{Vmatrix}{R}^{-1}\end{Vmatrix}\right) }^{2}}{4{\lambda }_{\min }\left( \mathfrak{K}\right) }$ hold, $S\left( t\right) \leq 0$ will be satisfied. Finally, one can conclude that all signals are UUB.
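As a quick numeric illustration of these UUB radii, the following minimal sketch evaluates the three bounds above under assumed constants; the paper does not report numeric values for ${\lambda }_{1}$, ${\lambda }_{2}$, ${\lambda }_{3}$, ${\lambda }_{g}$, ${\lambda }_{\varepsilon }$, $\begin{Vmatrix}{R}^{-1}\end{Vmatrix}$ or ${\lambda }_{\min }\left( \mathfrak{K}\right)$, so every constant here is illustrative.

```python
import math

# Hypothetical constants for illustration only (not taken from the paper).
lam1, lam2, lam3 = 2.0, 1.5, 0.1
lam_g, lam_eps, R_inv_norm, lam_min_K = 1.0, 0.2, 15.0, 0.5
alpha_s, alpha_a, alpha_c = 1e5, 1e-3, 1.0

# lambda_partial = lambda_3 + lam_g^4 lam_eps^4 ||R^{-1}||^4 / (16 lam_min(K))
lam_d = lam3 + (lam_g ** 4 * lam_eps ** 4 * R_inv_norm ** 4) / (16 * lam_min_K)

# Radii outside of which the derivative of the Lyapunov function is <= 0
r_a = (lam_d / lam1) ** 0.25                  # bound on ||omega_tilde_a||
r_c = math.sqrt(lam_d / lam2)                 # bound on ||omega_tilde_c||
r_s = math.sqrt(lam_d / (lam_min_K * alpha_s * (1 / alpha_a + 1 / alpha_c))) \
    + (lam_g ** 2 * lam_eps ** 2 * R_inv_norm ** 2) / (4 * lam_min_K)

print(r_a, r_c, r_s)
```

Shrinking the residual-error constant `lam_eps` shrinks `lam_d` and hence all three radii, which matches the intuition that a better fuzzy approximation tightens the ultimate bound.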

Case 2: Events are triggered. Consider the event-triggered controller (32) and the degree-of-membership update laws (34) and (35).
Choose the following Lyapunov function

$$
{S}_{e}\left( t\right) = \frac{1}{2{\alpha }_{a}}{\widetilde{\omega }}_{ae}^{\mathrm{T}}{\widetilde{\omega }}_{ae} + \frac{1}{2{\alpha }_{c}}{\widetilde{\omega }}_{ce}^{\mathrm{T}}{\widetilde{\omega }}_{ce} + \frac{{\alpha }_{s}}{{\alpha }_{a}}{L}_{s}\left( \xi \right) + \frac{{\alpha }_{s}}{{\alpha }_{c}}{L}_{s}\left( \xi \right) \tag{46}
$$

Following the same proof as in Case 1, we can demonstrate that all signals are UUB.

Motivated by [14], the derivative of the event-triggered function can be written as
$$
\frac{d}{dt}\left| {\Gamma \left( t\right) }\right| = \frac{d}{dt}{\left( \Gamma \left( t\right) \Gamma \left( t\right) \right) }^{\frac{1}{2}} = \operatorname{sgn}\left( {\Gamma \left( t\right) }\right) \dot{\Gamma }\left( t\right) \leq \left| {{\dot{u}}^{ * }\left( {\xi \left( t\right) }\right) }\right| \tag{47}
$$

Because all signals are UUB, there surely exists a positive parameter $\kappa$ satisfying
$$
\left| {{\dot{u}}^{ * }\left( {\xi \left( t\right) }\right) }\right| \leq \kappa \tag{48}
$$

According to the event-triggered mechanism (28) and (29), one can derive that $\Gamma \left( {t}_{d}\right) = 0$ and $\mathop{\lim }\limits_{{t \rightarrow {t}_{d + 1}}}\Gamma \left( t\right) = \Delta \left| {{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) }\right| + M$. Combining equations (47) and (48) and performing some mathematical operations, the minimal inter-execution time ${t}^{ * } = {t}_{d + 1} - {t}_{d}$ satisfies ${t}^{ * } > \frac{\Delta \left| {{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) }\right| + M}{\kappa },\forall t \in \left\lbrack {{t}_{d},{t}_{d + 1}}\right)$. Consequently, it is guaranteed that Zeno behavior does not occur.
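The triggering logic above can be sketched numerically: reset the gap $\Gamma$ at each event and fire a new event when $\left| \Gamma \right|$ reaches the relative-plus-absolute threshold $\Delta \left| {u}^{ * }\right| + M$. The control signal `u_star` below is a smooth stand-in (not the paper's controller); $\Delta$ and $M$ are the values used in Section VI.

```python
import math

Delta, M = 0.39, 0.001      # triggering parameters from Section VI
dt, T = 1e-3, 10.0

def u_star(t):
    # Assumed smooth stand-in for the optimal control signal;
    # |d u_star / dt| <= 0.5 here, so we may take kappa = 0.5.
    return math.sin(0.5 * t)

kappa = 0.5
events, u_held = [0.0], u_star(0.0)
t = 0.0
while t < T:
    t += dt
    gap = abs(u_star(t) - u_held)            # |Gamma(t)| since the last event
    if gap >= Delta * abs(u_star(t)) + M:    # triggering condition
        events.append(t)
        u_held = u_star(t)                   # controller is updated at t_d

inter_event = [b - a for a, b in zip(events, events[1:])]
print(len(events), min(inter_event))
```

Because the gap grows at a rate of at most `kappa` from zero, every measured inter-event time stays above `M / kappa`, the discrete analogue of the Zeno-free bound derived above.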

## VI. SIMULATION

In this section, the ship YUKUN of Dalian Maritime University is utilized to verify the validity and flexibility of the optimal control strategy considering the event-triggered mechanism. The parameters of YUKUN are as follows: length between perpendiculars is ${105}\mathrm{\;m}$, beam is ${18}\mathrm{\;m}$, rudder area is ${11.46}{\mathrm{\;m}}^{2}$, loaded speed is ${16.7}\mathrm{\;kn}$, full amidships draft is ${5.2}\mathrm{\;m}$, full-load displacement is ${5735.5}{\mathrm{\;m}}^{3}$, and block coefficient is 0.5595. The maritime environment is set as follows: wind direction ${\psi }_{\text{wind}} = {30}^{ \circ }$, wind scale $\mathcal{S} = 6$, current direction ${\psi }_{\text{current}} = {30}^{ \circ }$, current velocity ${v}_{\text{current}} = 5\mathrm{\;kn}$.

Therefore, a continuous-time ship dynamic system can be considered:

$$
\left\{ \begin{array}{l} {\dot{x}}_{1} = {x}_{2} \\ {\dot{x}}_{2} = - \frac{1}{T}\left( {{\alpha }_{s}{x}_{2} + {\beta }_{s}{x}_{2}^{3}}\right) + \frac{K}{T}\left( {u + {\delta }_{w}}\right) \\ y = {x}_{1} \end{array}\right. \tag{49}
$$

where ${x}_{1}$ and ${x}_{2} \in \mathbb{R}$ are state variables and $u \in \mathbb{R}$ is the control input; the reference signal is ${x}_{1d} = \sin \left( {{\pi t}/{25}}\right)$; the rudder gain is $K = {0.314}$ and the time constant is $T = {62.387}$; the model coefficients are ${\alpha }_{s} = {100}$ and ${\beta }_{s} = {50}$. The design parameters are ${\alpha }_{a} = {0.001}$, ${\alpha }_{c} = 1$, ${\alpha }_{s} = {100000}$, $R = {0.067}$, $\Delta = {0.39}$, $M = {0.001}$. The initial state is set as ${x}_{0} = {\left\lbrack -{0.3},{2.1},{0.1},{0.03}\right\rbrack }^{\mathrm{T}}$, and the initial degrees of membership are set as ${\omega }_{a0} = {\left\lbrack -{3.4}, - 4, - {3.5}, - {1.8}, - 2,0, - {1.4}, - {0.8}, - {1.8}, - 2\right\rbrack }^{\mathrm{T}}$ and ${\omega }_{c0} = {\left\lbrack 1,{1.3},{1.5},{1.3},0,0,{1.5},3,{3.3},3\right\rbrack }^{\mathrm{T}}$.
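The ship model (49) can be exercised with a simple forward-Euler integration using the parameters above. The PD rudder law below is only a placeholder to drive the model (it is not the paper's event-triggered optimal controller), and the disturbance ${\delta }_{w}$ is omitted.

```python
import math

K, T_c = 0.314, 62.387          # rudder gain and time constant
alpha_s, beta_s = 100.0, 50.0   # model coefficients from (49)
dt, t_end = 0.01, 50.0

x1, x2, t = -0.3, 2.1, 0.0      # initial course and turn rate
history = []
while t < t_end:
    x1d = math.sin(math.pi * t / 25)          # reference course x_{1d}
    u = -400.0 * (x1 - x1d) - 200.0 * x2      # placeholder PD rudder law
    dx1 = x2
    dx2 = -(alpha_s * x2 + beta_s * x2 ** 3) / T_c + (K / T_c) * u
    x1, x2, t = x1 + dt * dx1, x2 + dt * dx2, t + dt
    history.append((t, x1, x1d))

print(abs(history[-1][1] - history[-1][2]))   # tracking error at t_end
```

Even this crude feedback keeps the course bounded and roughly tracking the sinusoidal reference, which illustrates the sign conventions and time scales of (49) before any optimal design is applied.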

Simulation results are illustrated in Figs. 1-4. The tracking trajectory and error are shown in Fig. 1: under the designed event-triggered adaptive optimal controller, the ship course rapidly tracks the reference course within 10 seconds and the tracking error converges to a bounded compact set around zero. Fig. 2 compares the general control input with the event-triggered control input, illustrating that the event-triggered controller is superior to a common controller under the same conditions. The event-triggered control values are smaller than those of the general controller, which verifies the competence of the event-triggered mechanism in reducing mechanical wear and saving energy. Fig. 3 shows the corresponding triggering times, highlighting the cost-saving advantage of the event-triggered controller. Finally, Fig. 4 shows the convergence of the value-function and policy-function degrees of membership, demonstrating that the degree-of-membership signals rapidly converge to a bounded range.



Fig. 1. Trajectories of the course tracking error, actual course and reference course.



Fig. 2. Trajectories of control input and event-triggered control input.

## VII. CONCLUSION

In this article, an event-triggered optimal tracking control scheme has been proposed for uncertain nonlinear systems based on RL. An improved ADP technique combining an actor-critic algorithm and fuzzy logic systems has been implemented to solve the HJB equation of the nominal system. To reduce the mechanical wear of the actuator and save energy, an event-triggered mechanism has been employed to update the controller. All signals are proven UUB via Lyapunov analysis, and simulations verify the feasibility of the proposed scheme. In the future, we will study the tracking control problem based on deep reinforcement learning; multi-agent systems are also an interesting direction.



Fig. 3. Inter-event times of ${u}_{e}$.



Fig. 4. Convergence situations of policy function degrees of membership ${\widehat{\omega }}_{a}$ and value function degrees of membership ${\widehat{\omega }}_{c}$.
## ACKNOWLEDGMENT

This work was supported in part by the Central Guidance on Local Science and Technology Development Fund of Liaoning Province (Grant No. 2023JH6/100100055); in part by the National Natural Science Foundation of China (Grant No. 52271360); in part by the Dalian Outstanding Young Scientific and Technological Talents Project (Grant No. 2023RY031); in part by the Basic Scientific Research Project of Liaoning Education Department (Grant No. JYTMS20230164); and in part by the Fundamental Research Funds for the Central Universities (Grant No. 3132024125).
## REFERENCES

[1] D. Wang, N. Gao, D. Liu, J. Li, and F. L. Lewis, "Recent progress in reinforcement learning and adaptive dynamic programming for advanced control applications," IEEE/CAA Journal of Automatica Sinica, vol. 11, no. 1, pp. 18-36, Jan. 2024.

[2] W. Bai, "Introduction to discrete-time reinforcement learning control in complex engineering systems," Complex Engineering Systems, vol. 4, no. 2, pp. 8, Apr. 2024.

[3] W. Gao, M. Mynuddin, D. Wunsch, and Z. Jiang, "Reinforcement learning-based cooperative optimal output regulation via distributed adaptive internal model," IEEE Transactions on Neural Networks and Learning Systems, vol. 33, no. 10, pp. 5229-5240, Oct. 2022.

[4] D. M. Le, M. L. Greene, W. A. Makumi, and W. E. Dixon, "Real-time modular deep neural network-based adaptive control of nonlinear systems," IEEE Control Systems Letters, vol. 6, pp. 476-481, 2022.

[5] D. Wang and C. Mu, "Adaptive-critic-based robust trajectory tracking of uncertain dynamics and its application to a spring-mass-damper system," IEEE Transactions on Industrial Electronics, vol. 65, no. 1, pp. 654-663, Jan. 2018.

[6] D. Wang, J. Qiao, and L. Cheng, "An approximate neuro-optimal solution of discounted guaranteed cost control design," IEEE Transactions on Cybernetics, vol. 52, no. 1, pp. 77-86, Jan. 2022.

[7] K. G. Vamvoudakis and F. L. Lewis, "Online actor-critic algorithm to solve the continuous-time infinite horizon optimal control problem," Automatica, vol. 46, no. 5, pp. 878-888, May 2010.

[8] X. Li, J. Ren, and D. Wang, "Multi-step policy evaluation for adaptive-critic-based tracking control towards nonlinear systems," Complex Engineering Systems, vol. 3, no. 4, pp. 20, Nov. 2023.

[9] J. Li, G. Zhang, Q. Shan, and W. Zhang, "A novel cooperative design for USV-UAV systems: 3D mapping guidance and adaptive fuzzy control," IEEE Transactions on Control of Network Systems, vol. 10, no. 2, pp. 564-574, Jun. 2023.

[10] H. Yue and J. Xia, "Reinforcement learning-based optimal adaptive fuzzy control for nonlinear multi-agent systems with prescribed performance," Complex Engineering Systems, vol. 3, no. 4, pp. 19, Nov. 2023.

[11] Q. Wei, R. Song, and P. Yan, "Data-driven zero-sum neuro-optimal control for a class of continuous-time unknown nonlinear systems with disturbance using ADP," IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 2, pp. 444-458, Feb. 2016.

[12] C. Mu, Y. Zhang, Z. Gao, and C. Sun, "ADP-based robust tracking control for a class of nonlinear systems with unmatched uncertainties," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 50, no. 11, pp. 4056-4067, Nov. 2020.

[13] J. Zhao, C. Yang, W. Gao, and J. H. Park, "ADP-based optimal control of linear singularly perturbed systems with uncertain dynamics: A two-stage value iteration method," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 70, no. 12, pp. 4399-4403, Dec. 2023.

[14] X. Yang, H. He, and D. Liu, "Event-triggered optimal neuro-controller design with reinforcement learning for unknown nonlinear systems," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 49, no. 9, pp. 1866-1878, Sept. 2019.

[15] Y. Zhang, Sun, J. Zhang, H. Liang, and H. Li, "Event-triggered adaptive tracking control for multiagent systems with unknown disturbances," IEEE Transactions on Cybernetics, vol. 50, no. 3, pp. 890-901, Mar. 2020.

[16] Q. Zhang, D. Zhao, and Y. Zhu, "Event-triggered ${H}_{\infty }$ control for continuous-time nonlinear system via concurrent learning," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 47, no. 7, pp. 1071-1081, Jul. 2017.

[17] Z. Liu, F. Wang, Y. Zhang, X. Chen, and C. L. P. Chen, "Adaptive tracking control for a class of nonlinear systems with a fuzzy dead-zone input," IEEE Transactions on Fuzzy Systems, vol. 23, no. 1, pp. 193-204, Feb. 2015.
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/8haaEllsjL/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
§ EVENT-TRIGGERED OPTIMAL TRACKING CONTROL FOR UNCERTAIN NONLINEAR SYSTEM BASED ON REINFORCEMENT LEARNING

Yuanhao Wang
Navigation College
Dalian Maritime University
Dalian, China
wangyuanhao2024@163.com

Weiwei Bai
Navigation College
Dalian Maritime University
Dalian, China
baiweiwei_dl@163.com

Abstract-In this paper, an event-triggered optimal tracking control problem is studied for uncertain nonlinear systems based on reinforcement learning (RL). Firstly, a class of nonlinear dynamic systems with general uncertainty is considered, and an augmented system comprising the tracking error and the reference signal is constructed. Secondly, an improved adaptive dynamic programming (ADP) technique, involving an actor-critic algorithm and fuzzy logic systems, is developed to solve the Hamilton-Jacobi-Bellman (HJB) equation with respect to the nominal augmented system. Thirdly, in order to reduce the mechanical wear of the actuator and energy consumption, an event-triggered mechanism is adopted for controller updating. Finally, stability analysis proves that all signals in the closed-loop system are uniformly ultimately bounded (UUB) via Lyapunov theory. Simulation results verify the feasibility of the proposed scheme.

Index Terms-ADP, event-triggered, reinforcement learning, nonlinear, fuzzy logic systems, tracking control.

§ I. INTRODUCTION

Reinforcement learning (RL), as an effective technique, is competent in facilitating adaptive optimization strategies [1], [2]. Generally, optimization is implemented by seeking a minimized or maximized cost function to solve the Hamilton-Jacobi-Bellman (HJB) equation [3]. However, acquiring the analytic solution of the HJB equation directly is challenging for nonlinear dynamic systems [4]. Therefore, many researchers have proposed numerical solutions of the HJB equation [5]. Adaptive dynamic programming (ADP), as an advanced numerical solving method, has been widely applied to achieve the optimal tracking control of nonlinear systems.

In contrast to traditional dynamic programming, ADP can be utilized to design the optimal controller forward in time, which effectively avoids the "curse of dimensionality" [6], [7]. In addition, an improved ADP framework consisting of an actor-critic algorithm and fuzzy logic systems is constructed. So far, many scholars have devoted themselves to developing ADP techniques [8]-[10]. In [11], an ADP method was implemented to solve a new neuro-optimal control problem of nonlinear dynamic systems by employing one critic and two actor networks. In [12], a neural-network-based ADP method was developed to solve the optimal tracking control problem of a class of nonlinear systems with unmatched uncertainties. In [13], linear singularly perturbed systems were studied by employing the ADP framework to achieve optimal control. These works concentrated on the application and development of ADP and RL, but they did not consider the mechanical wear of the actuator and energy consumption. As a result, it is necessary to adopt an event-triggered mechanism in control design for reducing mechanical wear and saving energy in actual engineering practice [14].

The key of the event-triggered control algorithm is the triggering threshold [14]. When a signal exceeds the triggering threshold, the control policy is updated [15], [16]. In this paper, an event-triggered optimal tracking control scheme for uncertain nonlinear systems based on RL is developed. There are two main contributions:

(1) An improved ADP and RL algorithm involving an actor-critic structure and fuzzy logic systems is developed, which yields the optimal control strategy and effectively balances the tracking control performance and control costs.

(2) An event-triggered mechanism is adopted in the controller design so that unnecessary control inputs are avoided, achieving a reduction of mechanical wear and energy saving in engineering practice.

The organization of this paper is as follows. The system dynamic description and fuzzy logic systems are stated in Section II. The optimal controller and the event-triggered controller are designed in Sections III and IV, respectively. Stability analysis, simulation and conclusion are presented in Sections V, VI and VII, respectively.

§ II. PROBLEM FORMULATION AND PRELIMINARIES

§ A. SYSTEM DYNAMIC DESCRIPTION

Consider a class of continuous-time nonlinear dynamic systems which can be described by

$$
\dot{x}\left( t\right) = f\left( {x\left( t\right) }\right) + g\left( {x\left( t\right) }\right) u\left( t\right) + \mathcal{D}\left( {x\left( t\right) }\right) \tag{1}
$$

where $x\left( t\right) \in {\mathbb{R}}^{n}$ is the state variable, $u\left( t\right) \in {\mathbb{R}}^{m}$ is the control input, $f\left( {x\left( t\right) }\right) \in {\mathbb{R}}^{n}$ and $g\left( {x\left( t\right) }\right) \in {\mathbb{R}}^{n \times m}$ are the unknown smooth function and unknown smooth function matrix, respectively, and $\mathcal{D}\left( {x\left( t\right) }\right)$ is the unknown disturbance with $\parallel \mathcal{D}\left( {x\left( t\right) }\right) \parallel \leq {\lambda }_{\mathcal{D}}$, where ${\lambda }_{\mathcal{D}}$ is a positive parameter.

To achieve tracking control, a reference signal is given by

$$
\dot{r}\left( t\right) = \delta \left( {r\left( t\right) }\right) \tag{2}
$$

where $r\left( t\right) \in {\mathbb{R}}^{n}$ is a bounded desired trajectory and $\delta \left( {r\left( t\right) }\right)$ is a Lipschitz continuous function. Let the tracking error be

$$
e\left( t\right) = x\left( t\right) - r\left( t\right) \tag{3}
$$

Combining equations (1), (2) and (3), one can obtain the following dynamics of the tracking error

$$
\dot{e}\left( t\right) = f\left( {x\left( t\right) }\right) + g\left( {x\left( t\right) }\right) u\left( t\right) + \mathcal{D}\left( {x\left( t\right) }\right) - \delta \left( {r\left( t\right) }\right) \tag{4}
$$

Noting that $x\left( t\right) = e\left( t\right) + r\left( t\right)$, equation (4) can be rewritten as

$$
\dot{e}\left( t\right) = f\left( {e\left( t\right) + r\left( t\right) }\right) - \delta \left( {r\left( t\right) }\right) + g\left( {e\left( t\right) + r\left( t\right) }\right) u\left( t\right) + \mathcal{D}\left( {e\left( t\right) + r\left( t\right) }\right) \tag{5}
$$

For the sake of facilitating the description, define $\xi \left( t\right) = {\left\lbrack {e}^{\mathrm{T}}\left( t\right) ,{r}^{\mathrm{T}}\left( t\right) \right\rbrack }^{\mathrm{T}} \in {\mathbb{R}}^{2n}$, and then dynamic systems (2) and (5) can be augmented into the concise form

$$
\dot{\xi }\left( t\right) = F\left( {\xi \left( t\right) }\right) + G\left( {\xi \left( t\right) }\right) u\left( t\right) + \Delta \mathbb{D}\left( {\xi \left( t\right) }\right) \tag{6}
$$

where $F\left( {\xi \left( t\right) }\right)$ and $G\left( {\xi \left( t\right) }\right)$ are new matrices and $\Delta \mathbb{D}\left( {\xi \left( t\right) }\right)$ can still be regarded as a new uncertain term. In particular, $F\left( {\xi \left( t\right) }\right) = \left\lbrack \begin{matrix} f\left( {e\left( t\right) + r\left( t\right) }\right) - \delta \left( {r\left( t\right) }\right) \\ \delta \left( {r\left( t\right) }\right) \end{matrix}\right\rbrack$, $G\left( {\xi \left( t\right) }\right) = \left\lbrack \begin{matrix} g\left( {e\left( t\right) + r\left( t\right) }\right) \\ {0}_{n \times m} \end{matrix}\right\rbrack$ and $\Delta \mathbb{D}\left( {\xi \left( t\right) }\right) = \left\lbrack \begin{matrix} \mathcal{D}\left( {e\left( t\right) + r\left( t\right) }\right) \\ {0}_{n \times 1} \end{matrix}\right\rbrack$.

Undoubtedly, the new uncertain term $\Delta \mathbb{D}\left( {\xi \left( t\right) }\right)$ is still upper bounded, since

$$
\parallel \Delta \mathbb{D}\left( {\xi \left( t\right) }\right) \parallel = \parallel \mathcal{D}\left( {e\left( t\right) + r\left( t\right) }\right) \parallel = \parallel \mathcal{D}\left( {x\left( t\right) }\right) \parallel \leq {\lambda }_{\mathcal{D}} \tag{7}
$$

To accomplish tracking control of the dynamic system (1) to the reference signal (2), the feedback controller $u\left( \xi \right)$ will be constructed. It can be shown that the closed-loop system is asymptotically stable under the controller $u\left( \xi \right)$ despite the uncertain and bounded term $\Delta \mathbb{D}\left( {\xi \left( t\right) }\right)$. Therefore, the optimal control policy can be obtained by considering an appropriate cost function of the subsequent nominal system, in the same way as in [5].

§ B. FUZZY LOGIC SYSTEMS

For a nonlinear continuous function $P\left( x\right)$ defined over a compact set $\mathbb{U}$ and any constant $\varepsilon > 0$, there exist fuzzy logic systems ${\omega }^{\mathrm{T}}\varphi \left( x\right)$ such that [17]

$$
\mathop{\sup }\limits_{{x \in \mathbb{U}}}\left| {P\left( x\right) - {\omega }^{\mathrm{T}}\varphi \left( x\right) }\right| \leq \varepsilon \tag{8}
$$

where $x = {\left\lbrack {x}_{1},\ldots ,{x}_{j}\right\rbrack }^{\mathrm{T}}$ is the input vector of the fuzzy logic systems, $\omega = {\left\lbrack {\omega }_{1},{\omega }_{2},\ldots ,{\omega }_{L}\right\rbrack }^{\mathrm{T}} \in {\mathbb{R}}^{L}$ is the degree of membership with $L > 1$ being the number of fuzzy rules, and $\varepsilon$ is the minimum fuzzy approximation error. $\varphi \left( x\right) = {\left\lbrack {\varphi }_{1}\left( x\right) ,{\varphi }_{2}\left( x\right) ,\ldots ,{\varphi }_{L}\left( x\right) \right\rbrack }^{\mathrm{T}}$ is the fuzzy basis function vector and ${\varphi }_{l}\left( x\right)$ is selected as follows:

$$
{\varphi }_{l}\left( x\right) = \frac{\mathop{\prod }\limits_{{i = 1}}^{j}{\mu }_{{F}_{i}^{l}}\left( {x}_{i}\right) }{\mathop{\sum }\limits_{{l = 1}}^{N}\left( {\mathop{\prod }\limits_{{i = 1}}^{j}{\mu }_{{F}_{i}^{l}}\left( {x}_{i}\right) }\right) },\;\left( {l = 1,\ldots ,N}\right) \tag{9}
$$

where ${F}_{i}^{l}\left( {i = 1,\ldots ,j;l = 1,\ldots ,N}\right)$ is the fuzzy set and ${\mu }_{{F}_{i}^{l}}\left( {x}_{i}\right)$ is the membership function.
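The normalized basis functions (9) can be computed directly once the membership functions are chosen. The sketch below uses Gaussian memberships ${\mu }_{{F}_{i}^{l}}\left( {x}_{i}\right) = \exp \left( -{\left( {x}_{i} - {c}_{l}\right) }^{2}/2{\sigma }^{2}\right)$ with illustrative centers and width; the paper does not specify these choices.

```python
import math

centers = [-2.0, -1.0, 0.0, 1.0, 2.0]   # N = 5 fuzzy sets per input (assumed)
sigma = 0.8                              # common width (assumed)

def mu(xi, c):
    # Gaussian membership function for one input component
    return math.exp(-((xi - c) ** 2) / (2 * sigma ** 2))

def phi(x):
    """Normalized fuzzy basis vector (9): product of memberships over the
    j inputs, divided by the sum of those products over the N rules."""
    prods = [math.prod(mu(xi, c) for xi in x) for c in centers]
    s = sum(prods)
    return [p / s for p in prods]

v = phi([0.3, -0.2])
print(v, sum(v))
```

By construction the components of `phi(x)` are positive and sum to one, which is the property that makes the fuzzy logic system a convex combination of rule consequents.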

§ III. OPTIMAL CONTROL DESIGN

In this section, ADP comprising an actor-critic algorithm and fuzzy logic systems will be employed to design the value function ${L}^{ * }\left( \xi \right)$ and the control policy ${u}^{ * }\left( \xi \right)$, and to design the degree-of-membership update laws.

In the actor-critic framework, the value function and the control policy are approximated by the critic and actor fuzzy systems, respectively. The optimal cost function (13) and the feedback controller (15) represent the value function and the control policy for the optimal tracking control problem, respectively.

Consider the nominal part of the augmented system (6), that is,

$$
\dot{\xi }\left( t\right) = F\left( {\xi \left( t\right) }\right) + G\left( {\xi \left( t\right) }\right) u\left( t\right) \tag{10}
$$

For the nominal system (10), the following cost function is considered

$$
L\left( \xi \right) = {\int }_{t}^{\infty }\left\lbrack {Q\left( {\xi \left( \tau \right) }\right) + u{\left( \tau \right) }^{\mathrm{T}}{Ru}\left( \tau \right) }\right\rbrack {d\tau } \tag{11}
$$

where $Q\left( \xi \right) = {\xi }^{\mathrm{T}}\mathcal{Q}\xi$ and $R = {R}^{\mathrm{T}}$; $\mathcal{Q}$ and $R$ are positive definite matrices.
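For intuition, the integral (11) can be approximated over a finite horizon along any given trajectory. The sketch below evaluates it by the trapezoidal rule for an assumed exponentially decaying state and control (illustrative stand-ins, not trajectories of (10)); $\mathcal{Q}$ is taken as the identity and $R$ as the scalar used in Section VI.

```python
import math

Q_diag = [1.0, 1.0]      # assumed Q = I (positive definite)
R = 0.067                # scalar R from Section VI

def xi(t):               # assumed decaying augmented state trajectory
    return [math.exp(-t), 0.5 * math.exp(-t)]

def u(t):                # assumed decaying control trajectory
    return -0.3 * math.exp(-t)

def integrand(t):        # Q(xi(t)) + u(t)^T R u(t)
    s = xi(t)
    return sum(q * v * v for q, v in zip(Q_diag, s)) + R * u(t) ** 2

dt, N = 1e-3, 20000      # horizon of 20 s approximates the infinite integral
ts = [k * dt for k in range(N + 1)]
cost = dt * (sum(integrand(tt) for tt in ts)
             - 0.5 * (integrand(ts[0]) + integrand(ts[-1])))
print(cost)
```

For these exponentials the integrand is $(1 + 0.25 + R \cdot 0.09)\,e^{-2t}$, so the exact infinite-horizon cost is half that constant, about 0.628; truncating at 20 s loses only an $e^{-40}$ tail.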

Subsequently, one can define the Hamiltonian of the optimal problem

$$
H\left( {\xi ,u\left( \xi \right) }\right) = Q\left( \xi \right) + u{\left( \xi \right) }^{\mathrm{T}}{Ru}\left( \xi \right) + {\nabla }^{\mathrm{T}}L\left( \xi \right) \left\lbrack {F\left( \xi \right) + G\left( \xi \right) u\left( \xi \right) }\right\rbrack \tag{12}
$$

where $\nabla L\left( \xi \right)$ represents the partial derivative of $L\left( \xi \right)$ with respect to $\xi$.

Generally, only by finding the optimal cost function can we derive the optimal controller. The infinitesimal version of the cost function is regarded as the optimal cost function; one has

$$
{L}^{ * }\left( \xi \right) = \min {\int }_{t}^{\infty }\left\lbrack {Q\left( {\xi \left( \tau \right) }\right) + u{\left( \tau \right) }^{\mathrm{T}}{Ru}\left( \tau \right) }\right\rbrack {d\tau } \tag{13}
$$

The optimal cost function is the solution of the HJB equation, which satisfies

$$
H\left( {\xi ,{u}^{ * }\left( \xi \right) ,{L}^{ * }\left( \xi \right) }\right) = Q\left( \xi \right) + {u}^{ * }{\left( \xi \right) }^{\mathrm{T}}R{u}^{ * }\left( \xi \right) + {\nabla }^{\mathrm{T}}{L}^{ * }\left( \xi \right) \left\lbrack {F\left( \xi \right) + G\left( \xi \right) {u}^{ * }\left( \xi \right) }\right\rbrack = 0 \tag{14}
$$

Consequently, the optimal feedback controller is obtained as

$$
{u}^{ * }\left( \xi \right) = - \frac{1}{2}{R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) \nabla {L}^{ * }\left( \xi \right) \tag{15}
$$

One needs to solve the HJB equation (14) and obtain the optimal controller (15) for the nominal system (10). However, the solution of the HJB equation (14) is difficult to obtain directly. Therefore, fuzzy logic systems and an adaptive actor-critic structure will be utilized to find its estimated solution.
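Evaluating (15) itself is straightforward once a value-function gradient is available; the hard part is obtaining that gradient, which motivates the approximation below. As a minimal sketch, assume a quadratic value function $L\left( \xi \right) = {\xi }^{\mathrm{T}}P\xi$ (so $\nabla L = 2P\xi$) with an illustrative $P$, a scalar $R$ taken from Section VI, and an assumed two-dimensional input channel $G$.

```python
R_inv = 1.0 / 0.067                      # scalar R from Section VI

P = [[2.0, 0.3],
     [0.3, 1.0]]                         # assumed positive definite P

def grad_L(xi):
    # gradient of xi^T P xi is 2 P xi
    return [2 * sum(P[i][j] * xi[j] for j in range(2)) for i in range(2)]

def u_star(xi, G):
    # eq. (15): u* = -1/2 R^{-1} G^T grad L (scalar control here)
    g = grad_L(xi)
    return -0.5 * R_inv * sum(G[i] * g[i] for i in range(2))

xi = [0.2, -0.1]
G = [0.0, 0.005]                         # assumed input matrix column
print(u_star(xi, G))
```

The sign structure is the useful takeaway: the controller pushes against the value function's gradient through the input channel, scaled by $R^{-1}$, so a larger control penalty $R$ yields a smaller control action.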

Fuzzy logic systems are employed to reconstruct the value function ${L}^{ * }\left( \xi \right)$:

$$
{L}^{ * }\left( \xi \right) = {\omega }^{\mathrm{T}}\varphi \left( \xi \right) + \varepsilon \left( \xi \right) \tag{16}
$$

where $\omega$ is the degree of membership of the fuzzy logic systems, $\varphi \left( \xi \right)$ is the fuzzy basis function and $\varepsilon \left( \xi \right)$ is the unknown fuzzy approximation error.

Considering (15) and (16) yields the optimal controller described by fuzzy logic systems as

$$
{u}^{ * }\left( \xi \right) = - \frac{1}{2}{R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) \left\lbrack {{\nabla }^{\mathrm{T}}\varphi \left( \xi \right) \omega + \nabla \varepsilon \left( \xi \right) }\right\rbrack \tag{17}
$$

For clarity of analysis, define the non-negative matrix

$$
A\left( \xi \right) = \nabla \varphi \left( \xi \right) G\left( \xi \right) {R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) {\nabla }^{\mathrm{T}}\varphi \left( \xi \right) \tag{18}
$$

Combining (16), (17) and (18), one can derive the HJB equation reconstructed by fuzzy logic systems as

$$
H\left( {\xi ,{u}^{ * }\left( \xi \right) ,{L}^{ * }\left( \xi \right) }\right) = Q\left( \xi \right) + {\omega }^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{4}{\omega }^{\mathrm{T}}A\left( \xi \right) \omega + {\varepsilon }_{HJB} = 0 \tag{19}
$$

and the residual error ${\varepsilon }_{HJB}$ is expressed as
$$
\begin{aligned}
{\varepsilon }_{HJB} = {} & {\nabla }^{\mathrm{T}}\varepsilon \left( \xi \right) \left( {F\left( \xi \right) + G\left( \xi \right) {u}^{ * }\left( \xi \right) }\right) + \frac{1}{4}{\nabla }^{\mathrm{T}}\varepsilon \left( \xi \right) G\left( \xi \right) {R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) \nabla \varepsilon \left( \xi \right) \\
& + \frac{1}{2}{\nabla }^{\mathrm{T}}\varepsilon \left( \xi \right) G\left( \xi \right) {R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) {\nabla }^{\mathrm{T}}\varphi \left( \xi \right) \omega
\end{aligned} \tag{20}
$$

The estimates of the value function ${L}^{ * }\left( \xi \right)$ and the control policy ${u}^{ * }\left( \xi \right)$ are constructed by the critic and actor fuzzy logic systems, respectively:
$$
{\widehat{L}}^{ * }\left( \xi \right) = {\widehat{\omega }}_{c}^{\mathrm{T}}\varphi \left( \xi \right) \tag{21}
$$

$$
{\widehat{u}}^{ * }\left( \xi \right) = - \frac{1}{2}{R}^{-1}{G}^{\mathrm{T}}\left( \xi \right) {\nabla }^{\mathrm{T}}\varphi \left( \xi \right) {\widehat{\omega }}_{a} \tag{22}
$$

where ${\widehat{\omega }}_{a}$ and ${\widehat{\omega }}_{c}$ are the estimated membership-degree vectors of the actor and the critic, respectively.
Substituting (21) and (22), the estimated Hamiltonian becomes

$$
\begin{aligned}
\widehat{H}\left( {\xi ,{\widehat{u}}^{ * }\left( \xi \right) ,{\widehat{L}}^{ * }\left( \xi \right) }\right) = {} & Q\left( \xi \right) + \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a} \\
& + {\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}
\end{aligned} \tag{23}
$$
To derive the membership-degree update laws of the fuzzy logic systems, define the objective function ${E}_{c} = \frac{1}{2}{e}_{c}^{\mathrm{T}}{e}_{c}$, where ${e}_{c} = \widehat{H}\left( {\xi ,{\widehat{u}}^{ * }\left( \xi \right) ,{\widehat{L}}^{ * }\left( \xi \right) }\right) - H\left( {\xi ,{u}^{ * }\left( \xi \right) ,{L}^{ * }\left( \xi \right) }\right)$ is the Bellman error. To overcome the difficulty of searching for the controller and the adaptive laws, the following assumption is made and an additional stabilizing term is constructed to improve the learning process.
Assumption 1 [5]: Let ${L}_{s}\left( \xi \right)$ be a continuously differentiable Lyapunov function candidate satisfying

$$
{\dot{L}}_{s}\left( \xi \right) = {\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \left( {F\left( \xi \right) + G\left( \xi \right) {u}^{ * }\left( \xi \right) }\right) < 0 \tag{24}
$$

Then there exists a positive definite matrix $\mathfrak{K} \in {\mathbb{R}}^{{2n} \times {2n}}$ such that
$$
{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \left( {F\left( \xi \right) + G\left( \xi \right) {u}^{ * }\left( \xi \right) }\right) = - {\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \mathfrak{K} \nabla {L}_{s}\left( \xi \right) \leq - {\lambda }_{\min }\left( \mathfrak{K}\right) {\begin{Vmatrix}\nabla {L}_{s}\left( \xi \right) \end{Vmatrix}}^{2} \tag{25}
$$

Based on gradient descent, and considering the two Hamiltonians $H\left( {\xi ,{u}^{ * }\left( \xi \right) ,{L}^{ * }\left( \xi \right) }\right)$ and $\widehat{H}\left( {\xi ,{\widehat{u}}^{ * }\left( \xi \right) ,{\widehat{L}}^{ * }\left( \xi \right) }\right)$, the membership-degree update laws of the fuzzy logic systems are designed as
$$
\begin{aligned}
{\dot{\widehat{\omega }}}_{a} = {} & - {\alpha }_{a}\left( {\frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{a} - \frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{c}}\right) \left( {Q\left( \xi \right) + \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right. \\
& \left. {+{\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) + \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right)
\end{aligned} \tag{26}
$$

$$
\begin{aligned}
{\dot{\widehat{\omega }}}_{c} = {} & - {\alpha }_{c}\left( {\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \left( {Q\left( \xi \right) + \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right. \\
& \left. {+{\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) + \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right)
\end{aligned} \tag{27}
$$

where ${\alpha }_{a}$ and ${\alpha }_{c}$ are the basic learning rates of the actor and critic systems, respectively, and ${\alpha }_{s}$ is the adjustable gain of the additional stabilizing term.
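The structure of the update laws (26) and (27), a gradient-descent step on the Bellman error plus the additional stabilizing term, can be sketched as follows. `A`, `grad_phi_F`, `Q`, and `g_s` are illustrative stand-in values, not the paper's simulation data.

```python
import numpy as np

def actor_critic_step(w_a, w_c, A, grad_phi_F, Q, g_s,
                      alpha_a=0.01, alpha_c=0.1, alpha_s=0.001, dt=0.01):
    # Bellman error: e_c = Q + (1/4) w_a^T A w_a + w_c^T grad(phi) F - (1/2) w_c^T A w_a
    e_c = Q + 0.25 * w_a @ A @ w_a + w_c @ grad_phi_F - 0.5 * w_c @ A @ w_a
    # Gradients of e_c with respect to w_a and w_c
    grad_a = 0.5 * A @ w_a - 0.5 * A @ w_c
    grad_c = grad_phi_F - 0.5 * A @ w_a
    # Update laws (26)-(27): gradient descent plus the stabilizing term g_s
    dw_a = -alpha_a * grad_a * e_c + 0.5 * alpha_s * g_s
    dw_c = -alpha_c * grad_c * e_c + 0.5 * alpha_s * g_s
    return w_a + dt * dw_a, w_c + dt * dw_c

rng = np.random.default_rng(0)
A = np.eye(3)                      # stand-in for A(xi), positive semidefinite
grad_phi_F = rng.normal(size=3)    # stand-in for grad(phi(xi)) F(xi)
g_s = rng.normal(size=3)           # stand-in for grad(phi) G R^{-1} G^T grad(L_s)
w_a, w_c = np.ones(3), np.zeros(3)
w_a, w_c = actor_critic_step(w_a, w_c, A, grad_phi_F, Q=1.0, g_s=g_s)
```

In continuous time the laws are differential equations; the sketch applies one explicit Euler step of size `dt` to them.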
§ IV. EVENT-TRIGGERED CONTROL IMPLEMENTATION

The event-triggering mechanism is defined as
$$
{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) = {u}^{ * }\left( {\xi \left( {t}_{d}\right) }\right) ,\quad \forall t \in \left\lbrack {{t}_{d},{t}_{d + 1}}\right) \tag{28}
$$

$$
{t}_{d + 1} = \inf \left\{ {t \in \mathbb{R} : \left| {\Gamma \left( t\right) }\right| \geq \Delta \left| {{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) }\right| + M}\right\} ,\quad {t}_{1} = 0 \tag{29}
$$

where the event-triggered error is $\Gamma \left( t\right) = {u}^{ * }\left( {\xi \left( t\right) }\right) - {u}_{e}^{ * }\left( {\xi \left( t\right) }\right)$ and ${t}_{d}$, $d \in {\mathbb{Z}}^{ + }$, denotes the controller update instants. The design parameters satisfy $0 < \Delta < 1$ and $M > 0$.
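A minimal sketch of the triggering test in (29), using the paper's parameter values Δ = 0.39 and M = 0.001; the control values themselves are illustrative.

```python
# Trigger rule (29): update the controller only when the event-triggered
# error reaches the state-dependent threshold Delta*|u_e| + M.
def should_trigger(u_star, u_held, delta=0.39, M=0.001):
    gamma = u_star - u_held              # event-triggered error Gamma(t)
    return abs(gamma) >= delta * abs(u_held) + M

u_held = 1.0                             # control held since the last event t_d
assert not should_trigger(1.05, u_held)  # small drift: keep the held control
assert should_trigger(1.60, u_held)      # threshold exceeded: update the control
```

The relative term Δ|u_e| makes the threshold scale with the control magnitude, while the constant M keeps it bounded away from zero near the origin.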
Between triggering instants, the control policy is held at ${u}^{ * }\left( {\xi \left( {t}_{d}\right) }\right)$. When an event is triggered, the control policy is updated to ${u}_{e}^{ * }\left( {\xi \left( {t}_{d + 1}\right) }\right)$. Introduce two continuous, time-varying parameters ${\rho }_{1}\left( t\right)$ and ${\rho }_{2}\left( t\right)$ with $\left| {{\rho }_{1}\left( t\right) }\right| \leq 1$ and $\left| {{\rho }_{2}\left( t\right) }\right| \leq 1$, such that ${u}^{ * }\left( {\xi \left( t\right) }\right) = \left( {1 + {\rho }_{1}\left( t\right) \Delta }\right) {u}_{e}^{ * }\left( {\xi \left( t\right) }\right) + {\rho }_{2}\left( t\right) M$. The event-triggered controller can then be rewritten as
$$
{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) = \frac{{u}^{ * }\left( {\xi \left( t\right) }\right) - {\rho }_{2}\left( t\right) M}{1 + {\rho }_{1}\left( t\right) \Delta } \tag{30}
$$
Combining (17) and (30) yields

$$
{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) = - \frac{1}{2\rho }{R}^{-1}\left\lbrack {{G}^{\mathrm{T}}\left( {\xi \left( t\right) }\right) {\nabla }^{\mathrm{T}}\varphi \left( {\xi \left( t\right) }\right) \omega + {\varepsilon }_{e}\left( {\xi \left( t\right) }\right) }\right\rbrack \tag{31}
$$

where $\rho = 1 + {\rho }_{1}\left( t\right) \Delta$ and ${\varepsilon }_{e}\left( {\xi \left( t\right) }\right) = {G}^{\mathrm{T}}\left( {\xi \left( t\right) }\right) \nabla \varepsilon \left( {\xi \left( t\right) }\right) + 2{\rho }_{2}\left( t\right) {RM}$.
Similarly, based on the actor fuzzy logic system, the estimated event-triggered controller is obtained as

$$
{\widehat{u}}_{e}^{ * }\left( {\xi \left( t\right) }\right) = - \frac{1}{2\rho }{R}^{-1}{G}^{\mathrm{T}}\left( {\xi \left( t\right) }\right) {\nabla }^{\mathrm{T}}\varphi \left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a} \tag{32}
$$
Considering the HJB equation (14), the value function estimate (21) and the event-triggered controller (32), the corresponding Hamiltonian is

$$
\begin{aligned}
{\widehat{H}}_{e}\left( {\xi \left( t\right) ,{\widehat{u}}_{e}^{ * }\left( {\xi \left( t\right) }\right) ,{\widehat{L}}^{ * }\left( {\xi \left( t\right) }\right) }\right) = {} & Q\left( {\xi \left( t\right) }\right) + \frac{1}{4{\rho }^{2}}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a} \\
& + {\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( {\xi \left( t\right) }\right) F\left( {\xi \left( t\right) }\right) - \frac{1}{2\rho }{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}
\end{aligned} \tag{33}
$$
Subsequently, the membership-degree update laws under the event-triggered mechanism are constructed as

$$
\begin{aligned}
{\dot{\widehat{\omega }}}_{ae} = {} & - {\alpha }_{a}\left( {\frac{1}{2{\rho }^{2}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a} - \frac{1}{2\rho }A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{c}}\right) \left( {Q\left( {\xi \left( t\right) }\right) + \frac{1}{4{\rho }^{2}}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}}\right. \\
& \left. {+{\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( {\xi \left( t\right) }\right) F\left( {\xi \left( t\right) }\right) - \frac{1}{2\rho }{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}}\right) \\
& + \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( {\xi \left( t\right) }\right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( {\xi \left( t\right) }\right)
\end{aligned} \tag{34}
$$

$$
\begin{aligned}
{\dot{\widehat{\omega }}}_{ce} = {} & - {\alpha }_{c}\left( {\nabla \varphi \left( {\xi \left( t\right) }\right) F\left( {\xi \left( t\right) }\right) - \frac{1}{2\rho }A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}}\right) \left( {Q\left( {\xi \left( t\right) }\right) + \frac{1}{4{\rho }^{2}}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}}\right. \\
& \left. {+{\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( {\xi \left( t\right) }\right) F\left( {\xi \left( t\right) }\right) - \frac{1}{2\rho }{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( {\xi \left( t\right) }\right) {\widehat{\omega }}_{a}}\right) \\
& + \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( {\xi \left( t\right) }\right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( {\xi \left( t\right) }\right)
\end{aligned} \tag{35}
$$
Theorem 1: Consider the dynamic system (1) with the optimal feedback controller (22), the event-triggered controller (32), and the membership-degree update laws (26), (27), (34) and (35). Then, by Lyapunov theory, all signals in the closed-loop system are uniformly ultimately bounded (UUB).

To investigate the stability of the error dynamics and the closed-loop states, the following assumption is made.

Assumption 2: On a compact set $\Omega$, $G\left( \xi \right)$, $\nabla \varphi \left( \xi \right)$, $\nabla \varepsilon \left( \xi \right)$, ${\xi }^{ * }$ and ${\varepsilon }_{HJB}$ are bounded: $\parallel G\left( \xi \right) \parallel \leq {\lambda }_{g}$, $\parallel \nabla \varphi \left( \xi \right) \parallel \leq {\lambda }_{\varphi }$, $\parallel \nabla \varepsilon \left( \xi \right) \parallel \leq {\lambda }_{\varepsilon }$, $\begin{Vmatrix}{\xi }^{ * }\end{Vmatrix} \leq {\lambda }_{\xi }$ and $\begin{Vmatrix}{\varepsilon }_{HJB}\end{Vmatrix} \leq {\lambda }_{HJB}$, where ${\lambda }_{g}$, ${\lambda }_{\varphi }$, ${\lambda }_{\varepsilon }$, ${\lambda }_{\xi }$ and ${\lambda }_{HJB}$ are positive constants.
§ V. STABILITY ANALYSIS

In this section, Lyapunov theory is employed to prove Theorem 1.

Case 1: No event is triggered. Consider the feedback controller (22) and the associated membership-degree update laws (26) and (27). The HJB equation (19) can be rearranged as
$$
Q\left( \xi \right) = - {\omega }^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) + \frac{1}{4}{\omega }^{\mathrm{T}}A\left( \xi \right) \omega - {\varepsilon }_{HJB} \tag{36}
$$

Considering the update laws (26) and (27), together with the estimation errors ${\widetilde{\omega }}_{a} = \omega - {\widehat{\omega }}_{a}$ and ${\widetilde{\omega }}_{c} = \omega - {\widehat{\omega }}_{c}$, so that ${\dot{\widetilde{\omega }}}_{a} = - {\dot{\widehat{\omega }}}_{a}$ and ${\dot{\widetilde{\omega }}}_{c} = - {\dot{\widehat{\omega }}}_{c}$, one has
$$
\begin{aligned}
{\dot{\widetilde{\omega }}}_{a} = {} & - {\alpha }_{a}\left( {-\frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{a} + \frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{c}}\right) \left( {Q\left( \xi \right) + \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right. \\
& \left. {+{\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) - \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right)
\end{aligned} \tag{37}
$$

$$
\begin{aligned}
{\dot{\widetilde{\omega }}}_{c} = {} & - {\alpha }_{c}\left( {-\nabla \varphi \left( \xi \right) F\left( \xi \right) + \frac{1}{2}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \left( {Q\left( \xi \right) + \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right. \\
& \left. {+{\widehat{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) - \frac{1}{2}{\alpha }_{s}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right)
\end{aligned} \tag{38}
$$

Then the following Lyapunov function is chosen:
$$
S\left( t\right) = \frac{1}{2{\alpha }_{a}}{\widetilde{\omega }}_{a}^{\mathrm{T}}{\widetilde{\omega }}_{a} + \frac{1}{2{\alpha }_{c}}{\widetilde{\omega }}_{c}^{\mathrm{T}}{\widetilde{\omega }}_{c} + \frac{{\alpha }_{s}}{{\alpha }_{a}}{L}_{s}\left( \xi \right) + \frac{{\alpha }_{s}}{{\alpha }_{c}}{L}_{s}\left( \xi \right) \tag{39}
$$

Its time derivative is
$$
\begin{aligned}
\dot{S}\left( t\right) = {} & \frac{1}{{\alpha }_{a}}{\widetilde{\omega }}_{a}^{\mathrm{T}}{\dot{\widetilde{\omega }}}_{a} + \frac{1}{{\alpha }_{c}}{\widetilde{\omega }}_{c}^{\mathrm{T}}{\dot{\widetilde{\omega }}}_{c} + \frac{{\alpha }_{s}}{{\alpha }_{a}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi } + \frac{{\alpha }_{s}}{{\alpha }_{c}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi } \\
= {} & \left( {{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) - \frac{1}{4}{\omega }^{\mathrm{T}}A\left( \xi \right) \omega - \frac{1}{4}{\widehat{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a} + {\varepsilon }_{HJB} + \frac{1}{2}{\widehat{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \\
& \times \left( {-{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) F\left( \xi \right) + \frac{1}{2}{\widetilde{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{c} + \frac{1}{2}{\widetilde{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a} - \frac{1}{2}{\widetilde{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widehat{\omega }}_{a}}\right) \\
& - \frac{{\alpha }_{s}}{2{\alpha }_{a}}{\widetilde{\omega }}_{a}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right) - \frac{{\alpha }_{s}}{2{\alpha }_{c}}{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right) \\
& + \frac{{\alpha }_{s}}{{\alpha }_{a}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi } + \frac{{\alpha }_{s}}{{\alpha }_{c}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi }
\end{aligned} \tag{40}
$$

Substituting (22) into (10) and noting the optimal dynamics ${\dot{\xi }}^{ * } = F\left( \xi \right) + G\left( \xi \right) {u}^{ * }\left( \xi \right)$ under the optimal controller ${u}^{ * }\left( \xi \right)$, one obtains
$$
\nabla \varphi \left( \xi \right) F\left( \xi \right) = \nabla \varphi \left( \xi \right) \dot{\xi } + \frac{1}{2}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}{\nabla }^{\mathrm{T}}\varphi \left( \xi \right) {\widehat{\omega }}_{a} \tag{41}
$$

$$
\dot{\xi } = {\dot{\xi }}^{ * } + \frac{1}{2}G{R}^{-1}{G}^{\mathrm{T}}\left( {{\nabla }^{\mathrm{T}}\varphi \left( \xi \right) {\widetilde{\omega }}_{a} + \nabla \varepsilon \left( \xi \right) }\right) \tag{42}
$$

Considering the above relations, one can further derive
$$
\begin{aligned}
\dot{S}\left( t\right) = {} & \left( {{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) {\dot{\xi }}^{ * } + \frac{1}{2}{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla \varepsilon \left( \xi \right) + \frac{1}{2}{\widetilde{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widetilde{\omega }}_{a}}\right. \\
& \left. {-\frac{1}{2}{\widetilde{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) \omega + \frac{1}{4}{\widetilde{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widetilde{\omega }}_{a} + {\varepsilon }_{HJB}}\right) \\
& \times \left( {-{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) {\dot{\xi }}^{ * } - \frac{1}{2}{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla \varepsilon \left( \xi \right) - {\widetilde{\omega }}_{c}^{\mathrm{T}}A\left( \xi \right) {\widetilde{\omega }}_{a} - \frac{1}{2}{\widetilde{\omega }}_{a}^{\mathrm{T}}A\left( \xi \right) {\widetilde{\omega }}_{a}}\right) \\
& - \frac{{\alpha }_{s}}{2{\alpha }_{a}}{\widetilde{\omega }}_{a}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right) - \frac{{\alpha }_{s}}{2{\alpha }_{c}}{\widetilde{\omega }}_{c}^{\mathrm{T}}\nabla \varphi \left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla {L}_{s}\left( \xi \right) \\
& + \frac{{\alpha }_{s}}{{\alpha }_{a}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi } + \frac{{\alpha }_{s}}{{\alpha }_{c}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \dot{\xi }
\end{aligned} \tag{43}
$$

Next, (43) can be expanded, using Assumption 2 and standard norm bounds, to yield
$$
\begin{aligned}
\dot{S}\left( t\right) \leq {} & - {\lambda }_{1}{\begin{Vmatrix}{\widetilde{\omega }}_{a}\end{Vmatrix}}^{4} - {\lambda }_{2}{\begin{Vmatrix}{\widetilde{\omega }}_{c}\end{Vmatrix}}^{2} + {\lambda }_{3} \\
& + \frac{{\alpha }_{s}}{2{\alpha }_{a}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla \varepsilon \left( \xi \right) + \frac{{\alpha }_{s}}{{\alpha }_{a}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \left( {F\left( \xi \right) + G{u}^{ * }\left( \xi \right) }\right) \\
& + \frac{{\alpha }_{s}}{2{\alpha }_{c}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) G{R}^{-1}{G}^{\mathrm{T}}\nabla \varepsilon \left( \xi \right) + \frac{{\alpha }_{s}}{{\alpha }_{c}}{\nabla }^{\mathrm{T}}{L}_{s}\left( \xi \right) \left( {F\left( \xi \right) + G{u}^{ * }\left( \xi \right) }\right)
\end{aligned} \tag{44}
$$

where ${\lambda }_{1}$, ${\lambda }_{2}$ and ${\lambda }_{3}$ are positive constants.

Considering Assumption 1 and (44), one can further derive
$$
\begin{aligned}
\dot{S}\left( t\right) \leq {} & - {\lambda }_{1}{\begin{Vmatrix}{\widetilde{\omega }}_{a}\end{Vmatrix}}^{4} - {\lambda }_{2}{\begin{Vmatrix}{\widetilde{\omega }}_{c}\end{Vmatrix}}^{2} + {\lambda }_{\partial } \\
& - {\lambda }_{\min }\left( \mathfrak{K}\right) {\alpha }_{s}\left( {\frac{1}{{\alpha }_{a}} + \frac{1}{{\alpha }_{c}}}\right) {\left( \begin{Vmatrix}\nabla {L}_{s}\left( \xi \right) \end{Vmatrix} - \frac{{\lambda }_{g}^{2}{\lambda }_{\varepsilon }^{2}{\begin{Vmatrix}{R}^{-1}\end{Vmatrix}}^{2}}{4{\lambda }_{\min }\left( \mathfrak{K}\right) }\right) }^{2}
\end{aligned} \tag{45}
$$

where ${\lambda }_{\partial } = {\lambda }_{3} + \frac{{\lambda }_{g}^{4}{\lambda }_{\varepsilon }^{4}{\begin{Vmatrix}{R}^{-1}\end{Vmatrix}}^{4}}{{16}{\lambda }_{\min }\left( \mathfrak{K}\right) }$.

As a result, if $\begin{Vmatrix}{\widetilde{\omega }}_{a}\end{Vmatrix} \geq \sqrt[4]{\frac{{\lambda }_{\partial }}{{\lambda }_{1}}}$, $\begin{Vmatrix}{\widetilde{\omega }}_{c}\end{Vmatrix} \geq \sqrt{\frac{{\lambda }_{\partial }}{{\lambda }_{2}}}$, or $\begin{Vmatrix}\nabla {L}_{s}\left( \xi \right) \end{Vmatrix} \geq \sqrt{\frac{{\lambda }_{\partial }}{{\lambda }_{\min }\left( \mathfrak{K}\right) {\alpha }_{s}\left( {\frac{1}{{\alpha }_{a}} + \frac{1}{{\alpha }_{c}}}\right) }} + \frac{{\lambda }_{g}^{2}{\lambda }_{\varepsilon }^{2}{\begin{Vmatrix}{R}^{-1}\end{Vmatrix}}^{2}}{4{\lambda }_{\min }\left( \mathfrak{K}\right) }$ holds, then $\dot{S}\left( t\right) \leq 0$. Consequently, all signals are UUB.
Case 2: Events are triggered. Consider the event-triggered controller (32) and the membership-degree update laws (34) and (35). Choose the following Lyapunov function:

$$
{S}_{e}\left( t\right) = \frac{1}{2{\alpha }_{a}}{\widetilde{\omega }}_{ae}^{\mathrm{T}}{\widetilde{\omega }}_{ae} + \frac{1}{2{\alpha }_{c}}{\widetilde{\omega }}_{ce}^{\mathrm{T}}{\widetilde{\omega }}_{ce} + \frac{{\alpha }_{s}}{{\alpha }_{a}}{L}_{s}\left( \xi \right) + \frac{{\alpha }_{s}}{{\alpha }_{c}}{L}_{s}\left( \xi \right) \tag{46}
$$

By the same argument as in Case 1, all signals can be shown to be UUB.
Motivated by [14], the derivative of the event-triggered error satisfies

$$
\frac{d}{dt}\left| {\Gamma \left( t\right) }\right| = \frac{d}{dt}{\left( {\Gamma }^{2}\left( t\right) \right) }^{\frac{1}{2}} = \operatorname{sgn}\left( {\Gamma \left( t\right) }\right) \dot{\Gamma }\left( t\right) \leq \left| {{\dot{u}}^{ * }\left( {\xi \left( t\right) }\right) }\right| \tag{47}
$$

Since all signals are UUB, there necessarily exists a positive constant $\kappa$ such that

$$
\left| {{\dot{u}}^{ * }\left( {\xi \left( t\right) }\right) }\right| \leq \kappa \tag{48}
$$

According to the event-triggered mechanism (28) and (29), $\Gamma \left( {t}_{d}\right) = 0$ and $\mathop{\lim }\limits_{{t \rightarrow {t}_{d + 1}}}\Gamma \left( t\right) = \Delta \left| {{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) }\right| + M$. Combining (47) and (48) and performing some elementary manipulations, the minimal inter-execution time ${t}^{ * } = {t}_{d + 1} - {t}_{d}$ satisfies ${t}^{ * } > \frac{\Delta \left| {{u}_{e}^{ * }\left( {\xi \left( t\right) }\right) }\right| + M}{\kappa }$, $\forall t \in \left\lbrack {{t}_{d},{t}_{d + 1}}\right)$. Consequently, Zeno behavior is excluded.
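The inter-execution-time bound can be checked numerically; the values of κ and u_e below are illustrative, while Δ and M are the paper's design parameters.

```python
# Gamma grows at rate at most kappa (48), and an event fires only when
# |Gamma| reaches Delta*|u_e| + M (29), so t* > (Delta*|u_e| + M) / kappa.
delta, M = 0.39, 0.001      # design parameters of the triggering rule
kappa = 5.0                 # illustrative bound on |du*/dt|
u_e = 1.0                   # illustrative held control value

t_min = (delta * abs(u_e) + M) / kappa   # strictly positive => no Zeno behavior
```

Since M > 0, the bound stays positive even when the held control is zero, which is what rules out an accumulation of triggering instants.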
§ VI. SIMULATION

In this section, the ship YUKUN of Dalian Maritime University is used to verify the validity and flexibility of the proposed event-triggered optimal control strategy. The parameters of YUKUN are as follows: length between perpendiculars ${105}\mathrm{\;m}$, beam ${18}\mathrm{\;m}$, rudder area ${11.46}\ {\mathrm{m}}^{2}$, loaded speed ${16.7}\ \mathrm{kn}$, full amidships draft ${5.2}\mathrm{\;m}$, full-load displacement ${5735.5}\ {\mathrm{m}}^{3}$, and block coefficient 0.5595. The maritime environment is set as: wind direction ${\psi }_{\text{wind}} = {30}^{ \circ }$, wind scale $\mathcal{S} = 6$, current direction ${\psi }_{\text{current}} = {30}^{ \circ }$, and current velocity ${v}_{\text{current}} = 5\ \mathrm{kn}$.
Accordingly, the following continuous-time ship dynamic system is considered:

$$
\left\{ \begin{array}{l} {\dot{x}}_{1} = {x}_{2} \\ {\dot{x}}_{2} = - \frac{1}{T}\left( {{\alpha }_{s}{x}_{2} + {\beta }_{s}{x}_{2}^{3}}\right) + \frac{K}{T}\left( {u + {\delta }_{w}}\right) \\ y = {x}_{1} \end{array}\right. \tag{49}
$$

where ${x}_{1}$, ${x}_{2} \in \mathbb{R}$ are the state variables and $u \in \mathbb{R}$ is the control input; the reference signal is ${x}_{1d} = \sin \left( {{\pi t}/{25}}\right)$; the rudder gain is $K = {0.314}$ and the time constant is $T = {62.387}$; the model parameters are ${\alpha }_{s} = {100}$ and ${\beta }_{s} = {50}$. The design parameters are ${\alpha }_{a} = {0.001}$, ${\alpha }_{c} = 1$, ${\alpha }_{s} = {100000}$, $R = {0.067}$, $\Delta = {0.39}$ and $M = {0.001}$. The initial state is ${x}_{0} = {\left\lbrack -{0.3},{2.1},{0.1},{0.03}\right\rbrack }^{\mathrm{T}}$, and the initial membership degrees are ${\omega }_{a0} = {\left\lbrack -{3.4}, - 4, - {3.5}, - {1.8}, - 2,0, - {1.4}, - {0.8}, - {1.8}, - 2\right\rbrack }^{\mathrm{T}}$ and ${\omega }_{c0} = {\left\lbrack 1,{1.3},{1.5},{1.3},0,0,{1.5},3,{3.3},3\right\rbrack }^{\mathrm{T}}$.
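A minimal Euler-integration sketch of the ship model (49) follows. The PD course-keeping law stands in for the paper's event-triggered optimal controller, and its gains and the zero disturbance $\delta_w$ are illustrative assumptions.

```python
import math

K, T = 0.314, 62.387            # rudder gain and time constant of (49)
alpha_s, beta_s = 100.0, 50.0   # model parameters of (49)
dt, t_end = 0.01, 100.0

x1, x2 = -0.3, 2.1              # initial course and course rate
t = 0.0
while t < t_end:
    x1d = math.sin(math.pi * t / 25.0)        # reference course x_{1d}
    u = -200.0 * (x1 - x1d) - 400.0 * x2      # placeholder PD control law
    delta_w = 0.0                             # disturbance neglected in this sketch
    dx1 = x2
    dx2 = -(alpha_s * x2 + beta_s * x2 ** 3) / T + (K / T) * (u + delta_w)
    x1, x2 = x1 + dt * dx1, x2 + dt * dx2     # explicit Euler step
    t += dt

err = abs(x1 - math.sin(math.pi * t / 25.0))  # tracking error at the final time
```

The strong cubic damping term quickly dissipates the large initial course rate, after which the state stays bounded while following the slow sinusoidal reference.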
Simulation results are illustrated in Figs. 1-4. The tracking trajectory and error are shown in Fig. 1: under the designed event-triggered adaptive optimal controller, the ship course rapidly tracks the reference course within 10 seconds, and the tracking error converges to a bounded compact set around zero. Fig. 2 compares the general control input with the event-triggered control input, showing that the event-triggered controller outperforms the common controller under the same conditions. The magnitudes of the event-triggered control input are smaller than those of the general controller, which verifies the capability of the event-triggered mechanism to reduce mechanical wear and save energy. Fig. 3 shows the corresponding triggering instants, highlighting the cost-saving advantage of the event-triggered controller. Finally, Fig. 4 shows the convergence of the value-function and policy-function membership degrees, demonstrating that the membership-degree signals rapidly converge to a bounded range.
Fig. 1. Trajectories of the course tracking error, actual course and reference course.
Fig. 2. Trajectories of control input and event-triggered control input.
§ VII. CONCLUSION
In this article, an event-triggered optimal tracking control scheme has been proposed for uncertain nonlinear systems based on RL. An improved ADP technique combining the actor-critic algorithm with fuzzy logic systems has been implemented to solve the HJB equation of the nominal system. To reduce the mechanical wear of the actuator and save energy, an event-triggered mechanism has been adopted to update the controller. All signals have been shown to be UUB via a Lyapunov-based proof, and simulations verify the feasibility of the proposed scheme. In future work, we will study the tracking control problem based on deep reinforcement learning; multi-agent systems are also an interesting direction.
Fig. 3. Inter-event times of ${u}_{e}$ .
Fig. 4. Convergence of the policy-function membership degrees ${\widehat{\omega }}_{a}$ and the value-function membership degrees ${\widehat{\omega }}_{c}$.
§ ACKNOWLEDGMENT
This work was supported in part by the Central Guidance on Local Science and Technology Development Fund of Liaoning Province (Grant No. 2023JH6/100100055); in part by the National Natural Science Foundation of China (Grant No. 52271360); in part by the Dalian Outstanding Young Scientific and Technological Talents Project (Grant No. 2023RY031); in part by the Basic Scientific Research Project of Liaoning Education Department (Grant No. JYTMS20230164); and in part by the Fundamental Research Funds for the Central Universities (Grant No. 3132024125).
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/98Wp0EAx6P/Initial_manuscript_md/Initial_manuscript.md
# Simulation Research on Time-Optimal Path Planning of UAV Utilizing the Flightmare Platform

${1}^{\text{st }}$ Yuling Xin

School of Automation Engineering

University of Electronic Science and Technology of China

Chengdu, China

xinyuling01@163.com

${2}^{\text{nd }}$ Xin Lu

Yangtze Delta Region Institute (Huzhou)

University of Electronic Science and Technology of China

Huzhou, China

luxin_uestc@163.com

${3}^{\text{rd }}$ Fusheng ${\mathrm{{Li}}}^{ * }$

School of Automation Engineering

University of Electronic Science and Technology of China

Chengdu, China

lifusheng@uestc.edu.cn

Abstract-This paper presents a study on time-optimal path planning and control for Unmanned Aerial Vehicles (UAVs) using fourth-order minimum snap trajectory generation and Nonlinear Model Predictive Control (NMPC) on the Flightmare simulation platform. Targeting the demands of fast flight in complex environments, a fourth-order polynomial trajectory planner is designed to minimize flight time while adhering to dynamical constraints. Integration with an NMPC and a PID controller enables precise tracking and dynamic adjustment of planned trajectories. Experimental results demonstrate that this method generates efficient and smooth flight trajectories, significantly reducing flight time while ensuring UAV stability and safety.

Index Terms-Flightmare Platform, Fourth-Order Minimum Snap Trajectory Generation, High-Fidelity Simulation, UAV, NMPC

## I. INTRODUCTION
As Unmanned Aerial Vehicle (UAV) technology continues to evolve at a rapid pace, its applications have broadened significantly across diverse fields. UAVs, also known as drones, have become indispensable tools for tasks requiring high-speed, agile, and autonomous responses [1]. These include but are not limited to package delivery, search-and-rescue operations, aerial photography, environmental monitoring, and even military applications [2]. Within these applications, the ability to plan time-optimal flight paths that align seamlessly with UAV dynamics is paramount for improving overall performance and safety.
Time-optimal path planning for UAVs is a complex problem that involves optimizing flight trajectories to minimize the total flight time while adhering to various constraints such as dynamical limitations, obstacle avoidance, and energy efficiency [3]. This optimization process not only ensures faster completion of missions but also enhances the stability and safety of the UAVs during operation.

Traditional approaches to UAV path planning often focus on generating collision-free paths but fail to account for the intricate dynamics of the aircraft, leading to suboptimal flight performance [4]. To overcome this limitation, recent research has explored the integration of advanced trajectory planning and control techniques [9].

The fourth-order minimum snap trajectory generation method optimizes the snap term (fourth derivative of the position) of the trajectory [15]. This approach ensures that the generated trajectories are both smooth and aggressive, which is crucial for achieving high-speed flight in complex environments. The integration of an NMPC and a PID controller further enhances the system's capabilities by dynamically adjusting control inputs based on real-time state feedback. This allows for precise tracking of the planned trajectory and resilience against uncertainties during flight.


Fig. 1. Experimental results on the Flightmare simulation platform.

The proposed framework is evaluated using the Flightmare simulation platform, a high-fidelity drone simulator based on the Unity engine. This platform offers precise physics modeling and flexible interfaces for algorithm development, making it an ideal testbed for validating the effectiveness of the proposed method. The experimental results demonstrate that the integration of fourth-order minimum snap trajectory generation with NMPC generates efficient and smooth flight trajectories, significantly reducing flight time while ensuring UAV stability and safety. The Flightmare experimental results are shown in Fig. 1.

## II. Problem Formulation

## A. Agile High-speed Flight

High-speed Unmanned Aerial Vehicles (UAVs) operating in complex environments face numerous challenges in trajectory generation and control. These challenges stem from the intricate dynamics of quadrotors, the stringent requirements on agility, and the need to adapt quickly to unexpected obstacles and environmental changes [1].

In terms of trajectory generation, high-speed flight demands trajectories that are not only collision-free but also highly dynamic and aggressive to minimize flight time. Traditional methods of trajectory planning, such as spline interpolation or simple waypoint navigation, often fail to generate trajectories that fully exploit the capabilities of the UAV, particularly at high speeds [4]. Minimizing the flight time while adhering to strict dynamical constraints and avoiding obstacles becomes an NP-hard optimization problem that requires sophisticated algorithms to solve efficiently.

Control of high-speed UAVs further complicates the problem due to the inherent nonlinearities and uncertainties in the system dynamics. Real-time adjustments are crucial to handle external disturbances, actuator saturation, and sensor noise. Moreover, the fast-changing environment necessitates a control scheme that can rapidly replan and adjust the trajectory on the fly to ensure safety and mission success.
In summary, agile high-speed UAVs require:
1) Trajectory generation algorithms that can produce smooth yet aggressive trajectories to minimize flight time under strict dynamical and environmental constraints.
2) A robust control framework that can dynamically adjust control inputs based on real-time feedback to handle uncertainties and disturbances, ensuring precise tracking of the planned trajectory.

## B. Optimal Control Problem

Traditionally, optimal control problems in the context of UAVs aim to minimize a cost function subject to a set of constraints on the system dynamics and inputs. This formulation allows balancing multiple objectives, such as minimizing flight time, energy consumption, or control effort, while ensuring that the UAV operates within its physical and operational limits.
Mathematically, an optimal control problem can be formulated as follows:

$$
\mathop{\min }\limits_{\mathbf{u}}\;{\int }_{{t}_{0}}^{{t}_{f}}{\mathcal{L}}_{a}\left( {\mathbf{x},\mathbf{u}}\right) {dt} \tag{1}
$$

$$
\text{subject to}\;\mathbf{r}\left( {\mathbf{x},\mathbf{u},\mathbf{z}}\right) = 0
$$

$$
\mathbf{h}\left( {\mathbf{x},\mathbf{u},\mathbf{z}}\right) \leq 0
$$

## III. DRONE MODELING

## A. Nomenclature

In this work, we establish a comprehensive mathematical framework for the quadrotor system. We define a world frame $W$ with an orthonormal basis $\left\{ {{x}_{W},{y}_{W},{z}_{W}}\right\}$ to represent the global environment. Additionally, a body frame $B$ with an orthonormal basis $\left\{ {{x}_{B},{y}_{B},{z}_{B}}\right\}$ is introduced to describe the robot's orientation and position. The body frame is attached to the quadrotor, with its origin aligned with the center of mass as illustrated in Fig. 2.
Throughout the document, vectors are denoted in boldface with a prefix indicating the frame of reference and a suffix specifying the vector's origin and terminus. For example, ${\mathbf{w}}_{WB}$ represents the position vector of the body frame $B$ relative to the world frame $W$ , expressed in the coordinates of the world frame.
To represent the orientation of rigid bodies, including the robot, we employ quaternions. The time derivative of a quaternion ${\mathbf{q}}_{WB} = \left( {{q}_{w},{q}_{x},{q}_{y},{q}_{z}}\right)$ is governed by the skew-symmetric matrix $\Lambda \left( \omega \right)$ , where ${\mathbf{\omega }}_{B} = {\left( {\omega }_{x},{\omega }_{y},{\omega }_{z}\right) }^{T}$ represents the angular velocity.


Fig. 2. Schematic diagrams of the quadrotor model being considered, along with the coordinate systems utilized.

## B. Quadrotor Dynamics

The drone is modeled as a rigid body with six degrees of freedom (DoF). The state vector $\mathbf{x} \in {\mathbb{R}}^{13}$ describing the evolution of the drone's configuration over time is given by:

$$
\mathbf{x} = \left\lbrack \begin{matrix} {\mathbf{p}}_{WB} \\ {\mathbf{v}}_{WB} \\ {\mathbf{q}}_{WB} \\ {\mathbf{\omega }}_{B} \end{matrix}\right\rbrack \text{ and }\mathbf{u} = \left\lbrack \begin{matrix} T \\ \mathbf{\tau } \end{matrix}\right\rbrack \tag{2}
$$

where ${\mathbf{p}}_{WB} \in {\mathbb{R}}^{3}$ is the position of the drone's center of mass in the world frame $W$ , ${\mathbf{v}}_{WB} \in {\mathbb{R}}^{3}$ is the linear velocity of the drone in the world frame, ${\mathbf{q}}_{WB}$ is the unit quaternion representing the rotation from the body frame $B$ to the world frame $W$ , and ${\mathbf{\omega }}_{B} \in {\mathbb{R}}^{3}$ is the angular velocity of the drone in the body frame. $T$ is the total thrust produced by the drone's rotors, and $\tau$ is the total torque acting on the drone.

$$
\mathbf{J} = \left\lbrack \begin{matrix} {J}_{x} & 0 & 0 \\ 0 & {J}_{y} & 0 \\ 0 & 0 & {J}_{z} \end{matrix}\right\rbrack \tag{3}
$$

where ${J}_{x},{J}_{y}$ , and ${J}_{z}$ are the moments of inertia of the drone about its principal axes.

$$
T = \mathop{\sum }\limits_{{i = 1}}^{4}{f}_{i} \tag{4}
$$

where ${f}_{i}$ is the thrust produced by the i-th rotor.
The time derivative of the state vector $\dot{\mathbf{x}}$ is governed by the following equations:

$$
\dot{\mathbf{x}} = f\left( {\mathbf{x},\mathbf{u}}\right) = \left\lbrack \begin{matrix} {\mathbf{v}}_{WB} \\ \frac{1}{m}\left( {m{\mathbf{g}}_{W} + {\mathbf{q}}_{WB} \odot {\mathbf{T}}_{B}}\right) \\ \frac{1}{2}\mathbf{\Lambda }\left( {\mathbf{\omega }}_{B}\right) \cdot {\mathbf{q}}_{WB} \\ {\mathbf{J}}^{-1}\left( {\mathbf{\tau } - {\mathbf{\omega }}_{B} \times \mathbf{J}{\mathbf{\omega }}_{B}}\right) \end{matrix}\right\rbrack \tag{5}
$$

where: $\odot$ denotes the quaternion multiplication, ${\mathbf{T}}_{B}$ and $\tau$ are the total force and torque acting on the drone, respectively, $m$ is the mass of the drone, $\mathbf{J} \in {\mathbb{R}}^{3 \times 3}$ is the inertia matrix, ${\mathbf{g}}_{W} = {\left\lbrack 0,0, - {9.81}\right\rbrack }^{T}\mathrm{\;m}/{\mathrm{s}}^{2}$ is the gravitational acceleration in the world frame.

Here $\mathbf{\Lambda }\left( \omega \right)$ denotes the skew-symmetric matrix of the angular velocity, given by:

$$
\mathbf{\Lambda }\left( \omega \right) = \left\lbrack \begin{matrix} 0 & - {\omega }_{x} & - {\omega }_{y} & - {\omega }_{z} \\ {\omega }_{x} & 0 & {\omega }_{z} & - {\omega }_{y} \\ {\omega }_{y} & - {\omega }_{z} & 0 & {\omega }_{x} \\ {\omega }_{z} & {\omega }_{y} & - {\omega }_{x} & 0 \end{matrix}\right\rbrack \tag{6}
$$

The torque $\tau$ and total thrust $T$ are related to the individual i-th rotor thrust ${f}_{i}$ as:

$$
{\mathbf{T}}_{B} = \left\lbrack \begin{array}{l} 0 \\ 0 \\ T \end{array}\right\rbrack \text{ and }\tau = \left\lbrack \begin{matrix} \frac{l}{\sqrt{2}}\left( {{f}_{1} - {f}_{2} - {f}_{3} + {f}_{4}}\right) \\ \frac{l}{\sqrt{2}}\left( {-{f}_{1} - {f}_{2} + {f}_{3} + {f}_{4}}\right) \\ {c}_{\tau }\left( {{f}_{1} - {f}_{2} + {f}_{3} - {f}_{4}}\right) \end{matrix}\right\rbrack \tag{7}
$$
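
As a concrete illustration, the rigid-body model of Eqs. (2)-(7) can be simulated directly. The sketch below is our own minimal NumPy implementation, not the authors' code; the mass and inertia values follow Table III, while `c_tau` in `allocate` is an illustrative thrust-to-yaw-torque ratio.

```python
import numpy as np

def rotate(q, v):
    """Rotate vector v from body to world frame by the unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    return R @ v

def Lambda(omega):
    """Skew-symmetric quaternion-rate matrix of Eq. (6)."""
    wx, wy, wz = omega
    return np.array([
        [0.0, -wx,  -wy,  -wz],
        [wx,   0.0,  wz,  -wy],
        [wy,  -wz,   0.0,  wx],
        [wz,   wy,  -wx,   0.0],
    ])

def allocate(f, l=0.125, c_tau=0.01):
    """Collective thrust and body torque of Eq. (7) from rotor thrusts f = (f1, f2, f3, f4)."""
    f1, f2, f3, f4 = f
    T = f1 + f2 + f3 + f4                        # Eq. (4)
    tau = np.array([
        l / np.sqrt(2) * ( f1 - f2 - f3 + f4),
        l / np.sqrt(2) * (-f1 - f2 + f3 + f4),
        c_tau *          ( f1 - f2 + f3 - f4),
    ])
    return T, tau

def dynamics(x, u, m=0.6, J=np.diag([2.1e-3, 2.3e-3, 4.0e-3])):
    """Continuous-time dynamics xdot = f(x, u) of Eq. (5); x = (p, v, q, omega), u = (T, tau)."""
    v, q, omega = x[3:6], x[6:10], x[10:13]
    T, tau = u[0], u[1:4]
    g = np.array([0.0, 0.0, -9.81])
    p_dot = v
    v_dot = g + rotate(q, np.array([0.0, 0.0, T])) / m   # thrust acts along body z
    q_dot = 0.5 * Lambda(omega) @ q
    w_dot = np.linalg.solve(J, tau - np.cross(omega, J @ omega))
    return np.concatenate([p_dot, v_dot, q_dot, w_dot])
```

At hover, equal rotor thrusts summing to $mg$ yield zero torque and zero acceleration, which is a quick sanity check on the signs.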

## IV. Path Generation

In this section, we discuss the methods used for generating time-optimal paths for autonomous drone racing. Specifically, we focus on polynomial trajectory planning, particularly the use of fourth-order polynomials to minimize the snap of the trajectory, as this objective leads to aggressive and smooth trajectories suitable for drone racing.

## A. Polynomial Trajectory Planning

Polynomial trajectory planning leverages the differential flatness property of quadrotors to simplify full-state trajectory planning to a problem of planning only a few flat outputs (typically position and yaw) [14]. By representing the trajectory as a polynomial, we can efficiently compute the control inputs that achieve the desired trajectory [15].
1) Minimizing Snap: To generate aggressive and smooth trajectories, the objective is to minimize the snap (fourth-order derivative of position) of the trajectory [15] [16]. The snap $s\left( t\right)$ of a polynomial trajectory $p\left( t\right) = {a}_{0} + {a}_{1}t + {a}_{2}{t}^{2} + {a}_{3}{t}^{3} + {a}_{4}{t}^{4}$ can be written as:

$$
s\left( t\right) = {p}^{\left( 4\right) }\left( t\right) = {24}{a}_{4} \tag{8}
$$

where ${p}^{\left( 4\right) }\left( t\right)$ denotes the fourth-order derivative of $p\left( t\right)$ with respect to time $t$ .
The optimization problem can then be formulated as finding the polynomial coefficients ${a}_{0},{a}_{1},{a}_{2},{a}_{3},{a}_{4}$ that minimize the integral of the square of the snap over the trajectory duration $T$ :
$$
\mathop{\min }\limits_{{{a}_{0},{a}_{1},{a}_{2},{a}_{3},{a}_{4}}}{\int }_{0}^{T}s{\left( t\right) }^{2}{dt} = {\int }_{0}^{T}{\left( {24}{a}_{4}\right) }^{2}{dt} \tag{9}
$$

However, in practice, we often minimize the maximum snap or add additional constraints and costs related to trajectory duration, smoothness, and feasibility. The full optimization problem includes constraints on the initial and final states of the drone (position, velocity, acceleration, and jerk) as well as any intermediate waypoints or obstacle avoidance constraints.
2) Time Allocation: Finding the optimal time allocation along the trajectory (i.e., determining how fast the drone should travel through each segment) is crucial for achieving minimum lap times. This is typically done by optimizing the polynomial coefficients jointly with the trajectory duration $T$ :
$$
\mathop{\min }\limits_{{{a}_{0},{a}_{1},{a}_{2},{a}_{3},{a}_{4}, T}}\left( {{\int }_{0}^{T}s{\left( t\right) }^{2}{dt} + \lambda \cdot T}\right) \tag{10}
$$

where $\lambda$ is a weight factor balancing the snap minimization and the total trajectory time.
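
To make the trade-off in Eq. (10) concrete, consider a single rest-to-rest segment of length $d$ under boundary conditions of our own choosing (not from the paper): $p(0) = \dot{p}(0) = \ddot{p}(0) = 0$, $p(T) = d$, $\dot{p}(T) = 0$. These five conditions fix the quartic uniquely; since the snap of a quartic is the constant ${24}{a}_{4}$, the snap cost reduces to ${\left( {24}{a}_{4}\right) }^{2}T = {5184}{d}^{2}/{T}^{7}$, and the optimal duration balances it against $\lambda T$:

```python
import numpy as np

def quartic_coeffs(d, T):
    """Unique quartic p(t) = a3*t^3 + a4*t^4 with p(0)=v(0)=acc(0)=0,
    p(T)=d, v(T)=0 (a0 = a1 = a2 = 0 follow from the initial conditions)."""
    a4 = -3.0 * d / T**4
    a3 = 4.0 * d / T**3
    return a3, a4

def total_cost(d, T, lam):
    """Cost of Eq. (10): integral of snap^2 plus lambda*T.
    The snap of a quartic is the constant 24*a4, so the integral is (24*a4)^2 * T."""
    _, a4 = quartic_coeffs(d, T)
    return (24.0 * a4) ** 2 * T + lam * T

# illustrative segment length and time weight
d, lam = 5.0, 1.0
Ts = np.linspace(0.5, 20.0, 20001)
T_best = Ts[np.argmin([total_cost(d, T, lam) for T in Ts])]
T_star = (7 * 5184 * d**2 / lam) ** (1 / 8)  # closed-form stationary point of 5184 d^2/T^7 + lam*T
```

Increasing $\lambda$ shortens the optimal duration at the price of a more aggressive (higher-snap) segment.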

## B. Implementation

Implementing a fourth-order polynomial trajectory planner involves solving the optimization problem described above. This can be done using numerical optimization techniques such as quadratic programming or nonlinear optimization solvers. The resulting trajectory is then used as a reference for the low-level controller to track.
In this paper, we adopt the polynomial trajectory planning approach to generate optimal paths. This method generates time-optimal trajectories by minimizing the snap of the trajectory.
In summary, polynomial trajectory planning with a focus on minimizing the snap of the trajectory is a powerful method for generating time-optimal and feasible paths for autonomous drone racing. This approach leverages the differential flatness property of quadrotors and enables the use of efficient optimization techniques to find optimal trajectories in real time.

## V. Model Predictive Control

Model Predictive Control (MPC) is a powerful technique for controlling complex systems with dynamical constraints [17]. For agile quadrotor flight, Nonlinear Model Predictive Control (NMPC) is particularly suited due to its ability to handle nonlinear dynamics and constraints effectively [9]. In this section, we detail the formulation and implementation of NMPC for quadrotor control.

## A. NMPC Formulation

The NMPC generates control inputs by solving a finite-time optimal control problem (OCP) over a receding horizon. The objective is to minimize the tracking error between the predicted states and reference states, while adhering to the system dynamics and constraints [5]. The optimization problem can be formulated as follows:

$$
{\mathcal{L}}_{a} = {\overline{\mathbf{x}}}_{N}^{T}{Q}_{N}{\overline{\mathbf{x}}}_{N} + \mathop{\sum }\limits_{{i = 1}}^{{N - 1}}\left( {{\overline{\mathbf{x}}}_{i}^{T}{Q}_{i}{\overline{\mathbf{x}}}_{i} + {\overline{\mathbf{u}}}_{i}^{T}{R}_{i}{\overline{\mathbf{u}}}_{i}}\right)
$$

$$
\text{s.t.}\;{\mathbf{x}}_{0} = {\mathbf{x}}_{\text{init }}, \tag{11}
$$

$$
{\mathbf{x}}_{k + 1} = f\left( {{\mathbf{x}}_{k},{\mathbf{u}}_{k}}\right) ,
$$

$$
{\mathbf{x}}_{k} \in \left\lbrack {{\mathbf{x}}_{\min },{\mathbf{x}}_{\max }}\right\rbrack ,
$$

$$
{\mathbf{u}}_{k} \in \left\lbrack {{\mathbf{u}}_{\min },{\mathbf{u}}_{\max }}\right\rbrack
$$

where ${\overline{\mathbf{x}}}_{N}^{T}{Q}_{N}{\overline{\mathbf{x}}}_{N}$ is the terminal cost, ${\overline{\mathbf{x}}}_{i}^{T}{Q}_{i}{\overline{\mathbf{x}}}_{i}$ and ${\overline{\mathbf{u}}}_{i}^{T}{R}_{i}{\overline{\mathbf{u}}}_{i}$ are the stage costs, $f\left( {{\mathbf{x}}_{k},{\mathbf{u}}_{k}}\right)$ represents the discrete-time quadrotor dynamics, and ${Q}_{i},{R}_{i}$ , and ${Q}_{N}$ are positive definite weight matrices. The constraints ensure that the control inputs and angular velocities remain within specified bounds. The tracking errors are defined as $\overline{\mathbf{x}} = \mathbf{x} - {\mathbf{x}}_{\text{ref }}$ and $\overline{\mathbf{u}} = \mathbf{u} - {\mathbf{u}}_{\text{ref }}$ , respectively.
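
The receding-horizon structure of Eq. (11) can be illustrated on a linear, unconstrained analogue; the paper's NMPC instead handles the nonlinear dynamics and box constraints via ACADO/qpOASES, so the batch least-squares sketch below (with illustrative system and weights) only shows the mechanics of stacking predictions and applying the first input:

```python
import numpy as np

def lq_mpc(A, B, Q, R, QN, x0, x_ref, N):
    """Unconstrained linear analogue of Eq. (11), solved in batch form.

    Stacks x_1..x_N = Sx x0 + Su u, minimizes the quadratic tracking cost
    in closed form, and returns only the first input (receding horizon).
    """
    n, m = B.shape
    Sx = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    Su = np.zeros((N * n, N * m))
    for k in range(N):
        for j in range(k + 1):
            Su[k*n:(k+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k - j) @ B
    Qbar = np.kron(np.eye(N), Q)
    Qbar[-n:, -n:] = QN                     # terminal weight Q_N
    Rbar = np.kron(np.eye(N), R)
    ref = np.tile(x_ref, N)
    H = Su.T @ Qbar @ Su + Rbar
    g = Su.T @ Qbar @ (Sx @ x0 - ref)
    u = np.linalg.solve(H, -g)
    return u[:m]
```

Closing the loop on a double integrator (an illustrative stand-in for one translational axis) drives the state to the reference, mirroring the receding-horizon tracking behavior of the full NMPC.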

## B. Discretization of Dynamics

The continuous-time quadrotor dynamics need to be discretized for use in the NMPC framework. This can be achieved using numerical integration schemes such as Euler integration or Runge-Kutta methods. In our implementation, we use multiple shooting as the transcription method and Runge-Kutta integration [18] to discretize the dynamics.
$$
{x}_{k + 1} = {f}_{\mathrm{{RK}}4}\left( {{x}_{k},{u}_{k},{\Delta t}}\right) \tag{12}
$$

where ${f}_{\mathrm{{RK}}4}$ is the Runge-Kutta 4th order integration function and ${\Delta t}$ is the discretization time step.
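
A minimal sketch of the step in Eq. (12), holding the input constant over the step (a zero-order-hold assumption on $u$):

```python
def rk4_step(f, x, u, dt):
    """One step of Eq. (12): x_{k+1} = f_RK4(x_k, u_k, dt),
    with the input u held constant over the step."""
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```

With the dynamics of Section III and the 50 ms step of Table I, this integrator propagates the state between shooting nodes.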

## C. Constraint Handling

Efficient constraint handling within the optimization framework is crucial for real-time performance. The NMPC formulation includes constraints on the angular velocities ${\mathbf{\omega }}_{B}$ , thrust $T$ , velocities ${\mathbf{v}}_{WB}$ , and control inputs $\mathbf{u}$ , ensuring that the control actions remain within the physical limits of the quadrotor.



Fig. 3. Block diagram of the Nonlinear Model Predictive Controller with PID inner loop controller.

## D. Optimization Solver

The resulting nonlinear optimization problem is solved using a suitable solver, such as Sequential Quadratic Programming (SQP). In our implementation, we utilize the ACADO Toolkit [6] with qpOASES [7] as the underlying quadratic program solver.

## E. Integration with PID Controller

While NMPC provides a powerful framework for trajectory optimization and control, a PID controller can complement it for enhanced stability and responsiveness. The PID controller regulates low-level system dynamics, such as the quadrotor's attitude, while the NMPC controller focuses on high-level trajectory tracking. The integration of the two controllers is illustrated in Fig. 3, where the NMPC controller generates the desired setpoints for the PID controller based on the time-optimal trajectory. The gains and parameters of both controllers are summarized in Table I.
By integrating the PID and NMPC controllers, we can achieve a robust and responsive control system that can dynamically adjust to changes in the environment and mission requirements.

TABLE I

CONTROLLER GAINS AND PARAMETERS COMPARISON

<table><tr><td colspan="2">NMPC</td><td colspan="2">PID</td></tr><tr><td>Parameter</td><td>Value</td><td>Parameter</td><td>Value</td></tr><tr><td>$Q$</td><td>diag(200, 200, 500)</td><td>${K}_{p}$</td><td>50</td></tr><tr><td>$R$</td><td>diag(10, 50)</td><td>${K}_{i}$</td><td>1</td></tr><tr><td>${dt}$</td><td>50 ms</td><td>${K}_{d}$</td><td>0.01</td></tr><tr><td>$\mathrm{N}$</td><td>20</td><td/><td/></tr></table>

## VI. FLIGHTMARE

In this section, we introduce the Flightmare [8] simulation platform and discuss its advantages for validating the proposed time-optimal path planning and control framework. Flightmare is a high-fidelity quadrotor simulator designed for research and development, offering a range of features that make it an ideal testbed for evaluating UAV algorithms. We highlight the platform's unique capabilities and discuss the experimental setup used to validate the proposed method.

## A. Comparison of Quadrotor Simulators

In contrast to Hector [10], FlightGoggles [11], and AirSim [12] in Table II, Flightmare offers a unique combination of features that make it well-suited for UAV research. Flightmare's rendering engine is based on Unity, providing a flexible and high-speed rendering environment that can be tailored to the user's needs. The platform's physics simulation engine is highly configurable, supporting a range of dynamics from simple to real-world quadrotor behaviors. Flightmare is the only simulator among the compared ones that provides a point cloud extraction feature and an RL API, making it particularly suited for tasks requiring 3D environmental information and reinforcement learning-based control policies. Additionally, Flightmare can simulate multiple vehicles concurrently, facilitating research on multi-drone applications. All in all, Flightmare is chosen as the simulation platform for validating the proposed method due to these unique features and capabilities.

TABLE II

A COMPARISON OF FLIGHTMARE TO OTHER OPEN-SOURCE QUADROTOR SIMULATORS

<table><tr><td>Simulator</td><td>Rendering</td><td>Dynamics</td><td>Sensor Suite</td><td>Point Cloud</td><td>RL API</td><td>Vehicles</td></tr><tr><td>Hector [10]</td><td>OpenGL</td><td>Gazebo-based</td><td>IMU, RGB</td><td>✘</td><td>✘</td><td>Single</td></tr><tr><td>FlightGoggles [11]</td><td>Unity</td><td>Flexible</td><td>IMU, RGB</td><td>✘</td><td>✘</td><td>Single</td></tr><tr><td>AirSim [12]</td><td>Unreal Engine</td><td>PhysX</td><td>IMU, RGB, Depth, Seg</td><td>✘</td><td>✘</td><td>Multiple</td></tr><tr><td>Flightmare [8]</td><td>Unity</td><td>Flexible</td><td>IMU, RGB, Depth, Seg</td><td>✓</td><td>✓</td><td>Multiple</td></tr></table>

## B. Advantages of the Flightmare Platform

1) Decoupled Rendering and Physics Engine: One of the key strengths of Flightmare lies in its decoupled architecture, where the rendering engine based on Unity [19] is separated from the physics simulation engine. This design choice enables Flightmare to achieve remarkable performance: rendering speeds of up to ${230}\mathrm{{Hz}}$ and physics simulation frequencies of up to ${200},{000}\mathrm{\;{Hz}}$ on a standard laptop [8]. This separation also allows users to flexibly adjust the balance between visual fidelity and simulation speed, tailored to the specific research needs.
2) Flexible Sensor Suite: Flightmare comes equipped with a rich and configurable sensor suite, including IMU, RGB cameras with ground-truth depth and semantic segmentation, range finders, and collision detection capabilities. This enables researchers to simulate a wide range of sensing modalities, critical for developing and testing perception-driven algorithms. Furthermore, Flightmare provides APIs to extract the full 3D point cloud of the simulated environment, facilitating path planning and obstacle avoidance tasks.
3) Scalability and Parallel Simulation: The platform's flexibility extends to supporting large-scale simulations, enabling the parallel simulation of hundreds of quadrotors. This feature is invaluable for reinforcement learning applications, where data efficiency is crucial. By simulating multiple agents in parallel, Flightmare allows for rapid data collection, significantly accelerating the training process for control policies.
4) Open-Source and Modular Design: Flightmare's open-source nature and modular design encourage collaboration and extendibility. The platform provides a clear and well-documented API, facilitating integration with existing research tools and libraries. The modular structure also makes it easy to swap out components, such as the physics engine or rendering backend, based on the specific research requirements. In this work, we use the RotorS [13] as the underlying quadrotor dynamics model in Flightmare, demonstrating the platform's flexibility and modularity.


Fig. 4. Block diagram of the integration of control algorithms with Flightmare.

## VII. EXPERIMENTS
In this section, we present the experimental setup and results of the proposed time-optimal path planning and control framework for autonomous drone racing. The integration of polynomial trajectory planning and NMPC is validated in a simulated environment using the Flightmare platform. The results demonstrate the effectiveness of the proposed method in generating efficient and smooth flight trajectories, enabling UAVs to navigate precisely and stably along planned paths.

## A. Experimental Setup

To evaluate the proposed time-optimal path planning and control framework on the Flightmare simulation platform, we first design the control flow shown in Fig. 4. Flightmare decouples the rendering and physics engines, and the interface between the rendering engine and the quadrotor dynamics is implemented using the high-performance asynchronous messaging library ZeroMQ [20].
The quadrotor configurations used in the simulation are shown in Table III.

## B. Trajectory Tracking Performance on a Given Path

To evaluate the trajectory tracking performance of the proposed framework, we first consider a simple scenario where the drone is required to track a given path. The path is defined as a spiral ascent trajectory given by:
$$
\mathbf{p}\left( t\right) = \left\lbrack \begin{matrix} r\left( t\right) \cos \left( {\omega t}\right) \\ r\left( t\right) \sin \left( {\omega t}\right) \\ {v}_{z}t \end{matrix}\right\rbrack \tag{13}
$$

where $r\left( t\right) = {r}_{0} + {v}_{r}t$ is the radius of the spiral, $\omega$ is the angular velocity, and ${v}_{z}$ is the vertical velocity. The drone is required to track this path while ascending at a constant vertical rate.
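
The reference of Eq. (13) can be sampled at the controller rate; the parameter values $r_0$, $v_r$, $\omega$, and $v_z$ below are illustrative, as the paper does not report them:

```python
import numpy as np

def spiral_ref(t, r0=1.0, v_r=0.1, omega=1.0, v_z=0.5):
    """Reference position of Eq. (13); parameter values are illustrative."""
    r = r0 + v_r * t                     # expanding spiral radius
    return np.array([r * np.cos(omega * t),
                     r * np.sin(omega * t),
                     v_z * t])

# sample the reference at the 50 ms NMPC rate over a 10 s flight
traj = np.array([spiral_ref(t) for t in np.arange(0.0, 10.0, 0.05)])
```

Each sampled point supplies $\mathbf{x}_{\text{ref}}$ for one stage of the NMPC horizon.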

TABLE III

QUADROTOR CONFIGURATIONS

<table><tr><td>Parameter(s)</td><td>Value(s)</td></tr><tr><td>$m$ [kg]</td><td>0.6</td></tr><tr><td>$l$ [m]</td><td>0.125</td></tr><tr><td>${J}_{x}$ [kg·m²]</td><td>2.1e-3</td></tr><tr><td>${J}_{y}$ [kg·m²]</td><td>2.3e-3</td></tr><tr><td>${J}_{z}$ [kg·m²]</td><td>4.0e-3</td></tr><tr><td>$\left( {{T}_{\min },{T}_{\max }}\right)$ [N]</td><td>(0, 8.5)</td></tr><tr><td>${c}_{\tau }$ [N·m/(rad/s)²]</td><td>2.1e-6</td></tr><tr><td>${c}_{T}$ [N/(rad/s)²]</td><td>1.2e-6</td></tr></table>
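As a sanity check on Table III, the thrust coefficient ${c}_{T}$ fixes the rotor speed needed to hover through $f = {c}_{T}{\omega }^{2}$. The snippet below collects the table values and derives the hover rotor speed; the per-rotor hover thrust $mg/4$ is our own back-of-the-envelope step, not a figure from the paper:

```python
import math

# Quadrotor parameters from Table III.
params = {
    "m": 0.6,          # mass [kg]
    "l": 0.125,        # arm length [m]
    "J": (2.1e-3, 2.3e-3, 4.0e-3),  # inertia about x, y, z [kg*m^2]
    "T_min": 0.0,      # thrust lower bound [N]
    "T_max": 8.5,      # thrust upper bound [N]
    "c_tau": 2.1e-6,   # rotor drag-torque coefficient [N*m/(rad/s)^2]
    "c_T": 1.2e-6,     # rotor thrust coefficient [N/(rad/s)^2]
}

# In hover, each of the four rotors carries a quarter of the weight;
# f = c_T * w**2 then gives the hover rotor speed.
f_hover = params["m"] * 9.81 / 4.0             # [N] per rotor
w_hover = math.sqrt(f_hover / params["c_T"])   # [rad/s]
```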
|
| 310 |
+
|
| 311 |
+
The trajectory tracking performance of the proposed NMPC controller is shown in Fig. 5. In the figure, the pink dashed line represents the desired path, while the orange line represents the actual trajectory of the drone. The drone successfully tracks the spiral ascent trajectory, demonstrating the effectiveness of the proposed framework in generating smooth and accurate flight trajectories.
|
| 312 |
+
|
| 313 |
+
The error between the desired path and the actual trajectory is shown in Fig. 6. The error remains within an acceptable range, indicating that the drone is able to track the desired path accurately.
|
| 314 |
+
|
| 315 |
+

|
| 316 |
+
|
| 317 |
+
Fig. 5. Drone tracking the trajectory of a given spiral ascent path. The pink dashed line represents the desired path, while the orange line represents the actual trajectory of the drone.
|
| 318 |
+
|
| 319 |
+
## C. Time-Optimal Path Planning for NMPC Controller
|
| 320 |
+
|
| 321 |
+
In this experiment, the drone has to navigate through four gates in a time-optimal manner. The gates are placed at $\left( {-{10},0,2}\right)$, $\left( {0,{10},4}\right)$, $\left( {{10},0,2}\right)$, and $\left( {0, - {10},2}\right)$, respectively.
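A quick geometric check on this layout: the straight-line polyline through the gate centers lower-bounds the length of any trajectory that visits the gates in order. The helper below is introduced purely for illustration and is not part of the planner:

```python
import math

# Gate centers from the experiment, in traversal order.
gates = [(-10, 0, 2), (0, 10, 4), (10, 0, 2), (0, -10, 2)]

def polyline_length(points):
    """Total straight-line distance through consecutive waypoints."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

# Lower bound on the length of any path visiting the gates in order.
lower_bound = polyline_length(gates)
```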
|
| 322 |
+
|
| 323 |
+

|
| 324 |
+
|
| 325 |
+
Fig. 6. Error between the desired path and the actual trajectory of the drone. The top, middle, and bottom plots represent the error in the $x, y$ , and $z$ directions, respectively.
|
| 326 |
+
|
| 327 |
+
The time-optimal path planning results are shown in Fig. 7 and Fig. 8. In these figures, the orange dashed line represents the time-optimal path generated by the polynomial trajectory planner described in Section IV, and the pink line represents the actual trajectory of the drone under the NMPC controller. The drone successfully navigates through the four gates in a time-optimal manner, demonstrating the effectiveness of the proposed framework in generating aggressive and smooth flight trajectories.
|
| 328 |
+
|
| 329 |
+

|
| 330 |
+
|
| 331 |
+
Fig. 7. Time-optimal path generation and NMPC tracking of the drone through four gates. The orange dashed line represents the time-optimal path, the pink line represents the actual tracking trajectory, and the four squares represent the positions of the gates.
|
| 332 |
+
|
| 333 |
+
The tracking performance of the drone along the $x$, $y$, and $z$ axes is shown in Fig. 9, which indicates that the drone can track the time-optimal path accurately in all three directions.
|
| 334 |
+
|
| 335 |
+
## VIII. CONCLUSION
|
| 336 |
+
|
| 337 |
+
This paper presents a comprehensive framework for time-optimal path generation and control of Unmanned Aerial Vehicles (UAVs) using fourth-order minimum snap trajectory generation and Nonlinear Model Predictive Control (NMPC). The framework is designed to address the challenges of agile high-speed flight in autonomous drone racing, aiming to minimize flight time while adhering to strict dynamical constraints.
|
| 338 |
+
|
| 339 |
+

|
| 340 |
+
|
| 341 |
+
Fig. 8. Top view of the time-optimal path generation and NMPC tracking of the drone through four gates.
|
| 342 |
+
|
| 343 |
+

|
| 344 |
+
|
| 345 |
+
Fig. 9. Tracking performance of the drone through four gates along the $x$, $y$, and $z$ axes. The top, middle, and bottom plots correspond to the $x$, $y$, and $z$ axes, respectively. The horizontal offset indicates the control delay.
|
| 346 |
+
|
| 347 |
+
The proposed method utilizes the fourth-order polynomial trajectory generation approach to generate smooth yet aggressive trajectories. By minimizing the snap term (the fourth derivative of position), the generated trajectories are optimized for high-speed performance while ensuring their feasibility and safety. The integration of the NMPC controller further enhances the system's capabilities by dynamically adjusting control inputs based on real-time state feedback, enabling precise trajectory tracking and resilience against uncertainties during flight.
|
| 348 |
+
|
| 349 |
+
The effectiveness of the proposed framework is evaluated using the Flightmare simulation platform, a high-fidelity drone simulator based on the Unity engine. The experimental results demonstrate that the integration of fourth-order minimum snap trajectory generation with NMPC generates efficient and smooth flight trajectories, significantly reducing flight time while ensuring UAV stability and safety. This approach is well-suited for autonomous UAV operations in complex environments, such as drone racing and aerial photography.
|
| 350 |
+
|
| 351 |
+
Future work could further optimize the trajectory planning and control algorithms, explore adaptive control strategies, and investigate their application in real-world UAV platforms.
|
| 352 |
+
|
| 353 |
+
## REFERENCES
|
| 354 |
+
|
| 355 |
+
[1] Hanover D, Loquercio A, Bauersfeld L, Romero A, Penicka R, Song Y, et al. Autonomous Drone Racing: A Survey. IEEE Trans Robot. 2024;40:3044-67.
|
| 356 |
+
|
| 357 |
+
[2] Loquercio A, Kaufmann E, Ranftl R, Müller M, Koltun V, Scaramuzza D. Learning high-speed flight in the wild. Sci Robot. 2021 Oct 13;6(59).
|
| 358 |
+
|
| 359 |
+
[3] Romero A, Sun S, Foehn P, Scaramuzza D. Model Predictive Contouring Control for Time-Optimal Quadrotor Flight. IEEE Trans Robot. 2022 Dec;38(6):3340-56.
|
| 360 |
+
|
| 361 |
+
[4] Foehn P, Romero A, Scaramuzza D. Time-optimal planning for quadro-tor waypoint flight. Sci Robot. 2021 Jul 21;6(56).
|
| 362 |
+
|
| 363 |
+
[5] Falanga D, Foehn P, Lu P, Scaramuzza D. PAMPC: Perception-Aware Model Predictive Control for Quadrotors. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE; 2018.
|
| 364 |
+
|
| 365 |
+
[6] Houska B, Ferreau HJ, Diehl M. ACADO toolkit-An open-source framework for automatic control and dynamic optimization. Optim Control Appl Meth. 2010 May 25;32(3):298-312.
|
| 366 |
+
|
| 367 |
+
[7] Ferreau HJ, Kirches C, Potschka A, Bock HG, Diehl M. qpOASES: a parametric active-set algorithm for quadratic programming. Math Prog Comp. 2014 Apr 30;6(4):327-63.
|
| 368 |
+
|
| 369 |
+
[8] Song Y, Naji S, Kaufmann E, Loquercio A, Scaramuzza D. Flightmare: A Flexible Quadrotor Simulator. In: Conference on Robot Learning; 2020.
|
| 370 |
+
|
| 371 |
+
[9] Sun S, Romero A, Foehn P, Kaufmann E, Scaramuzza D. A Comparative Study of Nonlinear MPC and Differential-Flatness-Based Control for Quadrotor Agile Flight. IEEE Trans Robot. 2022;1-17.
|
| 372 |
+
|
| 373 |
+
[10] Kohlbrecher S, Meyer J, Graber T, Petersen K, Klingauf U, von Stryk O. Hector Open Source Modules for Autonomous Mapping and Navigation with Rescue Robots. In: RoboCup 2013: Robot World Cup XVII. Berlin, Heidelberg: Springer Berlin Heidelberg; 2014. p. 624-31.
|
| 374 |
+
|
| 375 |
+
[11] Guerra W, Tal E, Murali V, Ryou G, Karaman S. FlightGoggles: Photorealistic Sensor Simulation for Perception-driven Robotics using Photogrammetry and Virtual Reality. In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE; 2019.
|
| 376 |
+
|
| 377 |
+
[12] Shah S, Dey D, Lovett C, Kapoor A. AirSim: High-Fidelity Visual and Physical Simulation for Autonomous Vehicles. In: Field and Service Robotics. Cham: Springer International Publishing; 2017. p. 621-35.
|
| 378 |
+
|
| 379 |
+
[13] Furrer F, Burri M, Achtelik M, Siegwart R. RotorS-A Modular Gazebo MAV Simulator Framework. In: Studies in Computational Intelligence. Cham: Springer International Publishing; 2016. p. 595-625.
|
| 380 |
+
|
| 381 |
+
[14] Faessler M, Franchi A, Scaramuzza D. Differential Flatness of Quadrotor Dynamics Subject to Rotor Drag for Accurate Tracking of High-Speed Trajectories. IEEE Robot Autom Lett. 2018 Apr;3(2):620-6.
|
| 382 |
+
|
| 383 |
+
[15] Mellinger D, Kumar V. Minimum Snap Trajectory Generation and Control for Quadrotors. In: 2011 IEEE International Conference on Robotics and Automation. IEEE; 2011.
|
| 384 |
+
|
| 385 |
+
[16] Mellinger D, Michael N, Kumar V. Trajectory generation and control for precise aggressive maneuvers with quadrotors. Int J Rob Res. 2012 Jan 25;31(5):664-74.
|
| 386 |
+
|
| 387 |
+
[17] Nguyen H, Kamel M, Alexis K, Siegwart R. Model Predictive Control for Micro Aerial Vehicles: A Survey. In: 2021 European Control Conference (ECC). IEEE; 2021.
|
| 388 |
+
|
| 389 |
+
[18] Houska B, Ferreau HJ, Diehl M. An auto-generated real-time iteration algorithm for nonlinear MPC in the microsecond range. Automatica (Oxf). 2011 Oct;47(10):2279-85.
|
| 390 |
+
|
| 391 |
+
[19] "Unity3d Game Engine," https://unity3d.com/, 2019, [Online; accessed 28-February-2019].
|
| 392 |
+
|
| 393 |
+
[20] ZeroMQ: High-performance brokerless messaging. ZeroMQ. https://zeromq.org
|
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/98Wp0EAx6P/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,414 @@
| 1 |
+
§ SIMULATION RESEARCH ON TIME-OPTIMAL PATH PLANNING OF UAV UTILIZING THE FLIGHTMARE PLATFORM
|
| 2 |
+
|
| 3 |
+
${1}^{\text{ st }}$ Yuling Xin
|
| 4 |
+
|
| 5 |
+
School of Automation Engineering
|
| 6 |
+
|
| 7 |
+
University of Electronic Science
|
| 8 |
+
|
| 9 |
+
and Technology of China
|
| 10 |
+
|
| 11 |
+
Chengdu, China
|
| 12 |
+
|
| 13 |
+
xinyuling01@163.com
|
| 14 |
+
|
| 15 |
+
${2}^{\text{ nd }}$ Xin Lu
|
| 16 |
+
|
| 17 |
+
Yangtze Delta Region Institute (Huzhou)
|
| 18 |
+
|
| 19 |
+
University of Electronic Science
|
| 20 |
+
|
| 21 |
+
and Technology of China
|
| 22 |
+
|
| 23 |
+
Huzhou, China
|
| 24 |
+
|
| 25 |
+
luxin_uestc@163.com
|
| 26 |
+
|
| 27 |
+
${3}^{\text{ rd }}$ Fusheng ${\mathrm{{Li}}}^{ * }$
|
| 28 |
+
|
| 29 |
+
School of Automation Engineering
|
| 30 |
+
|
| 31 |
+
University of Electronic Science
|
| 32 |
+
|
| 33 |
+
and Technology of China
|
| 34 |
+
|
| 35 |
+
Chengdu, China
|
| 36 |
+
|
| 37 |
+
lifusheng@uestc.edu.cn
|
| 38 |
+
|
| 39 |
+
Abstract-This paper presents a study on time-optimal path planning and control for Unmanned Aerial Vehicles (UAVs) using fourth-order minimum snap trajectory generation and Nonlinear Model Predictive Control (NMPC) on the Flightmare simulation platform. Targeting the demands of fast flight in complex environments, a fourth-order polynomial trajectory planner is designed to minimize flight time while adhering to dynamical constraints. Integration with an NMPC and a PID controller enables precise tracking and dynamic adjustment of planned trajectories. Experimental results demonstrate that this method generates efficient and smooth flight trajectories, significantly reducing flight time while ensuring UAV stability and safety.
|
| 40 |
+
|
| 41 |
+
Index Terms-Flightmare Platform, Fourth-Order Minimum Snap Trajectory Generation, High-Fidelity Simulation, UAV, NMPC
|
| 42 |
+
|
| 43 |
+
§ I. INTRODUCTION
|
| 44 |
+
|
| 45 |
+
As Unmanned Aerial Vehicle (UAV) technology continues to evolve at a rapid pace, its applications have broadened significantly across diverse fields. UAVs, also known as drones, have become indispensable tools for tasks requiring high-speed, agile, and autonomous responses [1]. These include but are not limited to package delivery, search-and-rescue operations, aerial photography, environmental monitoring, and even military applications [2]. Within these applications, the ability to plan time-optimal flight paths that align seamlessly with UAV dynamics is paramount for improving overall performance and safety.
|
| 46 |
+
|
| 47 |
+
Time-optimal path planning for UAVs is a complex problem that involves optimizing flight trajectories to minimize the total flight time while adhering to various constraints such as dynamical limitations, obstacle avoidance, and energy efficiency [3]. This optimization process not only ensures faster completion of missions but also enhances the stability and safety of the UAVs during operation.
|
| 48 |
+
|
| 49 |
+
Traditional approaches to path planning for UAVs often focus on generating collision-free paths but fail to account for the intricate dynamics of the aircraft, leading to suboptimal flight performance [4]. To overcome this limitation, recent research has explored the integration of advanced trajectory planning and control techniques [9].
|
| 50 |
+
|
| 51 |
+
The fourth-order minimum snap trajectory generation method optimizes the snap term (fourth derivative of the position) of the trajectory [15]. This approach ensures that the generated trajectories are both smooth and aggressive, which is crucial for achieving high-speed flight in complex environments. The integration of an NMPC and a PID controller further enhances the system's capabilities by dynamically adjusting control inputs based on real-time state feedback. This allows for precise tracking of the planned trajectory and resilience against uncertainties during flight.
|
| 52 |
+
|
| 53 |
+
<graphics>
|
| 54 |
+
|
| 55 |
+
Fig. 1. Experimental results on the Flightmare simulation platform.
|
| 56 |
+
|
| 57 |
+
The proposed framework is evaluated using the Flightmare simulation platform, a high-fidelity drone simulator based on the Unity engine. This platform offers precise physics modeling and flexible interfaces for algorithm development, making it an ideal testbed for validating the effectiveness of the proposed method. The experimental results demonstrate that the integration of fourth-order minimum snap trajectory generation with NMPC generates efficient and smooth flight trajectories, significantly reducing flight time while ensuring UAV stability and safety. The Flightmare experimental results are shown in Figure 1.
|
| 58 |
+
|
| 59 |
+
§ II. PROBLEM FORMULATION
|
| 60 |
+
|
| 61 |
+
§ A. AGILE HIGH-SPEED FLIGHT
|
| 62 |
+
|
| 63 |
+
High-speed Unmanned Aerial Vehicles (UAVs) operating in complex environments face numerous challenges in trajectory generation and control. These challenges stem from the intricate dynamics of quadrotors, the stringent requirements on agility, and the need to adapt quickly to unexpected obstacles and environmental changes [1].
|
| 64 |
+
|
| 65 |
+
In terms of trajectory generation, high-speed flight demands trajectories that are not only collision-free but also highly dynamic and aggressive to minimize flight time. Traditional methods of trajectory planning, such as spline interpolation or simple waypoint navigation, often fail to generate trajectories that fully exploit the capabilities of the UAVs, particularly at high speeds [4]. Minimizing the flight time while adhering to strict dynamical constraints and avoiding obstacles is an NP-hard optimization problem that requires sophisticated algorithms to solve efficiently.
|
| 66 |
+
|
| 67 |
+
Control of high-speed UAVs further complicates the problem due to the inherent nonlinearities and uncertainties in the system dynamics. Real-time adjustments are crucial to handle external disturbances, actuator saturation, and sensor noise. Moreover, the fast-changing environment necessitates a control scheme that can rapidly replan and adjust the trajectory on the fly to ensure safety and mission success.
|
| 68 |
+
|
| 69 |
+
In summary, agile high-speed UAVs require:
|
| 70 |
+
|
| 71 |
+
1) Trajectory generation algorithms that can produce smooth yet aggressive trajectories to minimize flight time under strict dynamical and environmental constraints.
|
| 72 |
+
|
| 73 |
+
2) A robust control framework that can dynamically adjust control inputs based on real-time feedback to handle uncertainties and disturbances, ensuring precise tracking of the planned trajectory.
|
| 74 |
+
|
| 75 |
+
§ B. OPTIMAL PROBLEM
|
| 76 |
+
|
| 77 |
+
Traditionally, optimal control problems in the context of UAVs aim to minimize a cost function subject to a set of constraints on the system dynamics and inputs. This formulation allows balancing multiple objectives, such as minimizing flight time, energy consumption, or control effort, while ensuring that the UAV operates within its physical and operational limits.
|
| 78 |
+
|
| 79 |
+
Mathematically, an optimal control problem can be formulated as follows:
|
| 80 |
+
|
| 81 |
+
$$
|
| 82 |
+
\mathop{\min }\limits_{\mathbf{u}}\;{\int }_{{t}_{0}}^{{t}_{f}}{\mathcal{L}}_{a}\left( {\mathbf{x},\mathbf{u}}\right) {dt} \tag{1}
|
| 83 |
+
$$
|
| 84 |
+
|
| 85 |
+
$$
|
| 86 |
+
\text{ subject to }\;\mathbf{r}\left( {\mathbf{x},\mathbf{u},\mathbf{z}}\right) = 0
|
| 87 |
+
$$
|
| 88 |
+
|
| 89 |
+
$$
|
| 90 |
+
\mathbf{h}\left( {\mathbf{x},\mathbf{u},\mathbf{z}}\right) \leq 0
|
| 91 |
+
$$
|
| 92 |
+
|
| 93 |
+
§ III. DRONE MODELING
|
| 94 |
+
|
| 95 |
+
§ A. NOMENCLATURE
|
| 96 |
+
|
| 97 |
+
In this work, we establish the mathematical notation for the quadrotor system. We define a world frame $W$ with an orthonormal basis $\left\{ {{x}_{W},{y}_{W},{z}_{W}}\right\}$ to represent the global environment. Additionally, a body frame $B$ with an orthonormal basis $\left\{ {{x}_{B},{y}_{B},{z}_{B}}\right\}$ is introduced to describe the robot's orientation and position. The body frame is attached to the quadrotor, with its origin aligned with the center of mass as illustrated in Fig. 2.
|
| 98 |
+
|
| 99 |
+
Throughout the document, vectors are denoted in boldface with a prefix indicating the frame of reference and a suffix specifying the vector's origin and terminus. For example, ${\mathbf{p}}_{WB}$ represents the position vector of the body frame $B$ relative to the world frame $W$, expressed in the coordinates of the world frame.
|
| 100 |
+
|
| 101 |
+
To represent the orientation of rigid bodies, including the robot, we employ quaternions. The time derivative of a quaternion ${\mathbf{q}}_{WB} = \left( {{q}_{w},{q}_{x},{q}_{y},{q}_{z}}\right)$ is governed by the skew-symmetric matrix $\Lambda \left( \omega \right)$ , where ${\mathbf{\omega }}_{B} = {\left( {\omega }_{x},{\omega }_{y},{\omega }_{z}\right) }^{T}$ represents the angular velocity.
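A minimal sketch of this quaternion kinematics, assuming the scalar-first convention $\left( {{q}_{w},{q}_{x},{q}_{y},{q}_{z}}\right)$ used above:

```python
import numpy as np

def Lambda(w):
    """Skew-symmetric matrix of Eq. (6), built from omega = (wx, wy, wz)."""
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [wx,  0.0,  wz, -wy],
                     [wy, -wz,  0.0,  wx],
                     [wz,  wy, -wx,  0.0]])

def q_dot(q, w):
    """Quaternion time derivative: 0.5 * Lambda(omega) @ q (scalar first)."""
    return 0.5 * Lambda(w) @ np.asarray(q, dtype=float)
```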
|
| 102 |
+
|
| 103 |
+
<graphics>
|
| 104 |
+
|
| 105 |
+
Fig. 2. Schematic diagrams of the quadrotor model being considered, along with the coordinate systems utilized.
|
| 106 |
+
|
| 107 |
+
§ B. QUADROTOR DYNAMICS
|
| 108 |
+
|
| 109 |
+
The drone is modeled as a rigid body with six degrees of freedom (DoF). The state vector $\mathbf{x} \in {\mathbb{R}}^{13}$ describing the evolution of the drone's configuration over time is given by:
|
| 110 |
+
|
| 111 |
+
$$
|
| 112 |
+
\mathbf{x} = \left\lbrack \begin{matrix} {\mathbf{p}}_{WB} \\ {\mathbf{v}}_{WB} \\ {\mathbf{q}}_{WB} \\ {\mathbf{\omega }}_{B} \end{matrix}\right\rbrack \text{ and }\mathbf{u} = \left\lbrack \begin{matrix} T \\ \mathbf{\tau } \end{matrix}\right\rbrack \tag{2}
|
| 113 |
+
$$
|
| 114 |
+
|
| 115 |
+
where ${\mathbf{p}}_{WB} \in {\mathbb{R}}^{3}$ is the position of the drone’s center of mass in the world frame $W$, ${\mathbf{v}}_{WB} \in {\mathbb{R}}^{3}$ is the linear velocity of the drone in the world frame, ${\mathbf{q}}_{WB}$ is the quaternion representing the rotation from the body frame $B$ to the world frame $W$, and ${\mathbf{\omega }}_{B} \in {\mathbb{R}}^{3}$ is the angular velocity of the drone in the body frame. $T$ is the total thrust produced by the drone’s rotors, and $\mathbf{\tau }$ is the total torque acting on the drone.
|
| 116 |
+
|
| 117 |
+
$$
|
| 118 |
+
\mathbf{J} = \left\lbrack \begin{matrix} {J}_{x} & 0 & 0 \\ 0 & {J}_{y} & 0 \\ 0 & 0 & {J}_{z} \end{matrix}\right\rbrack \tag{3}
|
| 119 |
+
$$
|
| 120 |
+
|
| 121 |
+
where ${J}_{x},{J}_{y}$ , and ${J}_{z}$ are the moments of inertia of the drone about its principal axes.
|
| 122 |
+
|
| 123 |
+
$$
|
| 124 |
+
T = \mathop{\sum }\limits_{{i = 1}}^{4}{f}_{i} \tag{4}
|
| 125 |
+
$$
|
| 126 |
+
|
| 127 |
+
where ${f}_{i}$ is the thrust produced by the i-th rotor.
|
| 128 |
+
|
| 129 |
+
The time derivative of the state vector $\dot{\mathbf{x}}$ is governed by the following equations:
|
| 130 |
+
|
| 131 |
+
$$
|
| 132 |
+
\dot{\mathbf{x}} = f\left( {\mathbf{x},\mathbf{u}}\right) = \left\lbrack \begin{matrix} {\mathbf{v}}_{WB} \\ \frac{1}{m}\left( {m{\mathbf{g}}_{W} + {\mathbf{q}}_{WB} \odot {\mathbf{T}}_{B}}\right) \\ \frac{1}{2}\mathbf{\Lambda }\left( {\mathbf{\omega }}_{B}\right) \cdot {\mathbf{q}}_{WB} \\ {\mathbf{J}}^{-1}\left( {\mathbf{\tau } - {\mathbf{\omega }}_{B} \times \mathbf{J}{\mathbf{\omega }}_{B}}\right) \end{matrix}\right\rbrack \tag{5}
|
| 133 |
+
$$
|
| 134 |
+
|
| 135 |
+
where $\odot$ denotes the quaternion rotation of the thrust vector, ${\mathbf{T}}_{B}$ and $\mathbf{\tau }$ are the total force and torque acting on the drone, respectively, $m$ is the mass of the drone, $\mathbf{J} \in {\mathbb{R}}^{3 \times 3}$ is the inertia matrix, and ${\mathbf{g}}_{W} = {\left\lbrack 0,0, - {9.81}\right\rbrack }^{T}\mathrm{\;m}/{\mathrm{s}}^{2}$ is the gravitational acceleration in the world frame.
|
| 136 |
+
|
| 137 |
+
$\mathbf{\Lambda }$ denotes the skew-symmetric matrix of the angular velocity, given by:
|
| 138 |
+
|
| 139 |
+
$$
|
| 140 |
+
\mathbf{\Lambda }\left( \omega \right) = \left\lbrack \begin{matrix} 0 & - {\omega }_{x} & - {\omega }_{y} & - {\omega }_{z} \\ {\omega }_{x} & 0 & {\omega }_{z} & - {\omega }_{y} \\ {\omega }_{y} & - {\omega }_{z} & 0 & {\omega }_{x} \\ {\omega }_{z} & {\omega }_{y} & - {\omega }_{x} & 0 \end{matrix}\right\rbrack \tag{6}
|
| 141 |
+
$$
|
| 142 |
+
|
| 143 |
+
The torque $\tau$ and total thrust $T$ are related to the individual rotor thrusts ${f}_{i}$ as:
|
| 144 |
+
|
| 145 |
+
$$
|
| 146 |
+
{\mathbf{T}}_{B} = \left\lbrack \begin{array}{l} 0 \\ 0 \\ T \end{array}\right\rbrack \text{ and }\tau = \left\lbrack \begin{matrix} \frac{l}{\sqrt{2}}\left( {{f}_{1} - {f}_{2} - {f}_{3} + {f}_{4}}\right) \\ \frac{l}{\sqrt{2}}\left( {-{f}_{1} - {f}_{2} + {f}_{3} + {f}_{4}}\right) \\ {c}_{\tau }\left( {{f}_{1} - {f}_{2} + {f}_{3} - {f}_{4}}\right) \end{matrix}\right\rbrack \tag{7}
|
| 147 |
+
$$
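Equation (7) is a linear map from the per-rotor thrusts to the collective thrust and body torque. A direct transcription follows, with `l` and `c_tau` defaulted to the simulation's quadrotor configuration values; here `c_tau` simply scales thrust differences to yaw torque, exactly as written in Eq. (7):

```python
import math
import numpy as np

def mix(f, l=0.125, c_tau=2.1e-6):
    """Map rotor thrusts f = (f1, f2, f3, f4) to collective thrust T and
    body torque tau following Eq. (7) (X configuration, arm length l)."""
    f1, f2, f3, f4 = f
    T = f1 + f2 + f3 + f4
    tau = np.array([
        l / math.sqrt(2) * ( f1 - f2 - f3 + f4),
        l / math.sqrt(2) * (-f1 - f2 + f3 + f4),
        c_tau * (f1 - f2 + f3 - f4),
    ])
    return T, tau
```

With equal rotor thrusts the torque vanishes and only collective thrust remains, which is the hover condition.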
|
| 148 |
+
|
| 149 |
+
§ IV. PATH GENERATION
|
| 150 |
+
|
| 151 |
+
In this section, we discuss the methods used for generating time-optimal paths for autonomous drone racing. Specifically, we focus on polynomial trajectory planning, particularly the use of fourth-order polynomials to minimize the snap of the trajectory, as this objective leads to aggressive and smooth trajectories suitable for drone racing.
|
| 152 |
+
|
| 153 |
+
§ A. POLYNOMIAL TRAJECTORY PLANNING
|
| 154 |
+
|
| 155 |
+
Polynomial trajectory planning leverages the differential flatness property of quadrotors to simplify full-state trajectory planning to a problem of planning only a few flat outputs (typically position and yaw) [14]. By representing the trajectory as a polynomial, we can efficiently compute the control inputs that achieve the desired trajectory [15].
|
| 156 |
+
|
| 157 |
+
1) Minimizing Snap: To generate aggressive and smooth trajectories, the objective is to minimize the snap (fourth-order derivative of position) of the trajectory [15] [16]. The snap $s\left( t\right)$ of a polynomial trajectory $p\left( t\right) = {a}_{0} + {a}_{1}t + {a}_{2}{t}^{2} + {a}_{3}{t}^{3} + {a}_{4}{t}^{4}$ can be written as:
|
| 158 |
+
|
| 159 |
+
$$
|
| 160 |
+
s\left( t\right) = {p}^{\left( 4\right) }\left( t\right) = {24}{a}_{4} \tag{8}
|
| 161 |
+
$$
|
| 162 |
+
|
| 163 |
+
where ${p}^{\left( 4\right) }\left( t\right)$ denotes the fourth-order derivative of $p\left( t\right)$ with respect to time $t$; for a quartic polynomial this derivative is the constant ${24}{a}_{4}$.
|
| 164 |
+
|
| 165 |
+
The optimization problem can then be formulated as finding the polynomial coefficients ${a}_{0},{a}_{1},{a}_{2},{a}_{3},{a}_{4}$ that minimize the integral of the square of the snap over the trajectory duration $T$ :
|
| 166 |
+
|
| 167 |
+
$$
|
| 168 |
+
\mathop{\min }\limits_{{{a}_{0},{a}_{1},{a}_{2},{a}_{3},{a}_{4}}}{\int }_{0}^{T}s{\left( t\right) }^{2}{dt} = {\int }_{0}^{T}{\left( {24}{a}_{4}\right) }^{2}{dt} \tag{9}
|
| 169 |
+
$$
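Since the fourth derivative of a quartic is the constant ${24}{a}_{4}$, this objective evaluates in closed form:

```latex
\int_{0}^{T} s(t)^{2}\, dt
  = \int_{0}^{T} \left( 24 a_{4} \right)^{2} dt
  = 576\, a_{4}^{2}\, T
```

so, for a single segment of fixed duration $T$, minimizing snap amounts to keeping ${a}_{4}$ small.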
|
| 170 |
+
|
| 171 |
+
However, in practice, we often minimize the maximum snap or add additional constraints and costs related to trajectory duration, smoothness, and feasibility. The full optimization problem includes constraints on the initial and final states of the drone (position, velocity, acceleration, and jerk) as well as any intermediate waypoints or obstacle avoidance constraints.
|
| 172 |
+
|
| 173 |
+
2) Time Allocation: Finding the optimal time allocation along the trajectory (i.e., determining how fast the drone should travel through each segment) is crucial for achieving minimum lap times. This is typically done by optimizing the polynomial coefficients jointly with the trajectory duration $T$ :
|
| 174 |
+
|
| 175 |
+
$$
|
| 176 |
+
\mathop{\min }\limits_{{{a}_{0},{a}_{1},{a}_{2},{a}_{3},{a}_{4},T}}\left( {{\int }_{0}^{T}s{\left( t\right) }^{2}{dt} + \lambda \cdot T}\right) \tag{10}
|
| 177 |
+
$$
|
| 178 |
+
|
| 179 |
+
where $\lambda$ is a weight factor balancing the snap minimization and the total trajectory time.
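This trade-off can be sketched for a single segment using the standard scaling argument: stretching time by a factor $k$ scales snap by ${k}^{-4}$ and its squared integral by ${k}^{-7}$, so the snap cost behaves as $c/{T}^{7}$ for a shape-dependent constant $c$. The values of `c` and `lam` below are illustrative assumptions:

```python
import numpy as np

def total_cost(T, c=1.0, lam=10.0):
    """Eq. (10) for one segment: snap cost (~ c / T**7 under time
    scaling) plus the duration penalty lam * T."""
    return c / T**7 + lam * T

# Grid search over candidate durations; the analytic optimum of
# d/dT (c*T**-7 + lam*T) = 0 is T* = (7c/lam)**(1/8).
Ts = np.linspace(0.2, 3.0, 2000)
T_best = Ts[np.argmin(total_cost(Ts))]
T_star = (7 * 1.0 / 10.0) ** (1 / 8)
```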
|
| 180 |
+
|
| 181 |
+
§ B. IMPLEMENTATION
|
| 182 |
+
|
| 183 |
+
Implementing a fourth-order polynomial trajectory planner involves solving the optimization problem described above. This can be done using numerical optimization techniques such as quadratic programming or nonlinear optimization solvers. The resulting trajectory is then used as a reference for the low-level controller to track.
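As a minimal instance of that numerical step, the five coefficients of a single quartic segment can be recovered from five boundary conditions with one linear solve; this is a simplified stand-in for the full QP with waypoint and obstacle constraints:

```python
import numpy as np

def quartic_coeffs(p0, v0, acc0, pT, vT, T):
    """Coefficients a0..a4 of p(t) = a0 + a1*t + a2*t^2 + a3*t^3 + a4*t^4
    matching position/velocity/acceleration at t = 0 and position/velocity
    at t = T (five conditions, five unknowns)."""
    A = np.array([
        [1.0, 0.0, 0.0,   0.0,     0.0    ],  # p(0)
        [0.0, 1.0, 0.0,   0.0,     0.0    ],  # v(0)
        [0.0, 0.0, 2.0,   0.0,     0.0    ],  # acc(0)
        [1.0, T,   T**2,  T**3,    T**4   ],  # p(T)
        [0.0, 1.0, 2*T,   3*T**2,  4*T**3 ],  # v(T)
    ])
    return np.linalg.solve(A, np.array([p0, v0, acc0, pT, vT], dtype=float))

# Rest-to-rest unit displacement over one second.
coeffs = quartic_coeffs(0.0, 0.0, 0.0, 1.0, 0.0, 1.0)
```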
|
| 184 |
+
|
| 185 |
+
In this paper, we adopt the polynomial trajectory planning approach to generate optimal paths. This method generates time-optimal trajectories by minimizing the snap of the trajectory.
|
| 186 |
+
|
| 187 |
+
In summary, polynomial trajectory planning with a focus on minimizing the snap of the trajectory is a powerful method for generating time-optimal and feasible paths for autonomous drone racing. This approach leverages the differential flatness property of quadrotors and enables the use of efficient optimization techniques to find optimal trajectories in real time.
|
| 188 |
+
|
| 189 |
+
§ V. MODEL PREDICTIVE CONTROL
|
| 190 |
+
|
| 191 |
+
Model Predictive Control (MPC) is a powerful technique for controlling complex systems with dynamical constraints [17]. For agile quadrotor flight, Nonlinear Model Predictive Control (NMPC) is particularly suited due to its ability to handle nonlinear dynamics and constraints effectively [9]. In this section, we detail the formulation and implementation of NMPC for quadrotor control.
|
| 192 |
+
|
| 193 |
+
§ A. NMPC FORMULATION
|
| 194 |
+
|
| 195 |
+
The NMPC generates control inputs by solving a finite-time optimal control problem (OCP) over a receding horizon. The objective is to minimize the tracking error between the predicted states and reference states, while adhering to the system dynamics and constraints [5]. The optimization problem can be formulated as follows:
|
| 196 |
+
|
| 197 |
+
$$
|
| 198 |
+
{\mathcal{L}}_{a} = {\overline{\mathbf{x}}}_{N}^{T}{Q}_{N}\overline{{\mathbf{x}}_{N}} + \mathop{\sum }\limits_{{i = 1}}^{{N - 1}}\left( {{\overline{\mathbf{x}}}_{i}^{T}{Q}_{i}\overline{{\mathbf{x}}_{i}} + {\overline{\mathbf{u}}}_{i}^{T}{R}_{i}{\overline{\mathbf{u}}}_{i}}\right)
|
| 199 |
+
$$
|
| 200 |
+
|
| 201 |
+
$$
|
| 202 |
+
\text{ s.t. }
|
| 203 |
+
$$
|
| 204 |
+
|
| 205 |
+
$$
|
| 206 |
+
{\mathbf{x}}_{0} = {\mathbf{x}}_{\text{ init }} \tag{11}
|
| 207 |
+
$$
|
| 208 |
+
|
| 209 |
+
$$
|
| 210 |
+
{\mathbf{x}}_{k + 1} = f\left( {{\mathbf{x}}_{k},{\mathbf{u}}_{k}}\right) ,
|
| 211 |
+
$$
|
| 212 |
+
|
| 213 |
+
$$
|
| 214 |
+
{\mathbf{x}}_{k} \in \left\lbrack {{\mathbf{x}}_{\min },{\mathbf{x}}_{\max }}\right\rbrack
|
| 215 |
+
$$
|
| 216 |
+
|
| 217 |
+
$$
|
| 218 |
+
{\mathbf{u}}_{k} \in \left\lbrack {{\mathbf{u}}_{\min },{\mathbf{u}}_{\max }}\right\rbrack
|
| 219 |
+
$$
|
| 220 |
+
|
| 221 |
+
where ${\overline{\mathbf{x}}}_{N}^{T}{Q}_{N}{\overline{\mathbf{x}}}_{N}$ is the terminal cost, ${\overline{\mathbf{x}}}_{i}^{T}{Q}_{i}{\overline{\mathbf{x}}}_{i}$ and ${\overline{\mathbf{u}}}_{i}^{T}{R}_{i}{\overline{\mathbf{u}}}_{i}$ are the stage costs, $f\left( {{\mathbf{x}}_{k},{\mathbf{u}}_{k}}\right)$ represents the discrete-time quadrotor dynamics, and ${Q}_{i},{R}_{i}$, and ${Q}_{N}$ are positive definite weight matrices. The constraints ensure that the states and control inputs remain within specified bounds. The error terms are defined as $\overline{\mathbf{x}} = \mathbf{x} - {\mathbf{x}}_{\text{ref}}$ and $\overline{\mathbf{u}} = \mathbf{u} - {\mathbf{u}}_{\text{ref}}$, respectively.
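Given a predicted state/input sequence, the objective above can be evaluated directly. The sketch below assumes a constant reference for brevity; per-stage references would be handled analogously:

```python
import numpy as np

def nmpc_cost(xs, us, x_ref, u_ref, Q, R, QN):
    """Quadratic NMPC objective of Eq. (11): stage costs on state and
    input errors over the horizon plus a terminal state-error cost."""
    cost = 0.0
    for x, u in zip(xs[:-1], us):
        ex, eu = x - x_ref, u - u_ref
        cost += ex @ Q @ ex + eu @ R @ eu
    eN = xs[-1] - x_ref
    return cost + eN @ QN @ eN
```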
|
| 222 |
+
|
| 223 |
+
§ B. DISCRETIZATION OF DYNAMICS
|
| 224 |
+
|
| 225 |
+
The continuous-time quadrotor dynamics need to be discretized for use in the NMPC framework. This can be achieved using numerical integration schemes such as Euler integration or Runge-Kutta methods. In our implementation, we use multiple shooting as the transcription method and Runge-Kutta integration [18] to discretize the dynamics.
|
| 226 |
+
|
| 227 |
+
$$
|
| 228 |
+
{x}_{k + 1} = {f}_{\mathrm{{RK}}4}\left( {{x}_{k},{u}_{k},{\Delta t}}\right) \tag{12}
|
| 229 |
+
$$
|
| 230 |
+
|
| 231 |
+
where ${f}_{\mathrm{{RK}}4}$ is the Runge-Kutta 4th order integration function and ${\Delta t}$ is the discretization time step.
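A minimal sketch of such an RK4 step follows; the function name is ours, and the paper's implementation relies on the solver toolkit's built-in integrators rather than hand-written code.

```python
def rk4_step(f, x, u, dt):
    """One fourth-order Runge-Kutta step of dx/dt = f(x, u),
    holding the control input u constant over the step (zero-order hold)."""
    k1 = f(x, u)
    k2 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k1)], u)
    k3 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k2)], u)
    k4 = f([xi + dt * ki for xi, ki in zip(x, k3)], u)
    # Weighted average of the four slope estimates.
    return [xi + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
```

For the linear test system $\dot{x} = x$, a single step with $\Delta t = 0.1$ reproduces $e^{0.1}$ to within the scheme's $O(\Delta t^5)$ local error, which is why RK4 is a common default for discretizing smooth quadrotor dynamics.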
§ C. CONSTRAINT HANDLING

Efficient constraint handling within the optimization framework is crucial for real-time performance. The NMPC formulation includes constraints on the angular velocities ${\mathbf{\Omega }}_{\mathrm{B}}$, thrust $T$, velocities ${\mathbf{v}}_{WB}$, and control inputs $\mathbf{u}$, ensuring that the control actions remain within the physical limits of the quadrotor.
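The box constraints above amount to componentwise saturation. The sketch below shows this idea in isolation (our naming; in the actual framework the bounds are enforced inside the QP solver rather than by clipping):

```python
def clip_to_box(v, v_min, v_max):
    """Saturate each component of v to its interval [v_min[i], v_max[i]]."""
    return [min(max(vi, lo), hi) for vi, lo, hi in zip(v, v_min, v_max)]
```

Such a saturation step is also a common safety net applied to solver outputs before commanding the vehicle, so that a numerically infeasible solution can never exceed the actuator limits.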
Fig. 3. Block diagram of the Nonlinear Model Predictive Controller with PID inner loop controller.
§ D. OPTIMIZATION SOLVER

The resulting nonlinear optimization problem is solved using a suitable solver, such as Sequential Quadratic Programming (SQP). In our implementation, we utilize the ACADO Toolkit [6] with qpOASES [7] as the underlying quadratic programming solver.
§ E. INTEGRATION WITH PID CONTROLLER

While NMPC provides a powerful framework for trajectory optimization and control, a PID controller can complement it for enhanced stability and responsiveness. The PID controller regulates low-level system dynamics, such as the quadrotor's attitude, while the NMPC controller focuses on high-level trajectory tracking. The integration of the two controllers is illustrated in Figure 3, where the NMPC controller generates the desired setpoints for the PID controller based on the time-optimal trajectory. The controller gains and parameters for the NMPC and PID controllers are summarized in Table I.

By integrating the PID and NMPC controllers, we achieve a robust and responsive control system that can dynamically adjust to changes in the environment and mission requirements.
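A discrete PID law of the kind used for the inner loop can be sketched as follows. The gains match Table I, but the class itself is our illustrative reconstruction, not the paper's implementation.

```python
class PID:
    """Discrete PID controller for the inner attitude loop (illustrative)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        """One control update: P on the error, I on its running sum,
        D on its backward difference."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

With the Table I values (`kp=50`, `ki=1`, `kd=0.01`) the controller is strongly proportional-dominant, consistent with a fast inner attitude loop that follows the setpoints produced by the slower NMPC outer loop.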
|
| 250 |
+
|
| 251 |
+
TABLE I
|
| 252 |
+
|
| 253 |
+
CONTROLLER GAINS AND PARAMETERS COMPARISON
|
| 254 |
+
|
| 255 |
+
max width=
|
| 256 |
+
|
| 257 |
+
2|c|NMPC 2|c|PID
|
| 258 |
+
|
| 259 |
+
1-4
|
| 260 |
+
Parameter Value Parameter Value
|
| 261 |
+
|
| 262 |
+
1-4
|
| 263 |
+
$Q$ diag(200, 200, 500) ${K}_{p}$ 50
|
| 264 |
+
|
| 265 |
+
1-4
|
| 266 |
+
$R$ diag(10, 50) ${K}_{i}$ 1
|
| 267 |
+
|
| 268 |
+
1-4
|
| 269 |
+
${dt}$ 50 ms ${K}_{d}$ 0.01
|
| 270 |
+
|
| 271 |
+
1-4
|
| 272 |
+
$\mathrm{N}$ 20 X X
|
| 273 |
+
|
| 274 |
+
1-4
|
| 275 |
+
|
| 276 |
+
§ VI. FLIGHTMARE

In this section, we introduce the Flightmare [8] simulation platform and discuss its advantages for validating the proposed time-optimal path planning and control framework. Flightmare is a high-fidelity quadrotor simulator designed for research and development, offering a range of features that make it an ideal testbed for evaluating UAV algorithms. We highlight the platform's unique capabilities and discuss the experimental setup used to validate the proposed method.

§ A. COMPARISON OF QUADROTOR SIMULATORS

In contrast to Hector [10], FlightGoggles [11], and AirSim [12] from Table II, Flightmare offers a unique combination of features that make it well-suited for UAV research. Flightmare's rendering engine is based on Unity, providing a flexible and high-speed rendering environment that can be tailored to the user's needs. The platform's physics simulation engine is highly configurable, supporting a range of dynamics from simple to real-world quadrotor behaviors. Flightmare is the only simulator among those compared that provides a point cloud extraction feature and an RL API, making it particularly suited for tasks requiring 3D environmental information and reinforcement learning-based control policies. Additionally, Flightmare can simulate multiple vehicles concurrently, facilitating research on multi-drone applications. Flightmare is therefore chosen as the simulation platform for validating the proposed method.
TABLE II

A COMPARISON OF FLIGHTMARE TO OTHER OPEN-SOURCE QUADROTOR SIMULATORS

| Simulator | Rendering | Dynamics | Sensor Suite | Point Cloud | RL API | Vehicles |
| --- | --- | --- | --- | --- | --- | --- |
| Hector [10] | OpenGL | Gazebo-based | IMU, RGB | ✘ | ✘ | Single |
| FlightGoggles [11] | Unity | Flexible | IMU, RGB | ✘ | ✘ | Single |
| AirSim [12] | Unreal Engine | PhysX | IMU, RGB, Depth, Seg | ✘ | ✘ | Multiple |
| Flightmare [8] | Unity | Flexible | IMU, RGB, Depth, Seg | ✓ | ✓ | Multiple |
§ B. ADVANTAGES OF THE FLIGHTMARE PLATFORM

1) Decoupled Rendering and Physics Engine: One of the key strengths of Flightmare lies in its decoupled architecture, where the rendering engine based on Unity [19] is separated from the physics simulation engine. This design choice enables Flightmare to achieve remarkable performance: rendering speeds of up to 230 Hz and physics simulation frequencies of up to 200,000 Hz on a standard laptop [8]. This separation also allows users to flexibly adjust the balance between visual fidelity and simulation speed, tailored to specific research needs.

2) Flexible Sensor Suite: Flightmare comes equipped with a rich and configurable sensor suite, including an IMU, RGB cameras with ground-truth depth and semantic segmentation, range finders, and collision detection capabilities. This enables researchers to simulate a wide range of sensing modalities, critical for developing and testing perception-driven algorithms. Furthermore, Flightmare provides APIs to extract the full 3D point cloud of the simulated environment, facilitating path planning and obstacle avoidance tasks.

3) Scalability and Parallel Simulation: The platform's flexibility extends to supporting large-scale simulations, enabling the parallel simulation of hundreds of quadrotors. This feature is invaluable for reinforcement learning applications, where data efficiency is crucial. By simulating multiple agents in parallel, Flightmare allows for rapid data collection, significantly accelerating the training process for control policies.

4) Open-Source and Modular Design: Flightmare's open-source nature and modular design encourage collaboration and extensibility. The platform provides a clear and well-documented API, facilitating integration with existing research tools and libraries. The modular structure also makes it easy to swap out components, such as the physics engine or rendering backend, based on specific research requirements. In this work, we use RotorS [13] as the underlying quadrotor dynamics model in Flightmare, demonstrating the platform's flexibility and modularity.

Fig. 4. Block diagram of the integration of control algorithms with Flightmare.
§ VII. EXPERIMENTS

In this section, we present the experimental setup and results of the proposed time-optimal path planning and control framework for autonomous drone racing. The integration of polynomial trajectory planning and NMPC is validated in a simulated environment using the Flightmare platform. The results demonstrate the effectiveness of the proposed method in generating efficient and smooth flight trajectories, enabling UAVs to navigate precisely and stably along planned paths.

§ A. EXPERIMENTAL SETUP

To evaluate the proposed time-optimal path planning and control framework in the Flightmare simulation platform, we first design the control flow shown in Fig. 4. Flightmare decouples the rendering and physics engines, and the interface between the rendering engine and the quadrotor dynamics is implemented using the high-performance asynchronous messaging library ZeroMQ [20].
The quadrotor configuration used in the simulation is shown in Table III.

§ B. TRAJECTORY TRACKING PERFORMANCE ON A GIVEN PATH

To evaluate the trajectory tracking performance of the proposed framework, we first consider a simple scenario where the drone is required to track a given path. The path is defined as a spiral ascent trajectory given by:

$$
\mathbf{p}\left( t\right) = \left\lbrack \begin{matrix} r\left( t\right) \cos \left( {\omega t}\right) \\ r\left( t\right) \sin \left( {\omega t}\right) \\ {v}_{z}t \end{matrix}\right\rbrack \tag{13}
$$

where $r\left( t\right) = {r}_{0} + {v}_{r}t$ is the radius of the spiral, $\omega$ is the angular velocity, and ${v}_{z}$ is the vertical velocity. The drone is required to track this path while ascending at a constant vertical velocity.
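Sampling the reference path of Eq. (13) is straightforward; the following sketch does so with illustrative parameter values (the function name and the specific values of $r_0$, $v_r$, $\omega$, and $v_z$ are ours, not the experiment's):

```python
import math

def spiral_point(t, r0, v_r, omega, v_z):
    """Position p(t) on the spiral ascent reference path of Eq. (13)."""
    r = r0 + v_r * t  # linearly growing spiral radius r(t) = r0 + v_r * t
    return (r * math.cos(omega * t),
            r * math.sin(omega * t),
            v_z * t)

# Sample the reference at the NMPC rate, e.g. dt = 50 ms over 10 s.
path = [spiral_point(k * 0.05, 1.0, 0.1, 1.0, 0.5) for k in range(200)]
```

Evaluating the path at the controller's discretization step yields the sequence of reference states $\mathbf{x}_{\text{ref}}$ fed to the NMPC tracking cost.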
TABLE III

QUADROTOR CONFIGURATIONS

| Parameter(s) | Value(s) |
| --- | --- |
| $m\;[\mathrm{kg}]$ | 0.6 |
| $l\;[\mathrm{m}]$ | 0.125 |
| ${J}_{x}\;[\mathrm{kg \cdot m^{2}}]$ | 2.1e-3 |
| ${J}_{y}\;[\mathrm{kg \cdot m^{2}}]$ | 2.3e-3 |
| ${J}_{z}\;[\mathrm{kg \cdot m^{2}}]$ | 4.0e-3 |
| $\left( {T}_{\min }, {T}_{\max }\right)\;[\mathrm{N}]$ | (0, 8.5) |
| ${c}_{\tau }\;[\mathrm{N \cdot m/(rad/s)^{2}}]$ | 2.1e-6 |
| ${c}_{T}\;[\mathrm{N/(rad/s)^{2}}]$ | 1.2e-6 |
The trajectory tracking performance of the proposed NMPC controller is shown in Fig. 5. In the figure, the pink dashed line represents the desired path, while the orange line represents the actual trajectory of the drone. The drone successfully tracks the spiral ascent trajectory, demonstrating the effectiveness of the proposed framework in generating smooth and accurate flight trajectories.

The error between the desired path and the actual trajectory is shown in Fig. 6. The error remains within an acceptable range, indicating that the drone is able to track the desired path accurately.

Fig. 5. Drone tracking the trajectory of a given spiral ascent path. The pink dashed line represents the desired path, while the orange line represents the actual trajectory of the drone.

§ C. TIME-OPTIMAL PATH PLANNING FOR NMPC CONTROLLER

In this experiment, the drone has to navigate through four gates in a time-optimal manner; the gates are placed at $\left( {-{10},0,2}\right)$, $\left( {0,{10},4}\right)$, $\left( {{10},0,2}\right)$, and $\left( {0, - {10},2}\right)$, respectively.

Fig. 6. Error between the desired path and the actual trajectory of the drone. The top, middle, and bottom plots represent the error in the $x$, $y$, and $z$ directions, respectively.
The time-optimal path planning results are shown in Fig. 7 and Fig. 8. In these figures, the orange dashed line represents the time-optimal path generated by the polynomial trajectory planner described in Section IV, and the pink line represents the actual trajectory of the drone under the NMPC controller. The drone successfully navigates through the four gates in a time-optimal manner, demonstrating the effectiveness of the proposed framework in generating aggressive yet smooth flight trajectories.

Fig. 7. Time-optimal path generation and NMPC tracking of the drone through four gates. The orange dashed line represents the time-optimal path, the pink line represents the actual tracking trajectory, and the four squares represent the positions of the gates.

The tracking performance of the drone along the $x$, $y$, and $z$ axes is shown in Fig. 9, which indicates that the drone can track the time-optimal path accurately in all three directions.
§ VIII. CONCLUSION

This paper presents a comprehensive framework for time-optimal path generation and control of Unmanned Aerial Vehicles (UAVs) using fourth-order minimum snap trajectory generation and Nonlinear Model Predictive Control (NMPC). The framework is designed to address the challenges of agile high-speed flight in autonomous drone racing, aiming to minimize flight time while adhering to strict dynamical constraints.

Fig. 8. Top view of the time-optimal path generation and NMPC tracking of the drone through four gates.

Fig. 9. Tracking performance of the drone through four gates along the $x$, $y$, and $z$ axes. The top, middle, and bottom plots represent the tracking performance in the $x$, $y$, and $z$ axes, respectively. The horizontal error indicates the control delay.

The proposed method utilizes the fourth-order polynomial trajectory generation approach to generate smooth yet aggressive trajectories. By minimizing the snap term (the fourth derivative of position), the generated trajectories are optimized for high-speed performance while ensuring feasibility and safety. The integration of the NMPC controller further enhances the system's capabilities by dynamically adjusting control inputs based on real-time state feedback, enabling precise trajectory tracking and resilience against uncertainties during flight.

The effectiveness of the proposed framework is evaluated using the Flightmare simulation platform, a high-fidelity drone simulator based on the Unity engine. The experimental results demonstrate that the integration of fourth-order minimum snap trajectory generation with NMPC generates efficient and smooth flight trajectories, significantly reducing flight time while ensuring UAV stability and safety. This approach is well-suited for autonomous UAV operations in complex environments, such as drone racing and aerial photography.

Future work could further optimize the trajectory planning and control algorithms, explore adaptive control strategies, and investigate their application on real-world UAV platforms.
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/AQH0VuK6rp/Initial_manuscript_md/Initial_manuscript.md
# Synchronization of Coupled Delayed Discontinuous Systems via Event-Triggered Intermittent Control

${1}^{\text{st}}$ Rongqiang Tang

College of Electronics and Information Engineering

Sichuan University

Chengdu, Sichuan

tangrongqiang@stu.scu.edu.cn

${2}^{\text{nd}}$ Xinsong Yang*

College of Electronics and Information Engineering

Sichuan University

Chengdu, Sichuan

xinsongyang@scu.edu.cn

Abstract-This talk focuses on the complete synchronization of coupled delayed discontinuous systems (DDSs). Without constraints on the derivatives of time delays, several new conditions are exploited to guarantee the global existence of Filippov solutions for DDSs. A nonsmooth intermittent control combined with an event-triggering strategy is then designed. The conspicuous feature of this control scheme is that the measurement error in the event-triggering mechanism is formulated in a linear form, which can reduce the computational burden compared to classical approaches. To address the challenges posed by Filippov solutions and intermittent control, novel analytical techniques, including an original lemma and a weighted-norm-based Lyapunov function, are developed so that sufficient synchronization conditions for DDSs are obtained. Finally, the effectiveness of the theoretical findings is confirmed by Hopfield neural networks.

Index Terms-Discontinuous systems, event-triggered intermittent control, Filippov solution, synchronization, time delays.
## I. INTRODUCTION

Coupled discontinuous systems (DSs), modeled by interconnected differential equations with discontinuous right-hand sides, are a special type of complex network. Their applications span various areas of applied science and engineering, such as variable structure systems, neural networks [1], control synthesis [2], etc. Recently, there has been substantial attention on the dynamic behaviors of DSs with or without time delays, covering stability, stabilization, and synchronization [3]-[5].

Considering the discontinuities of the states on the right-hand side of DSs, especially delayed DSs (DDSs), it is paramount to discuss the existence of Filippov solutions. Some limitations on time delays are necessary to ensure the existence of Filippov solutions for DDSs. For example, [1] considered DDSs with constant delays. Liu et al. [6] demanded that the state variables with time delays satisfy $\parallel z\left( {t - \sigma \left( t\right) }\right) \parallel \leq \parallel z\left( t\right) \parallel + \mathop{\max }\limits_{{1 \leq i \leq n}}\mathop{\max }\limits_{{-\sigma \leq s \leq 0}}\left\{ {{z}_{i}\left( s\right) }\right\}$, where $z\left( t\right) \in {\mathbb{R}}^{n}$ is the state variable and $\sigma \left( t\right) \in \left\lbrack {0,\sigma }\right\rbrack$ is the time delay. Yang et al. [7], [8] provided sufficient criteria for the existence of global Filippov solutions for DDSs, based on the condition that the derivatives of time delays are less than 1. However, in reality, the derivatives of some time delays can exceed or equal 1, and the delays can even be non-differentiable in some cases. A fundamental question arises: what conditions guarantee the existence of Filippov solutions for DDSs when these constraints are removed?

To study the synchronization of coupled DDSs (CDDSs), the basic idea is to transform CDDSs into uncertain systems using Filippov regularization and the measurable selection theorem, and then to address the corresponding issues for the uncertain systems [8]. Quasi-synchronization criteria for CDDSs have been obtained via smooth state feedback control [6], [9]. A nonsmooth control incorporating sign functions was proposed to achieve complete synchronization of CDDSs [7], where the sign function is used to mitigate the effects of uncertainties caused by Filippov solutions. Subsequent results on exponential, finite-time, and fixed-time synchronization of CDDSs have been published in [10]-[13]. However, little work has been done to achieve the complete synchronization of CDDSs via intermittent control. In fact, intermittent control offers better robustness and lower control cost than continuous control, since control signals can be artificially interrupted without affecting the final control purposes [14]-[18]. If intermittent control is adopted for complete synchronization of CDDSs, the main obstacle lies in the fact that the uncertainties posed by Filippov solutions are difficult to cancel out during the interrupted intervals of the control signals. How to develop new analytical methods to study the complete synchronization of CDDSs with intermittent control is thus another motivation.

Event-triggered control has recently sparked increasing interest due to its ability to reduce computational overhead by updating the sampled signal based on a preset supervision mechanism [19]-[21]. To fully leverage the merits of the event-triggered strategy and intermittent control, this paper considers the complete synchronization of general CDDSs via a novel event-triggered intermittent control. The primary contributions of this work are:
1) The existence of Filippov solutions of DDSs is discussed. Different from existing papers [1], [6]-[8], several harsh restrictions on delays are removed.

2) A novel lemma is developed to address the difficulties induced by intermittent control. Then, complete synchronization criteria for CDDSs with intermittent control are obtained for the first time.

---

This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant Nos. 62373262 and 62303336, in part by the Central Guiding Local Science and Technology Development special project of Sichuan, in part by the Fundamental Research Funds for Central Universities under Grant No. 2022SCU12009, and in part by the Sichuan Province Natural Science Foundation of China (NSFSC) under Grant Nos. 2022NSFSC0541, 2022NSFSC0875, and 2023NSFSC1433. (Corresponding Author: Xinsong Yang)

---

3) A simple robust intermittent control scheme is designed by combining an event-triggered strategy with nonsmooth control. Unlike many event-triggered nonsmooth controls [12], [17], a measurement error (ME) in linear form is considered for the event-triggering mechanism (ETM), which facilitates easy computation (see Table I).

Notation: Let ${\mathcal{D}}^{ + }\left\lbrack \cdot \right\rbrack$ be the upper right Dini derivative operator, and ${\mathbb{N}}_{k}^{j} \triangleq \{ k, k + 1,\ldots , j\}$ with $k < j \in \mathbb{N}$. For $a \in {\mathbb{R}}^{n}$, let $\operatorname{cl}{\left( {a}_{i}\right) }_{n} = {\left( {a}_{1},{a}_{2},\ldots ,{a}_{n}\right) }^{\top }$ and $\operatorname{dg}{\left( {a}_{i}\right) }_{n} = \operatorname{diag}\left( {{a}_{1},{a}_{2},\ldots ,{a}_{n}}\right)$; $\operatorname{dg}\left( \cdot \right)$ also denotes a block-diagonal matrix. Let $\operatorname{sg}\left( a\right) = \frac{a}{\parallel a\parallel }$ if $\parallel a\parallel \neq 0$, and $\operatorname{sg}\left( a\right) = 0$ otherwise. The other notations used in this paper are the same as those in [16].
## II. Preliminaries

In this paper, the problem of synchronization and control in an array of coupled DDSs is considered. Before starting the main work, several necessary preparations on the solution of DDSs and the stability theorem are provided.

## A. Filippov solution of DDSs

Consider a DDS as follows:

$$
\dot{z}\left( t\right) = F\left( {z,{z}_{\sigma }}\right) ,\; z\left( s\right) = \varphi \left( s\right) \in \mathcal{C}\left( {\left\lbrack {-\sigma ,0}\right\rbrack ,{\mathbb{R}}^{n}}\right) . \tag{1}
$$

Here $F\left( {z,{z}_{\sigma }}\right) \triangleq {Cz}\left( t\right) + {Ah}\left( {z\left( t\right) }\right) + {Bg}\left( {z\left( {t - \sigma \left( t\right) }\right) }\right)$, $z\left( t\right) \in {\mathbb{R}}^{n}$ denotes the state vector, $\sigma \left( t\right) \in \left\lbrack {0,\sigma }\right\rbrack$ is the bounded delay, $C$, $A = {\left( {a}_{ij}\right) }_{n \times n}$, and $B = {\left( {b}_{ij}\right) }_{n \times n} \in {\mathbb{R}}^{n \times n}$ are known constant matrices, and the nonlinear functions $h\left( \cdot \right) , g\left( \cdot \right) : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{n}$ are continuous except on a series of smooth hypersurface domains [7]. Given an initial value for system (1), its trajectory can establish the desired state, such as an equilibrium point, a chaotic orbit, or a nontrivial periodic orbit.
Due to the discontinuity of $\mathrm{a}\left( \cdot \right)$ with $\mathrm{a} = \{ h, g\}$, classical solutions of DDS (1) do not exist. To further study the dynamical behaviors of DDS (1), this paper utilizes the framework of the Filippov solution, whose definition can be found in [6]-[8]. It is concluded that, for DDS (1), there exists a continuous function $z\left( t\right)$ on $\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack$ that is absolutely continuous on $\left\lbrack {0,\mathfrak{t}}\right\rbrack$ such that

$$
\dot{z}\left( t\right) = \mathbb{F}\left( {z,\gamma ,{\zeta }_{\sigma }}\right) ,\text{ a.a. }t \in \left\lbrack {0,\mathfrak{t}}\right\rbrack , \tag{2}
$$

where $\mathbb{F}\left( {z,\gamma ,{\zeta }_{\sigma }}\right) = {Cz}\left( t\right) + {A\gamma }\left( t\right) + {B\zeta }\left( {t - \sigma \left( t\right) }\right)$, $\gamma \left( t\right) \in \mathrm{F}\{ h\left( {z\left( t\right) }\right) \}$ and $\zeta \left( {t - \sigma \left( t\right) }\right) \in \mathrm{F}\{ g\left( {z\left( {t - \sigma \left( t\right) }\right) }\right) \}$ are measurable functions, and $\mathrm{F}\{ \cdot \}$ is the Filippov set-valued map [22].

For the Cauchy problem of DDS (1) in the sense of Filippov, this implies that there is a triple of functions $\left( {z\left( t\right) ,\gamma \left( t\right) ,\zeta \left( t\right) }\right) : \left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack \rightarrow {\mathbb{R}}^{n} \times {\mathbb{R}}^{n} \times {\mathbb{R}}^{n}$ such that $z\left( t\right)$ is a Filippov solution on $\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack$ with $\mathfrak{t} > 0$ and

$$
\left\{ \begin{array}{l} \dot{z}\left( t\right) = \mathbb{F}\left( {z,\gamma ,{\zeta }_{\sigma }}\right) ,\text{ a.a. }t \in \left\lbrack {0,\mathfrak{t}}\right\rbrack , \\ \gamma \left( s\right) = \zeta \left( s\right) = \mathrm{F}\{ \phi \left( s\right) \} ,\text{ a.a. }s \in \left\lbrack {-\sigma ,0}\right\rbrack , \\ z\left( s\right) = \varphi \left( s\right) ,\forall s \in \left\lbrack {-\sigma ,0}\right\rbrack , \end{array}\right. \tag{3}
$$

where $\varphi \left( t\right)$ is a continuous function on $\left\lbrack {-\sigma ,0}\right\rbrack$ and $\phi \left( t\right)$ is a measurable selection function.
The following lemma provides some mild conditions to ensure the existence of Filippov solutions for DDS (1).

Lemma 1: Suppose that $\mathrm{a}\left( 0\right) = 0$, $\mathrm{a} = \{ h, g\}$, and there exist constants ${d}_{rj}^{\mathrm{a}} \geq 0$ and ${\widehat{d}}_{r}^{\mathrm{a}} \geq 0$ such that, for $\forall \mathbf{x} = \operatorname{cl}{\left( {x}_{i}\right) }_{n},\mathbf{y} = \operatorname{cl}{\left( {y}_{i}\right) }_{n} \in {\mathbb{R}}^{n}$,

$\left( {\mathbf{A}}_{1}\right) : \left| {{\mathrm{a}}_{r}\left( \mathbf{x}\right) - {\mathrm{a}}_{r}\left( \mathbf{y}\right) }\right| \leq \mathop{\sum }\limits_{{j = 1}}^{n}{d}_{rj}^{\mathrm{a}}\left| {{x}_{j} - {y}_{j}}\right| + {\widehat{d}}_{r}^{\mathrm{a}}, r \in {\mathbb{N}}_{1}^{n}$.

Then, there is at least one Filippov solution $z\left( t\right)$ to DDS (1) on $\lbrack 0, + \infty )$.

Proof: The proof is similar to those in [7], [8] with slight changes; that is, the Cauchy problem in (3) is transformed into a fixed-point problem.
|
| 88 |
+
|
| 89 |
+
Denote a map $\mathbb{G}\left( z\right) : \mathcal{C}\left( {\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack ,{\mathbb{R}}^{n}}\right) \rightarrow \mathcal{C}{\left( \left\lbrack -\sigma ,\mathfrak{t}\right\rbrack ,{\mathbb{R}}^{n}\right) }^{1}$ as:
|
| 90 |
+
|
| 91 |
+
$$
|
| 92 |
+
\mathbb{G}\left( z\right) = \begin{cases} {e}^{Ct}z\left( 0\right) + {\int }_{0}^{t}{e}^{C\left( {t - s}\right) } & \lbrack B\mathrm{\;F}\{ g\left( {z\left( {t - \sigma \left( t\right) }\right) }\right) \} \\ + A\mathrm{\;F}\{ h\left( {z\left( t\right) }\right) \} & \mathrm{d}s, t \in \left\lbrack {0,\mathrm{t}}\right\rbrack , t > 0, \\ \varphi \left( s\right) ,\forall s \leq 0. & \end{cases} \tag{4}
|
| 93 |
+
$$
|
| 94 |
+
|
| 95 |
+
It has that $\mathbb{G}\left( z\right)$ is completely continuous and upper semicontinuous with convex closed values. Further, one knows that the solutions of the Cauchy problem of DDS (3) are the fixed points of $\mathbb{G}\left( z\right)$ .
|
| 96 |
+
|
| 97 |
+
By $\left( {\mathbf{A}}_{1}\right)$ , the set $\Omega = \left\{ {z \in \mathcal{C}\left( {\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack ,{\mathbb{R}}^{n}}\right) : {\lambda z} \in \mathbb{G}\left( z\right) ,\lambda > }\right.$ $1\}$ is non-empty. Next, let us prove that the set $\Omega$ is bounded.
|
| 98 |
+
|
| 99 |
+
For $z \in \Omega$ , it holds that ${\lambda z} \in \mathbb{G}\left( z\right)$ for $\lambda > 1$ . So, there are $\gamma \left( t\right) \in \mathrm{F}\{ h\left( {z\left( t\right) }\right) \}$ and $\zeta \left( {t - \sigma \left( t\right) }\right) \in \mathrm{F}\{ g\left( {z\left( {t - \sigma \left( t\right) }\right) }\right) \}$ such that
$$
z\left( t\right) = \frac{1}{\lambda }\left\lbrack {z\left( 0\right) {e}^{Ct} + {\int }_{0}^{t}{e}^{C\left( {t - s}\right) }\mathbb{c}\left( s\right) \mathrm{d}s}\right\rbrack ,\text{ a.a. }t \in \left\lbrack {0,\mathfrak{t}}\right\rbrack , \tag{5}
$$

where $\mathbb{c}\left( s\right) = {A\gamma }\left( s\right) + {B\zeta }\left( {s - \sigma \left( s\right) }\right)$ .
In view of $\left( {\mathbf{A}}_{1}\right)$ , there are constants ${D}_{\mathbf{a}}$ and ${d}_{\mathbf{a}}$ such that

$$
\parallel \mathbb{c}\left( t\right) \parallel \leq {D}_{h}\parallel A\parallel \parallel z\left( t\right) \parallel + {D}_{g}\parallel B\parallel \parallel z\left( {t - \sigma \left( t\right) }\right) \parallel + \mathbb{d}, \tag{6}
$$

where $\mathbb{d} = {d}_{h}\parallel A\parallel + {d}_{g}\parallel B\parallel$ and $\mathbf{a} \in \{ h, g\}$ . Combining (5) and (6), it follows that

$$
\parallel z\left( t\right) \parallel \leq {e}^{\parallel C\parallel t}\left\lbrack {\mathbb{y}\left( t\right) + {D}_{g}\parallel B\parallel {\int }_{0}^{t}{e}^{-\parallel C\parallel s}\parallel z\left( {s - \sigma \left( s\right) }\right) \parallel \mathrm{d}s + {D}_{h}\parallel A\parallel {\int }_{0}^{t}{e}^{-\parallel C\parallel s}\parallel z\left( s\right) \parallel \mathrm{d}s}\right\rbrack ,\text{ a.a. }t \in \left\lbrack {0,\mathfrak{t}}\right\rbrack ,
$$

which implies that

$$
\mathbf{z}\left( t\right) \leq \mathbb{y}\left( t\right) + \mathcal{D}{\int }_{0}^{t}\mathbf{z}\left( s\right) \mathrm{d}s,\;\text{ a.a. }t \in \left\lbrack {0,\mathfrak{t}}\right\rbrack , \tag{7}
$$

where $\mathbf{z}\left( t\right) = {e}^{-\parallel C\parallel t}\mathop{\sup }\limits_{{\theta \in \left\lbrack {-\sigma , t}\right\rbrack }}\parallel z\left( \theta \right) \parallel$ , $\mathcal{D} = {D}_{h}\parallel A\parallel + {D}_{g}\parallel B\parallel$ , and $\mathbb{y}\left( t\right) = \parallel z\left( 0\right) \parallel + \frac{\mathbb{d}}{\parallel C\parallel }\left( {1 - {e}^{-\parallel C\parallel t}}\right)$ .
Note that ${y}_{\max } = \parallel z\left( 0\right) \parallel + \frac{\mathbb{d}}{\parallel C\parallel }$ is an upper bound of $\mathbb{y}\left( t\right)$ on $\lbrack 0, + \infty )$ . Then, from inequality (7) and Gronwall's lemma, one has

$$
{e}^{-\parallel C\parallel t}\parallel z\left( t\right) \parallel \leq \mathbf{z}\left( t\right) \leq {y}_{\max }{e}^{\mathcal{D}t},\text{ a.a. }t \in \left\lbrack {0,\mathfrak{t}}\right\rbrack , \tag{8}
$$

which further means that $\Omega$ is bounded, a.a. $t \in \left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack$ .
---
${}^{1}\mathcal{C}\left( {\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack ,{\mathbb{R}}^{n}}\right)$ is the Banach space of the $n$ -dimensional vector-valued continuous functions defined on $\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack$ with norm defined by $\parallel x{\parallel }_{\infty } =$ $\sup \{ \parallel x\left( t\right) \parallel , t \in \left\lbrack {-\sigma ,\mathrm{t}}\right\rbrack \}$ .
---
From the discussions in [7], it is deduced that $\mathbb{G}\left( z\right)$ has a fixed point for any $\mathfrak{t} > 0$ , which implies that a Filippov solution to DDS (1) can be defined on $\lbrack 0, + \infty )$ .
Remark 1: Delay $\sigma \left( t\right)$ in DDS (1) is merely bounded, which is a milder condition than those in [1], [7], [8]. For instance, the existence of Filippov solutions for DDSs has been discussed in [1], [7], [8] under the condition that the delays are differentiable and their derivatives do not exceed 1. Moreover, the proof of Lemma 1 differs from that in [6]. The technique in [6] for handling time delay involves the inequality $\parallel z\left( {t - \sigma \left( t\right) }\right) \parallel \leq \mathop{\max }\limits_{{1 \leq i < n}}\mathop{\max }\limits_{{-\sigma < s < 0}}\left\{ {{z}_{i}\left( s\right) }\right\} + \parallel z\left( t\right) \parallel$ , which is a difficult condition to verify.
## B. Stability Theorem of DDSs
Next, a lemma that can be used to realize synchronization of CDDSs with intermittent control is provided.
Lemma 2: Given a time sequence ${\left\{ {t}_{\rho }\right\} }_{\rho = 0}^{\infty }$ with ${t}_{0} = 0$ , $\mathop{\lim }\limits_{{\rho \rightarrow + \infty }}{t}_{\rho } = + \infty$ , and $\mathop{\limsup }\limits_{{\rho \rightarrow + \infty }}\frac{{t}_{{2\rho } + 2} - {t}_{{2\rho } + 1}}{{t}_{{2\rho } + 2} - {t}_{2\rho }} = \phi \in \left( {0,1}\right)$ , if there is a continuous and nonnegative function $w\left( t\right)$ with $t \in \lbrack - \sigma , + \infty )$ such that

$$
\left\{ \begin{array}{l} \dot{w}\left( t\right) \leq - {a}_{1}w\left( t\right) + b\bar{w}\left( t\right) - {c}_{1}, t \in {\mathfrak{c}}_{\rho } = \left\lbrack {{t}_{2\rho },{t}_{{2\rho } + 1}}\right) , \\ \dot{w}\left( t\right) \leq {a}_{2}w\left( t\right) + b\bar{w}\left( t\right) + {c}_{2}, t \in {\mathfrak{u}}_{\rho } = \left\lbrack {{t}_{{2\rho } + 1},{t}_{{2\rho } + 2}}\right) , \end{array}\right. \tag{9}
$$

then it holds that $w\left( t\right) < M{e}^{-\widetilde{\lambda }t}$ , $\widetilde{\lambda } = \lambda - \left( {{a}_{1} + {a}_{2}}\right) \phi > 0$ , $t \geq 0$ , where $\rho \in \mathbb{N}$ , $M > 0$ , $\bar{w}\left( t\right) = w\left( {t - \sigma \left( t\right) }\right)$ , $\lambda > 0$ is the unique solution of the transcendental equation ${a}_{1} - \lambda - b{e}^{\lambda \sigma } = 0$ , and the other parameters meet ${a}_{1} > b \geq 0$ , ${c}_{1} = \left( {{a}_{1} - b}\right) d > 0$ , and ${c}_{2} = \left( {{a}_{2} + b}\right) d > 0$ .
Proof: Let $h\left( t\right) = w\left( t\right) + d$ . Then, it holds that $\bar{h}\left( t\right) = \bar{w}\left( t\right) + d$ , $h\left( s\right) = \phi \left( s\right) + d > 0$ for $s \in \left\lbrack {-\sigma ,0}\right\rbrack$ , and

$$
\left\{ \begin{array}{ll} \dot{h}\left( t\right) \leq - {a}_{1}h\left( t\right) + b\bar{h}\left( t\right) , & t \in {\mathfrak{c}}_{\rho }, \\ \dot{h}\left( t\right) \leq {a}_{2}h\left( t\right) + b\bar{h}\left( t\right) , & t \in {\mathfrak{u}}_{\rho }. \end{array}\right. \tag{10}
$$

Following the results of [14], one concludes from the definition of $h\left( t\right)$ and (10) that $w\left( t\right) < h\left( t\right) \leq \mathop{\sup }\limits_{{s \in \left\lbrack {-\sigma ,0}\right\rbrack }}\bar{h}\left( s\right) {e}^{-\widetilde{\lambda }t}$ . By defining $M = \mathop{\sup }\limits_{{s \in \left\lbrack {-\sigma ,0}\right\rbrack }}\bar{h}\left( s\right)$ , the proof is finished.
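As a quick numerical aid, the decay rate $\lambda$ in Lemma 2 can be found by bisection, since $f\left( \lambda \right) = {a}_{1} - \lambda - b{e}^{\lambda \sigma }$ is strictly decreasing with $f\left( 0\right) = {a}_{1} - b > 0$ . The sketch below uses illustrative values ( ${a}_{1} = 2$ , $b = {0.5}$ , $\sigma = 1$ ) that are not taken from the paper.

```python
import math

def solve_decay_rate(a1, b, sigma, tol=1e-12):
    """Bisection for the unique root of f(lam) = a1 - lam - b*exp(lam*sigma).

    f is strictly decreasing with f(0) = a1 - b > 0 and f(a1) < 0,
    so the root lies in (0, a1).
    """
    f = lambda lam: a1 - lam - b * math.exp(lam * sigma)
    lo, hi = 0.0, a1
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative values (not from the paper): a1 = 2, b = 0.5, sigma = 1.
lam = solve_decay_rate(2.0, 0.5, 1.0)
```

The convergence rate $\widetilde{\lambda }$ then follows as $\lambda - \left( {{a}_{1} + {a}_{2}}\right) \phi$ for any admissible control ratio $\phi$ .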
## C. Research Problem
This talk discusses the complete synchronization of coupled networks with $\ell$ DDSs (1) via an event-triggered intermittent controller. The coupled network is modeled as

$$
\left\{ \begin{array}{l} {\dot{x}}_{s}\left( t\right) = F\left( {{x}_{s},{x}_{s,\sigma }}\right) + \mathop{\sum }\limits_{{j = 1}}^{\ell }{u}_{sj}\Phi {x}_{j}\left( t\right) + {r}_{s}\left( t\right) , \\ {x}_{s}\left( o\right) = {\tau }_{s}\left( o\right) \in \mathcal{C}\left( {\left\lbrack {-\sigma ,0}\right\rbrack ,{\mathbb{R}}^{n}}\right) , s \in {\mathbb{N}}_{1}^{\ell }, \end{array}\right. \tag{11}
$$

where ${x}_{s}\left( t\right) ,{r}_{s}\left( t\right) \in {\mathbb{R}}^{n}$ are the state variable and the control input, respectively, the outer-coupling matrix $U = {\left( {u}_{ij}\right) }_{\ell \times \ell }$ satisfies the diffusive condition, and $\Phi$ is the inner-coupling matrix. Similar to (2), CDDS (11) in the sense of Filippov solutions is

$$
{\dot{x}}_{s}\left( t\right) = \mathbb{F}\left( {{x}_{s},{\gamma }_{s},{\zeta }_{s,\sigma }}\right) + \mathop{\sum }\limits_{{j = 1}}^{\ell }{u}_{sj}\Phi {x}_{j}\left( t\right) + {r}_{s}\left( t\right) , \tag{12}
$$

where $\mathbb{F}\left( {{x}_{s},{\gamma }_{s},{\zeta }_{s,\sigma }}\right) = C{x}_{s}\left( t\right) + A{\gamma }_{s}\left( t\right) + B{\zeta }_{s}\left( {t - \sigma \left( t\right) }\right)$ , ${\gamma }_{s}\left( t\right) \in \mathrm{F}\left\{ {h\left( {{x}_{s}\left( t\right) }\right) }\right\}$ and ${\zeta }_{s}\left( {t - \sigma \left( t\right) }\right) \in \mathrm{F}\left\{ {g\left( {{x}_{s}\left( {t - \sigma \left( t\right) }\right) }\right) }\right\}$ .
Definition 1: CDDS (11) is said to be globally exponentially synchronized with DDS (1) if, by designing suitable controllers ${r}_{s}\left( t\right) , s \in {\mathbb{N}}_{1}^{\ell }$ , there exist $M \geq 0$ and $\alpha > 0$ such that $\parallel e\left( t\right) \parallel \leq M{e}^{-{\alpha t}}$ for $t \geq 0$ , where $e\left( t\right) = \operatorname{cl}{\left( {e}_{s}\left( t\right) \right) }_{\ell }$ and ${e}_{s}\left( t\right) = {x}_{s}\left( t\right) - z\left( t\right)$ .
## III. Synchronization of CDDSs
## A. Control Design
According to [8], the control goal presented in Definition 1 is equivalent to the same issue for the Filippov systems (2) and (12). Hence, the subsequent study directly addresses the synchronization of (2) and (12). In this talk, the new event-triggered intermittent control is designed as

$$
{r}_{s}\left( t\right) = \left\{ \begin{array}{l} - {K}_{s}{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) - {\xi }_{s}\operatorname{sg}\left( {{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\right) , \\ \;t \in {\mathfrak{c}}_{\rho } \cap \left\lbrack {{t}_{k}^{s,{2\rho }},{t}_{k + 1}^{s,{2\rho }}}\right) , \\ 0, t \in {\mathfrak{u}}_{\rho }, \end{array}\right. \tag{13}
$$

where ${\xi }_{s} > 0$ and ${K}_{s} \in {\mathbb{R}}^{n \times n}$ are the control gains, ${t}_{k}^{s,{2\rho }}$ is the ${k}^{th}$ control signal update instant of subsystem $s$ , which is determined by the following ETM

$$
{t}_{k + 1}^{s,{2\rho }} = \inf \left\{ {t > {t}_{k}^{s,{2\rho }} : \begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix} - {\kappa }_{s}\begin{Vmatrix}{{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\end{Vmatrix} > 0}\right\} , \tag{14}
$$

where ${t}_{0}^{s,{2\rho }} = {t}_{2\rho },{\theta }_{s}\left( t\right) = {e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) - {e}_{s}\left( t\right)$ is the ME and ${\kappa }_{s} \in \left( {0,1}\right)$ is the threshold value.
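To illustrate how ETM (14) operates, the following sketch replays the rule on a sampled error trajectory: the controller holds the last sampled error ${e}_{s}\left( {t}_{k}^{s,{2\rho }}\right)$ and fires a new event once the ME exceeds the threshold. The trajectory and the threshold value below are illustrative assumptions, not data from the paper.

```python
import numpy as np

def etm_triggers(e_traj, kappa):
    """Replay ETM (14) on a discretized error trajectory.

    An event fires when ||theta_s(t)|| = ||e_s(t_k) - e_s(t)||
    exceeds kappa * ||e_s(t_k)||; the sampled error is then updated.
    """
    held = e_traj[0]          # e_s(t_k): error at the last triggering instant
    triggers = [0]
    for i in range(1, len(e_traj)):
        theta = held - e_traj[i]   # measurement error (ME)
        if np.linalg.norm(theta) > kappa * np.linalg.norm(held):
            held = e_traj[i]
            triggers.append(i)
    return triggers

# Illustrative decaying error trajectory, threshold kappa = 0.2.
traj = [np.array([np.exp(-0.1 * i), 0.0]) for i in range(50)]
events = etm_triggers(traj, 0.2)
```

Because the ME is linear in the states, each step costs only one norm comparison, which is the computational advantage discussed in Remark 2 below.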
Remark 2: The ME ${\theta }_{s}\left( t\right)$ in (14) is linear and demands less computing power than nonlinear ones, such as those in [11], [12], [17], which is further clarified in the numerical example part. In addition, observe that the MEs in [11], [12], [17] are piecewise continuous, which introduces additional challenges in proving the exclusion of Zeno behavior. These challenges do not arise in the case of a linear ME. Hence, event-triggered nonsmooth control with a linear ME is more practical.
Considering system (2) and CDDSs (12) with controller (13), the error system is obtained as

$$
{\dot{e}}_{s}\left( t\right) = {\mathrm{F}}_{s}\left( t\right) , t \in {\mathfrak{c}}_{\rho }, \tag{15a}
$$

$$
{\dot{e}}_{s}\left( t\right) = {\widetilde{\mathrm{F}}}_{s}\left( t\right) , t \in {\mathfrak{u}}_{\rho },\rho \in \mathbb{N}, \tag{15b}
$$

and its compact Kronecker product form is

$$
\dot{\mathbf{e}}\left( t\right) = \mathrm{F}\left( {\mathbf{e},\theta ,\mathbf{r},{\mathbf{c}}_{\sigma }}\right) , t \in {\mathfrak{c}}_{\rho }, \tag{16a}
$$

$$
\dot{\mathbf{e}}\left( t\right) = \widetilde{\mathrm{F}}\left( {\mathbf{e},\theta ,\mathbf{r},{\mathbf{c}}_{\sigma }}\right) , t \in {\mathfrak{u}}_{\rho },\rho \in \mathbb{N}, \tag{16b}
$$

where ${\mathrm{F}}_{s}\left( t\right) = {\widetilde{\mathrm{F}}}_{s}\left( t\right) - {\xi }_{s}\operatorname{sg}\left( {{e}_{s}\left( t\right) + {\theta }_{s}\left( t\right) }\right) - {K}_{s}\left( {{e}_{s}\left( t\right) + {\theta }_{s}\left( t\right) }\right)$ , ${\widetilde{\mathrm{F}}}_{s}\left( t\right) = C{e}_{s}\left( t\right) + A{\mathrm{r}}_{s}\left( t\right) + B{\mathrm{c}}_{s}\left( {t - \sigma \left( t\right) }\right) + \mathop{\sum }\limits_{{j = 1}}^{\ell }{u}_{sj}\Phi {e}_{j}\left( t\right)$ , $\mathrm{F}\left( {\mathbf{e},\theta ,\mathbf{r},{\mathbf{c}}_{\sigma }}\right) = \widetilde{\mathrm{F}}\left( {\mathbf{e},\theta ,\mathbf{r},{\mathbf{c}}_{\sigma }}\right) - \mathcal{K}\left( {\mathbf{e}\left( t\right) + \theta \left( t\right) }\right) - \xi \operatorname{sg}\left( {\mathbf{e}\left( t\right) + \theta \left( t\right) }\right)$ , $\widetilde{\mathrm{F}}\left( {\mathbf{e},\theta ,\mathbf{r},{\mathbf{c}}_{\sigma }}\right) = \left( {\mathcal{C} + \mathcal{U}}\right) \mathbf{e}\left( t\right) + \mathcal{A}\mathbf{r}\left( t\right) + \mathcal{B}\mathbf{c}\left( {t - \sigma \left( t\right) }\right)$ , $\theta \left( t\right) = \operatorname{cl}{\left( {\theta }_{s}\left( t\right) \right) }_{\ell }$ , $\mathbf{r}\left( t\right) = \operatorname{cl}{\left( {\mathrm{r}}_{s}\left( t\right) \right) }_{\ell }$ , ${\mathrm{r}}_{s}\left( t\right) = {\gamma }_{s}\left( t\right) - \gamma \left( t\right)$ , $\operatorname{sg}\left( {\mathbf{e}\left( t\right) + \theta \left( t\right) }\right) = \operatorname{cl}{\left( \operatorname{sg}\left( {e}_{s}\left( t\right) + {\theta }_{s}\left( t\right) \right) \right) }_{\ell }$ , $\mathbf{c}\left( {t - \sigma \left( t\right) }\right) = \operatorname{cl}{\left( {\mathbf{c}}_{s}\left( t - \sigma \left( t\right) \right) \right) }_{\ell }$ , ${\mathbf{c}}_{s}\left( {t - \sigma \left( t\right) }\right) = {\zeta }_{s}\left( {t - \sigma \left( t\right) }\right) - \zeta \left( {t - \sigma \left( t\right) }\right)$ , $\mathcal{X} = {I}_{\ell } \otimes X, X \in \{ A, B, C\}$ , $\mathcal{U} = U \otimes \Phi$ , $\mathcal{K} = \operatorname{dg}{\left( {K}_{s}\right) }_{\ell }$ , and $\xi = \operatorname{dg}{\left( {\xi }_{s}{I}_{n}\right) }_{\ell }$ .
## B. Synchronization Analysis
The synchronization criteria are given below.
Theorem 1: Assume that $\left( {\mathbf{A}}_{1}\right)$ holds. For given $\phi ,{\kappa }_{s} \in \left( {0,1}\right)$ , ${a}_{1} > b = \begin{Vmatrix}{\mathcal{B}}_{D}^{g}\end{Vmatrix}$ , and ${a}_{1} + {a}_{2} > 0$ , if there are matrices $\mathcal{K} = \operatorname{dg}{\left( {K}_{s}\right) }_{\ell } \in {\mathbb{R}}^{\ell n \times \ell n}$ and $\Psi = \operatorname{dg}{\left( {\Psi }_{s}\right) }_{\ell } \in {\mathbb{D}}_{ + }^{\ell n \times \ell n}$ such that $\eta = \frac{{a}_{1} - b}{{a}_{2} + b}v > 0$ , ${\zeta }_{s} = \frac{1 + {\widetilde{\kappa }}_{s}}{1 - {\widetilde{\kappa }}_{s}}\eta$ , ${\xi }_{s} = \frac{1 + {\widetilde{\kappa }}_{s}}{1 - {\widetilde{\kappa }}_{s}}v + {\zeta }_{s}$ , $s \in {\mathbb{N}}_{1}^{\ell }$ ,

$$
{\Omega }_{1} = \left( \begin{matrix} \operatorname{He}\left\lbrack {{\mathbb{A}}_{1} + {\mathcal{A}}_{D}^{h}}\right\rbrack + \widetilde{\Psi } & - \mathcal{K} \\ * & - \Psi \end{matrix}\right) < 0, \tag{17}
$$

$$
{\Omega }_{2} = \operatorname{He}\left\lbrack {{\mathbb{A}}_{2} + {\mathcal{A}}_{D}^{h}}\right\rbrack < 0, \tag{18}
$$

then CDDS (11) with controller (13) is globally exponentially synchronized onto DDS (1), i.e., $\parallel e\left( t\right) \parallel \leq M{e}^{-\widetilde{c}t},\widetilde{c} = c -$ $\left( {{a}_{1} + {a}_{2}}\right) \phi > 0$ , where $c$ is the solution of ${a}_{1} - c - b{e}^{c\sigma } = 0,\phi$ is defined in Lemma 2, $M = \mathop{\sup }\limits_{{s \in \left\lbrack {-\sigma ,0}\right\rbrack }}\parallel \mathbf{e}\left( s\right) \parallel + \frac{v}{{a}_{2} + b},{\mathbb{A}}_{1} =$ $\mathcal{C} - \mathcal{K} + \mathcal{U} + {a}_{1}{I}_{\ell n},{\mathbb{A}}_{2} = \mathcal{C} + \mathcal{U} - {a}_{2}{I}_{\ell n},\widetilde{\Psi } = \operatorname{dg}{\left( {\widetilde{\kappa }}_{s}^{2}{\Psi }_{s}\right) }_{\ell },{\mathcal{A}}_{D}^{h} =$ ${I}_{\ell } \otimes {\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {a}_{ir}\right| {d}_{rj}^{h}\right) }_{n \times n},{\mathcal{B}}_{D}^{g} = {I}_{\ell } \otimes {\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {b}_{ir}\right| {d}_{rj}^{g}\right) }_{n \times n}$ , ${\mathbf{a}}_{h} = {\ell }^{\frac{1}{2}}\parallel \mathrm{{cl}}{\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {a}_{ir}\right| {\widehat{d}}_{r}^{h}\right) }_{n}\parallel ,{\mathbf{b}}_{g} = {\ell }^{\frac{1}{2}}\parallel \mathrm{{cl}}{\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {b}_{ir}\right| {\widehat{d}}_{r}^{g}\right) }_{n}\parallel ,$ ${\widetilde{\kappa }}_{s} = \frac{{\kappa }_{s}}{1 - {\kappa }_{s}}$ , and $v = {\mathbf{a}}_{h} + {\mathbf{b}}_{g}$ .
Proof: Consider the Lyapunov function $V\left( t\right) = \parallel \mathbf{e}\left( t\right) \parallel$ .
For $t \in {\mathfrak{c}}_{\rho },\rho \in \mathbb{N}$ , one derives from (16a) that

$$
{\mathcal{D}}^{ + }\left\lbrack {V\left( t\right) }\right\rbrack = \frac{2{\mathbf{e}}^{\top }\left( t\right) \mathrm{F}\left( {\mathbf{e},\theta ,\mathbf{r},{\mathbf{c}}_{\sigma }}\right) }{{2V}\left( t\right) }. \tag{19}
$$

It follows from $\left( {\mathbf{A}}_{1}\right)$ and the Cauchy-Schwarz inequality that

$$
{\mathbf{e}}^{\top }\left( t\right) \mathcal{A}\mathbf{r}\left( t\right) \leq {\mathbf{e}}^{\top }\left( t\right) {\mathcal{A}}_{D}^{h}\mathbf{e}\left( t\right) + {\mathbf{a}}_{h}\parallel \mathbf{e}\left( t\right) \parallel , \tag{20}
$$

$$
{\mathbf{e}}^{\top }\left( t\right) \mathcal{B}\mathbf{c}\left( {t - \sigma \left( t\right) }\right) \leq \left( {b\parallel \mathbf{e}\left( {t - \sigma \left( t\right) }\right) \parallel + {\mathbf{b}}_{g}}\right) \parallel \mathbf{e}\left( t\right) \parallel . \tag{21}
$$

The ETM (14) means $\begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix} \leq {\widetilde{\kappa }}_{s}\begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix}$ and

$$
{\theta }^{\top }\left( t\right) {\Psi \theta }\left( t\right) \leq {\mathbf{e}}^{\top }\left( t\right) \widetilde{\Psi }\mathbf{e}\left( t\right) . \tag{22}
$$

Moreover, one has from $\begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix} \leq {\widetilde{\kappa }}_{s}\begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix}$ that

$$
{\mathbf{e}}^{\top }\left( t\right) \xi \operatorname{sg}\left( {\mathbf{e}\left( t\right) + \theta \left( t\right) }\right) \geq \mathop{\sum }\limits_{{s = 1}}^{\ell }\frac{{\xi }_{s}\begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix}\left( {\begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix} - \begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix}}\right) }{\begin{Vmatrix}{{e}_{s}\left( t\right) + {\theta }_{s}\left( t\right) }\end{Vmatrix}} \geq \mathop{\sum }\limits_{{s = 1}}^{\ell }\frac{{\xi }_{s}\left( {1 - {\widetilde{\kappa }}_{s}}\right) {\begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix}}^{2}}{\left( {1 + {\widetilde{\kappa }}_{s}}\right) \begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix}} \geq \left( {v + \eta }\right) \parallel \mathbf{e}\left( t\right) \parallel . \tag{23}
$$

Substituting inequalities (20)-(23) into (19) yields

$$
{\mathcal{D}}^{ + }\left\lbrack {V\left( t\right) }\right\rbrack \leq \frac{{\varepsilon }^{\top }\left( t\right) {\Omega }_{1}\varepsilon \left( t\right) + {2bV}\left( t\right) V\left( {t - \sigma \left( t\right) }\right) }{{2V}\left( t\right) } - {a}_{1}V\left( t\right) - \eta , \tag{24}
$$

where $\varepsilon \left( t\right) = {\left( {e}^{\top }\left( t\right) ,{\theta }^{\top }\left( t\right) \right) }^{\top }$ . Then, condition (17) and inequality (24) ensure that

$$
{\mathcal{D}}^{ + }\left\lbrack {V\left( t\right) }\right\rbrack \leq - {a}_{1}V\left( t\right) + {bV}\left( {t - \sigma \left( t\right) }\right) - \eta . \tag{25}
$$

Similarly, for $t \in {\mathfrak{u}}_{\rho },\rho \in \mathbb{N}$ , it has from (16b) and (18) that

$$
{\mathcal{D}}^{ + }\left\lbrack {V\left( t\right) }\right\rbrack \leq {a}_{2}V\left( t\right) + {bV}\left( {t - \sigma \left( t\right) }\right) + v. \tag{26}
$$

Then, from Lemma 2 and inequalities (25)-(26), the result of Theorem 1 can be obtained.
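Once candidate gains are fixed, conditions (17) and (18) reduce to verifying that a symmetric matrix is negative definite, which is straightforward to check numerically. A minimal helper, applied to a small illustrative matrix rather than the actual ${\Omega }_{1}$ , ${\Omega }_{2}$ of the example:

```python
import numpy as np

def is_negative_definite(M, tol=1e-9):
    """Check M < 0 via the eigenvalues of the symmetric part (M + M^T)/2."""
    S = 0.5 * (M + M.T)
    return bool(np.max(np.linalg.eigvalsh(S)) < -tol)

# Illustrative 2x2 check (not the Omega_1 of the numerical example).
Omega = np.array([[-3.0, 0.5],
                  [0.5, -2.0]])
feasible = is_negative_definite(Omega)
```

For a full gain search one would instead treat (17)-(18) as LMIs in $\mathcal{K}$ and $\Psi$ and hand them to a semidefinite solver.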
Remark 3: Based on the novel nonsmooth event-triggered intermittent control (13) and Lemma 2, Theorem 1 presents complete synchronization criteria for CDDS (11). The result is quite general, since Theorem 1 allows the derivative of $\sigma \left( t\right)$ to be less than, equal to, or greater than 1, and even allows $\sigma \left( t\right)$ to be nondifferentiable. In particular, when the derivative of the delay $\sigma \left( t\right)$ exceeds 1 or $\sigma \left( t\right)$ is nondifferentiable, the Lyapunov-Krasovskii functional methods show limitations in achieving complete synchronization. The main reason is that many techniques for handling time delay in the Lyapunov-Krasovskii framework depend on linear controls only, which cannot achieve the complete synchronization of CDDS (11). Hence, a new analysis framework for studying the complete synchronization of CDDSs with intermittent control is proposed.
Next, let us discuss the Zeno behavior of ETM (14).
Theorem 2: Under the assumption and conditions of Theorem 1, the triggering instants generated by ETM (14) rule out Zeno behavior.
Proof: For any $s \in {\mathbb{N}}_{1}^{\ell }$ and $t \in {\mathfrak{c}}_{\rho } \cap \left\lbrack {{t}_{k}^{s,{2\rho }},{t}_{k + 1}^{s,{2\rho }}}\right)$ , it holds that

$$
{\mathcal{D}}^{ + }\left\lbrack \begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix}\right\rbrack \leq \begin{Vmatrix}{{\mathcal{D}}^{ + }\left\lbrack {{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) - {e}_{s}\left( t\right) }\right\rbrack }\end{Vmatrix} = \begin{Vmatrix}{{\dot{e}}_{s}\left( t\right) }\end{Vmatrix}. \tag{27}
$$

In view of Theorem 1, there is a ${\mathrm{u}}_{s} > 0$ such that $\begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix} \leq {\mathrm{u}}_{s}$ . Then, one obtains from error system (15a) and $\left( {\mathbf{A}}_{1}\right)$ that

$$
\begin{Vmatrix}{{\dot{e}}_{s}\left( t\right) }\end{Vmatrix} \leq {\vartheta }_{s} + \begin{Vmatrix}{K}_{s}\end{Vmatrix}\begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix}, \tag{28}
$$

where ${\vartheta }_{s} = \left( {\begin{Vmatrix}{C - {K}_{s}}\end{Vmatrix} + \begin{Vmatrix}{A}_{D}^{h}\end{Vmatrix} + \begin{Vmatrix}{B}_{D}^{g}\end{Vmatrix}}\right) {\mathrm{u}}_{s} + v + {\xi }_{s} + 2\left| {u}_{ss}\right| \parallel \Phi \parallel \mathop{\sum }\limits_{{j = 1}}^{\ell }{\mathrm{u}}_{j}$ , ${A}_{D}^{h} = {\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {a}_{ir}\right| {d}_{rj}^{h}\right) }_{n \times n}$ , and ${B}_{D}^{g} = {\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {b}_{ir}\right| {d}_{rj}^{g}\right) }_{n \times n}$ .
One has from inequalities (27)-(28) and $\begin{Vmatrix}{{\theta }_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\end{Vmatrix} = 0$ that $\begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix} \leq \frac{{\vartheta }_{s}}{\begin{Vmatrix}{K}_{s}\end{Vmatrix}}\left( {{e}^{\begin{Vmatrix}{K}_{s}\end{Vmatrix}\left( {t - {t}_{k}^{s,{2\rho }}}\right) } - 1}\right)$ , that is, $t - {t}_{k}^{s,{2\rho }} \geq \frac{1}{\begin{Vmatrix}{K}_{s}\end{Vmatrix}}\ln \left( {\frac{\begin{Vmatrix}{K}_{s}\end{Vmatrix}}{{\vartheta }_{s}}\begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix} + 1}\right)$ . Note that the next event is not triggered until $\begin{Vmatrix}{{\theta }_{s}\left( {t}_{k + 1}^{s,{2\rho } - }\right) }\end{Vmatrix} = {\kappa }_{s}\begin{Vmatrix}{{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\end{Vmatrix}$ . Hence, the inequality above implies that ${t}_{k + 1}^{s,{2\rho }} - {t}_{k}^{s,{2\rho }} \geq \frac{1}{\begin{Vmatrix}{K}_{s}\end{Vmatrix}}\ln \left( {\frac{\begin{Vmatrix}{K}_{s}\end{Vmatrix}{\kappa }_{s}}{{\vartheta }_{s}}\begin{Vmatrix}{{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\end{Vmatrix} + 1}\right) > 0$ .
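The lower bound at the end of the proof is easy to evaluate numerically; a small sketch with illustrative values for $\begin{Vmatrix}{K}_{s}\end{Vmatrix}$ , ${\kappa }_{s}$ , $\begin{Vmatrix}{{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\end{Vmatrix}$ , and ${\vartheta }_{s}$ (none taken from the paper):

```python
import math

def min_interevent_time(K_norm, kappa, e_norm, vartheta):
    """Zeno-exclusion bound: (1/||K_s||) * ln(||K_s|| * kappa_s * ||e_s(t_k)|| / vartheta_s + 1)."""
    return math.log(K_norm * kappa * e_norm / vartheta + 1.0) / K_norm

# Illustrative values, not from the numerical example.
tau_min = min_interevent_time(K_norm=10.0, kappa=0.15, e_norm=1.0, vartheta=5.0)
```

As expected from the formula, the bound grows when the drift constant ${\vartheta }_{s}$ shrinks or the threshold ${\kappa }_{s}$ grows.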
## IV. NUMERICAL EXAMPLE
This section utilizes the Hopfield neural network (HNN) with discontinuous activation functions to verify the effectiveness of our results. The circuit diagram of the HNN is shown in Fig. 1(a) with detailed explanations provided in [23]. By applying Kirchhoff's laws, the HNN can be represented as a DDS (1). Next, the parameters of the HNN, in the form of those in DDS (1), are selected for numerical simulation.
Consider an HNN, in the form of DDS (1), with $z\left( t\right) = {\left( {z}_{1}\left( t\right) ,{z}_{2}\left( t\right) \right) }^{\top }$ , $g\left( z\right) = {\left( {g}_{1}\left( {z}_{1}\right) ,{g}_{2}\left( {z}_{2}\right) \right) }^{\top }$ , $h\left( z\right) = {\left( {h}_{1}\left( {z}_{1}\right) ,{h}_{2}\left( {z}_{2}\right) \right) }^{\top }$ , $\sigma \left( t\right) = {0.65} + {0.35}\left| {\sin \left( t\right) }\right|$ , $C = \operatorname{dg}\left( {-{1.5}, - 1}\right)$ , $i = 1,2$ ,

$$
A = \left( \begin{matrix} 2 & - {0.1} \\ - {4.9} & 3 \end{matrix}\right) ,{g}_{i}\left( {z}_{i}\right) = \left\{ \begin{array}{l} \frac{\left| {{z}_{i} + 1}\right| - \left| {{z}_{i} - 1}\right| }{2} + {0.04},{z}_{i} > 0, \\ \frac{\left| {{z}_{i} + 1}\right| - \left| {{z}_{i} - 1}\right| }{2} - {0.01},{z}_{i} < 0, \end{array}\right.
$$

$$
B = \left( \begin{matrix} - {1.5} & {0.1} \\ - {0.5} & - {0.5} \end{matrix}\right) ,{h}_{i}\left( {z}_{i}\right) = \left\{ \begin{array}{l} \tanh \left( {z}_{i}\right) + {0.01},{z}_{i} > 0, \\ \tanh \left( {z}_{i}\right) - {0.02},{z}_{i} < 0. \end{array}\right.
$$

It holds that $\mathbf{a}\left( \cdot \right)$ , $\mathbf{a} \in \{ h, g\}$ , meet $\left( {\mathbf{A}}_{1}\right)$ with ${d}_{11}^{\mathbf{a}} = {d}_{22}^{\mathbf{a}} = 1$ , ${d}_{12}^{\mathbf{a}} = {d}_{21}^{\mathbf{a}} = 0$ , ${\widehat{d}}_{1}^{h} = {\widehat{d}}_{2}^{h} = {0.03}$ , and ${\widehat{d}}_{1}^{g} = {\widehat{d}}_{2}^{g} = {0.05}$ .
Fig. 1: (a) Circuit diagram of the HNN and coupling topology; (b) Trajectories of DDS (1) and CDDS (11) without controller.
Now, consider that the coupled system (11) is composed of three DDSs (1), where $\Phi = \operatorname{dg}\left( {2,1}\right)$ and $U = {\left( {u}_{ij}\right) }_{3 \times 3}$ is the Laplacian matrix of the digraph shown in Fig. 1(a). When the initial values of DDS (1) and CDDS (11) are randomly chosen on $\left\lbrack {-5,5}\right\rbrack$ for $t \in \left\lbrack {-1,0}\right\rbrack$ , their trajectories are given in Fig. 1(b), from which one can see that synchronization cannot be realized without control.
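A forward-Euler sketch of the isolated DDS (1) with the parameters above can reproduce the kind of uncontrolled trajectories shown in Fig. 1(b). The step size, horizon, and initial value below are illustrative choices; the delayed state is read from the stored history, with the constant initial function used before $t = 0$ :

```python
import numpy as np

# HNN parameters from the example.
C = np.diag([-1.5, -1.0])
A = np.array([[2.0, -0.1], [-4.9, 3.0]])
B = np.array([[-1.5, 0.1], [-0.5, -0.5]])

def h(z):
    # Discontinuous activation; the value used at z = 0 is one admissible selection.
    return np.tanh(z) + np.where(z > 0, 0.01, -0.02)

def g(z):
    core = 0.5 * (np.abs(z + 1.0) - np.abs(z - 1.0))
    return core + np.where(z > 0, 0.04, -0.01)

def sigma(t):
    return 0.65 + 0.35 * abs(np.sin(t))

def simulate(z0, T=5.0, dt=1e-3):
    """Forward-Euler sketch of dz/dt = C z + A h(z) + B g(z(t - sigma(t)))."""
    hist = [np.asarray(z0, dtype=float)]
    for k in range(int(T / dt)):
        t = k * dt
        j = max(0, k - int(sigma(t) / dt))  # history index of z(t - sigma(t));
        z, z_del = hist[-1], hist[j]        # constant initial function before t = 0
        hist.append(z + dt * (C @ z + A @ h(z) + B @ g(z_del)))
    return np.array(hist)

traj = simulate([0.5, -0.5], T=1.0)
```

Running three copies of this integrator with the diffusive coupling term added gives the uncontrolled CDDS trajectories of Fig. 1(b).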
By taking ${a}_{1} = {4.6}$ , ${a}_{2} = {3.88}$ , ${\kappa }_{1} = {0.12}$ , ${\kappa }_{2} = {0.17}$ , and ${\kappa }_{3} = {0.15}$ , one obtains $b = {1.603}$ , ${\xi }_{1} = {1.197}$ , ${\xi }_{2} = {1.378}$ , ${\xi }_{3} = {1.299}$ , and $\phi = {0.1002}$ . Solving conditions (17) and (18) gives ${K}_{1} = \left( \begin{matrix} {11.480} & {3.759} \\ {3.759} & {13.908} \end{matrix}\right)$ , ${K}_{2} = \left( \begin{matrix} {11.690} & {3.815} \\ {3.815} & {14.139} \end{matrix}\right)$ , ${K}_{3} = \left( \begin{matrix} {11.744} & {3.854} \\ {3.854} & {14.236} \end{matrix}\right)$ . Hence, the conditions of Theorem 1 hold, that is, CDDS (11) with controller (13) can be synchronized onto DDS (1). Fig. 2(a) shows the evolution of the error trajectories of (11) and (1) when the work intervals of controller (13) are $\lbrack 0,{0.5}) \cup \lbrack {0.5},{0.7}) \cup \lbrack {0.7},{1.6}) \cup \lbrack {1.6},{1.65}) \cup \lbrack {1.65},{2.55}) \cup \lbrack {2.55},{2.68}) \cup \lbrack {2.68},{3.98}) \cup \lbrack {3.98},4)\cdots$ . In addition, the triggering instants and intervals of the three subsystems are displayed in Fig. 2(b). It follows from Fig. 1(b) and Fig. 2 that the designed event-triggered controller (13) is both effective and resource-efficient.
Fig. 2: (a) Error trajectories of DDS (1) and CDDS (11) with controller (13); (b) Triggering instants and intervals.
Comparative Experiment: To support novelty 3), a comparative experiment with the ETMs in [11], [12], [17] is conducted, where average running time (ART) and trigger rate (TR) are the measurement standards. The results are listed in Table I. In the simulation, the time-step size is 0.001, and a total of 12420 control signals are generated on $\left\lbrack {0,{15}}\right\rbrack$ . The experiment code runs on a computer with Windows 10, Intel Core i5-10400, 2.9GHz, and 16GB RAM. It is observed from Table I that ETM (14) not only saves ${52.78}\%$ of the running time but also reduces the trigger frequency.
TABLE I: ${\mathbf{{TR}}}^{1}$ and ${\mathbf{{ART}}}^{2}$ of ETM (14) and [11],[12],[17].
<table><tr><td>Methods</td><td colspan="3">(14)</td><td colspan="3">[11], [12], [17]</td></tr><tr><td>Nodes</td><td>1</td><td>2</td><td>3</td><td>1</td><td>2</td><td>3</td></tr><tr><td>TR (%)</td><td>27.17</td><td>36.43</td><td>31.84</td><td>39.51</td><td>38.93</td><td>38.38</td></tr><tr><td>$\mathbf{{ART}}$ (sec)</td><td colspan="3">0.5214</td><td colspan="3">0.7966</td></tr></table>
${}^{1}$ TR $= \frac{\text{The number of trigger releases}}{\text{Total signals}}$ ; ${}^{2}$ ART is the average obtained from 10 runs of the code.
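For instance, the node-1 TR entry for ETM (14) can be back-computed from the definition under Table I; the release count below is inferred from the reported percentage and is illustrative only:

```python
def trigger_rate(releases, total_signals):
    """TR as defined under Table I: trigger releases / total signals, in percent."""
    return 100.0 * releases / total_signals

# 27.17% of 12420 total signals corresponds to roughly 3375 trigger releases
# (back-computed from the table, not a reported count).
releases_node1 = round(0.2717 * 12420)
tr_node1 = trigger_rate(releases_node1, 12420)
```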
## V. CONCLUSION
This talk has considered the complete synchronization of CDDSs under event-triggered intermittent control. By developing a new stability inequality and a weighted-norm-based Lyapunov function, sufficient synchronization conditions have been derived. Note that the results of this talk impose no restrictions on the derivative of the delay. Moreover, experiments showed that the novel event-triggered control with a linear ME requires less computing power than existing methods.
## REFERENCES

[1] M. Forti, P. Nistri, and D. Papini, "Global exponential stability and global convergence in finite time of delayed neural networks with infinite gain," IEEE Trans. Neural Netw., vol. 16, no. 6, pp. 1449-1463, 2005.

[2] P. Wang, G. Wen, T. Huang, et al., "Consensus of Lur'e multi-agent systems with directed switching topology," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 69, no. 2, pp. 474-478, 2021.

[3] W. Zhu, X. Yu, S. Li, et al., "Finite-time discontinuous control of nonholonomic chained-form systems," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 70, no. 6, pp. 2001-2005, 2023.

[4] Z. Cai, L. Huang, and Z. Wang, "Fixed/preassigned-time stability of time-varying nonlinear system with discontinuity: application to Chua's circuit," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 69, no. 6, pp. 2987-2991, 2022.

[5] Z. Zhang, H. Chen, and H. Zhu, "Generalized Halanay inequality and its application to delay differential inclusions," Automatica, vol. 166, p. 111704, 2024.

[6] X. Liu, T. Chen, J. Cao, et al., "Dissipativity and quasi-synchronization for neural networks with discontinuous activations and parameter mismatches," Neural Netw., vol. 24, no. 10, pp. 1013-1021, 2011.

[7] X. Yang, Z. Yang, and X. Nie, "Exponential synchronization of discontinuous chaotic systems via delayed impulsive control and its application to secure communication," Commun. Nonlinear Sci. Numer. Simul., vol. 19, no. 5, pp. 1529-1543, 2014.

[8] X. Yang, Q. Song, J. Liang, et al., "Finite-time synchronization of coupled discontinuous neural networks with mixed delays and nonidentical perturbations," J. Franklin Inst., vol. 352, no. 10, pp. 4382-4406, 2015.

[9] X. Zhang, P. Niu, X. Hu, et al., "Global quasi-synchronization and global anti-synchronization of delayed neural networks with discontinuous activations via non-fragile control strategy," Neurocomputing, vol. 361, pp. 1-9, 2019.

[10] W. Zhang, X. Yang, C. Xu, et al., "Finite-time synchronization of discontinuous neural networks with delays and mismatched parameters," IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 8, pp. 3761-3771, 2017.

[11] Y. Zhou and Z. Zeng, "Event-triggered finite-time stabilization of fuzzy neural networks with infinite time delays and discontinuous activations," IEEE Trans. Fuzzy Syst., vol. 32, no. 1, pp. 1-11, 2024.

[12] N. Rong and Z. Wang, "Event-based fixed-time control for interconnected systems with discontinuous interactions," IEEE Trans. Syst. Man Cybern.: Syst., vol. 52, no. 8, pp. 4925-4936, 2021.

[13] X. She, L. Wang, and Y. Zhang, "Finite-time stability of genetic regulatory networks with nondifferential delays," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 70, no. 6, pp. 2107-2111, 2023.

[14] X. Liu and T. Chen, "Synchronization of linearly coupled networks with delays via aperiodically intermittent pinning control," IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 10, pp. 2396-2407, 2015.

[15] N. Xavier and B. Bandyopadhyay, "Practical sliding mode using state depended intermittent control," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 68, no. 1, pp. 341-345, 2020.

[16] R. Tang, X. Yang, P. Shi, et al., "Finite-time ${\mathcal{L}}_{2}$ stabilization of uncertain delayed T-S fuzzy systems via intermittent control," IEEE Trans. Fuzzy Syst., vol. 32, no. 1, pp. 116-125, 2024.

[17] Y. Zou, E. Tian, and H. Chen, "Finite-time synchronization of neutral-type coupled systems via event-triggered control with controller failure," IEEE Trans. Control Network Syst., DOI: 10.1109/TCNS.2023.3336594, 2023.

[18] X. Geng, J. Feng, N. Li, et al., "Finite-time stochastic synchronization of multiweighted directed complex networks via intermittent control," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 70, no. 8, pp. 2964-2968, 2023.

[19] C.-H. Yan, B. Liu, P. Xiao, et al., "Stabilization of load frequency control system via event-triggered intermittent control," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 69, no. 12, pp. 4934-4938, 2022.

[20] B. Liu, T. Liu, and P. Xiao, "Dynamic event-triggered intermittent control for stabilization of delayed dynamical systems," Automatica, vol. 149, p. 110847, 2023.

[21] G. Yang, F. Hao, L. Zhang, et al., "Stabilization for positive linear systems: A novel event-triggered mechanism," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 71, no. 3, pp. 1231-1235, 2024.

[22] A. F. Filippov, "Differential equations with discontinuous right-hand side," Matematicheskii sbornik, vol. 93, no. 1, pp. 99-128, 1960.

[23] X. Yang, J. Cao, and J. Qiu, "Pth moment exponential stochastic synchronization of coupled memristor-based neural networks with mixed delays via delayed impulsive control," Neural Netw., vol. 65, pp. 80-91, 2015.
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/AQH0VuK6rp/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
§ SYNCHRONIZATION OF COUPLED DELAYED DISCONTINUOUS SYSTEMS VIA EVENT-TRIGGERED INTERMITTENT CONTROL

${1}^{\text{ st }}$ Rongqiang Tang

College of Electronics and Information Engineering

Sichuan University

Chengdu, Sichuan

tangrongqiang@stu.scu.edu.cn

${2}^{\text{ nd }}$ Xinsong Yang*

College of Electronics and Information Engineering

Sichuan University

Chengdu, Sichuan

xinsongyang@scu.edu.cn

Abstract-This talk focuses on the complete synchronization of coupled delayed discontinuous systems (DDSs). Without constraints on the derivatives of time delays, several new conditions are exploited to guarantee the global existence of Filippov solutions for DDSs. A nonsmooth intermittent control combined with an event-triggering strategy is then designed. The conspicuous feature of this control scheme is that the measurement error in the event-triggering mechanism is formulated as a linear form, which can reduce computation burden compared to classical approaches. To address the challenges posed by Filippov solutions and intermittent control, novel analytical techniques, including an original lemma and a weighted-norm-based Lyapunov function, are developed so that sufficient synchronization conditions for DDSs are obtained. Finally, the effectiveness of the theoretical findings is confirmed by Hopfield neural networks.
Index Terms-Discontinuous systems, event-triggered intermittent control, Filippov solution, synchronization, time delays.
§ I. INTRODUCTION
Coupled discontinuous systems (DSs), modeled by interconnected differential equations with discontinuous right-hand sides, are a special type of complex network. Their applications span various areas of applied science and engineering, such as variable structure systems, neural networks [1], control synthesis [2], etc. Recently, there has been substantial attention on the dynamic behaviors of DSs with or without time delays, covering stability, stabilization, and synchronization [3]-[5].
Considering the discontinuities of the states on the right-hand side of DSs, especially delayed DSs (DDSs), it is paramount to discuss the existence of Filippov solutions. Some limitations on time delays are necessary to ensure the existence of Filippov solutions for DDSs. For example, [1] considered DDSs with constant delays. Liu et al. [6] demanded that the state variables with time delays satisfy $\parallel z\left( {t - \sigma \left( t\right) }\right) \parallel \leq \parallel z\left( t\right) \parallel + \mathop{\max }\limits_{{1 \leq i \leq n}}\mathop{\max }\limits_{{-\sigma \leq s \leq 0}}\left\{ {{z}_{i}\left( s\right) }\right\}$, where $z\left( t\right) \in {\mathbb{R}}^{n}$ is the state variable and $\sigma \left( t\right) \in \left\lbrack {0,\sigma }\right\rbrack$ is the time delay. Yang et al. [7], [8] provided sufficient criteria for the existence of global Filippov solutions for DDSs, based on the condition that the derivatives of the time delays are less than 1. However, in reality, the derivatives of some time delays can exceed or equal 1, and the delays may even be non-differentiable in some cases. A fundamental question arises: what conditions guarantee the existence of Filippov solutions for DDSs when these constraints are removed?
To study the synchronization of coupled DDSs (CDDSs), the basic idea is to transform CDDSs into uncertain systems using Filippov regularization and the measurable selection theorem, and then to address the corresponding issues for the uncertain systems [8]. Quasi-synchronization criteria for CDDSs have been obtained via smooth state feedback control [6], [9]. A nonsmooth control incorporating sign functions was proposed to achieve complete synchronization of CDDSs [7], where the sign function is used to mitigate the effects of uncertainties caused by Filippov solutions. Subsequent results on exponential, finite-time, and fixed-time synchronization of CDDSs have been published in [10]-[13]. However, little work has been done to achieve the complete synchronization of CDDSs via intermittent control. Actually, intermittent control offers better robustness and lower control cost than continuous control, as control signals can be artificially interrupted without affecting the final control purposes [14]-[18]. If intermittent control is adopted for complete synchronization of CDDSs, the main obstacle is that the uncertainties posed by Filippov solutions are difficult to cancel out during the intervals in which the control signals are interrupted. So, how to develop new analytical methods to study the complete synchronization of CDDSs with intermittent control is another motivation.
Event-triggered control has recently sparked increasing interest due to its ability to reduce computational overhead by updating the sampled signal based on a preset supervision mechanism [19]-[21]. To fully leverage the merits of the event-triggered strategy and intermittent control, this paper considers the complete synchronization of general CDDSs via a novel event-triggered intermittent control. The primary contributions of this work are:
1) The existence of Filippov solutions of DDSs is discussed. Different from existing papers [1], [6]-[8], several harsh restrictions on delays are removed.
2) A novel lemma is developed to address the difficulties induced by intermittent control. Then, complete synchronization criteria for CDDSs with intermittent control are obtained for the first time.
This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant Nos. 62373262 and 62303336, and in part by the Central guiding local science and technology development special project of Sichuan, and in part by the Fundamental Research Funds for Central Universities under Grant No. 2022SCU12009, and in part by the Sichuan Province Natural Science Foundation of China (NSFSC) under Grant Nos. 2022NSFSC0541, 2022NSFSC0875, 2023NSFSC1433.(Corresponding Author: Xinsong Yang)
3) A simple robust intermittent control scheme is designed by combining an event-triggered strategy with nonsmooth control. Unlike many event-triggered nonsmooth controls [12], [17], a measurement error (ME) in linear form is adopted in the event-triggering mechanism (ETM), which facilitates easy computation (see Table I).
Notation: Let ${\mathcal{D}}^{ + }\left\lbrack \cdot \right\rbrack$ be the upper right Dini derivative operator, ${\mathbb{N}}_{k}^{j} \triangleq \{ k,k + 1,\ldots ,j\}$ with $k < j \in \mathbb{N}$, and let $\operatorname{dg}\left( \cdot \right)$ denote a block-diagonal matrix. For $a \in {\mathbb{R}}^{n}$, let $\operatorname{cl}{\left( {a}_{i}\right) }_{n} = {\left( {a}_{1},{a}_{2},\ldots ,{a}_{n}\right) }^{\top }$, $\operatorname{dg}{\left( {a}_{i}\right) }_{n} = \operatorname{diag}\left( {{a}_{1},{a}_{2},\ldots ,{a}_{n}}\right)$, and $\operatorname{sg}\left( a\right) = \frac{a}{\parallel a\parallel }$ if $\parallel a\parallel \neq 0$, otherwise $\operatorname{sg}\left( a\right) = 0$. The other notations used in this paper are the same as those in [16].
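For concreteness, the vector notation above can be mirrored in code. This is a sketch of my own; the helper names `cl`, `dg`, `sg` simply follow the paper's symbols:

```python
import numpy as np

def cl(values):
    """cl(a_i)_n: stack a_1, ..., a_n into a column vector."""
    return np.asarray(values, dtype=float).reshape(-1, 1)

def dg(values):
    """dg(a_i)_n: the diagonal matrix diag(a_1, ..., a_n)."""
    return np.diag(np.asarray(values, dtype=float))

def sg(a):
    """sg(a) = a / ||a|| when ||a|| != 0, and 0 otherwise."""
    norm = np.linalg.norm(a)
    return a / norm if norm != 0 else np.zeros_like(a)

print(sg(np.array([3.0, 4.0])))  # the unit vector [0.6 0.8]
```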
§ II. PRELIMINARIES
In this paper, the problem of synchronization and control in an array of coupled DDSs is considered. Before starting, several necessary preliminaries on the solutions of DDSs and a stability theorem are provided.
§ A. FILIPPOV SOLUTION OF DDSS
Consider a DDS as follows:

$$
\dot{z}\left( t\right) = F\left( {z,{z}_{\sigma }}\right) ,z\left( o\right) = \tau \left( o\right) \in \mathcal{C}\left( {\left\lbrack {-\sigma ,0}\right\rbrack ,{\mathbb{R}}^{n}}\right) . \tag{1}
$$

Here $F\left( {z,{z}_{\sigma }}\right) \triangleq {Cz}\left( t\right) + {Ah}\left( {z\left( t\right) }\right) + {Bg}\left( {z\left( {t - \sigma \left( t\right) }\right) }\right)$, $z\left( t\right) \in {\mathbb{R}}^{n}$ denotes the state vector, $\sigma \left( t\right) \in \left\lbrack {0,\sigma }\right\rbrack$ is the bounded delay, $C$, $A = {\left( {a}_{ij}\right) }_{n \times n}$, and $B = {\left( {b}_{ij}\right) }_{n \times n} \in {\mathbb{R}}^{n \times n}$ are known constant matrices, and the nonlinear functions $h\left( \cdot \right) ,g\left( \cdot \right) : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{n}$ are continuous except on a series of smooth hypersurfaces [7]. Choosing an initial value $z\left( o\right)$ for system (1), its trajectory can establish the desired state, such as an equilibrium point, a chaotic orbit, or a nontrivial periodic orbit.
Due to the discontinuity of $\mathbf{a}\left( \cdot \right)$ with $\mathbf{a} = \{ h,g\}$, classical solutions of DDS (1) do not exist. To further study the dynamical behaviors of DDS (1), this paper utilizes the framework of the Filippov solution, whose definition can be found in [6]-[8]. It is concluded that, for DDS (1), there exists a continuous function $z\left( t\right)$ on $\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack$ that is absolutely continuous on $\left\lbrack {0,\mathfrak{t}}\right\rbrack$ such that

$$
\dot{z}\left( t\right) = \mathbb{F}\left( {z,\gamma ,{\zeta }_{\sigma }}\right) ,\text{ a.a. }t \in \left\lbrack {0,\mathfrak{t}}\right\rbrack , \tag{2}
$$

where $\mathbb{F}\left( {z,\gamma ,{\zeta }_{\sigma }}\right) = {Cz}\left( t\right) + {A\gamma }\left( t\right) + {B\zeta }\left( {t - \sigma \left( t\right) }\right) ,\gamma \left( t\right) \in$ $\mathrm{F}\{ h\left( {z\left( t\right) }\right) \}$ and $\zeta \left( {t - \sigma \left( t\right) }\right) \in \mathrm{F}\{ g\left( {z\left( {t - \sigma \left( t\right) }\right) }\right) \}$ are measurable functions, and $\mathrm{F}\{ \cdot \}$ is the Filippov set-valued map [22].
For the Cauchy problem of DDS (1) in the sense of Filippov, this implies that there is a triple of functions $\left( {z\left( t\right) ,\gamma \left( t\right) ,\zeta \left( t\right) }\right) : \left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack \rightarrow {\mathbb{R}}^{n} \times {\mathbb{R}}^{n} \times {\mathbb{R}}^{n}$ such that $z\left( t\right)$ is a Filippov solution on $\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack$ with $\mathfrak{t} > 0$ and

$$
\left\{ \begin{array}{l} \dot{z}\left( t\right) = \mathbb{F}\left( {z,\gamma ,{\zeta }_{\sigma }}\right) ,\text{ a.a. }t \in \left\lbrack {0,\mathfrak{t}}\right\rbrack , \\ \gamma \left( s\right) = \zeta \left( s\right) = \mathrm{F}\{ \phi \left( s\right) \} ,\text{ a.a. }s \in \left\lbrack {-\sigma ,0}\right\rbrack , \\ z\left( s\right) = \varphi \left( s\right) ,\forall s \in \left\lbrack {-\sigma ,0}\right\rbrack , \end{array}\right. \tag{3}
$$

where $\varphi \left( t\right)$ is a continuous function on $\left\lbrack {-\sigma ,0}\right\rbrack$ and $\phi \left( t\right)$ is a measurable selection function.
The following lemma provides some mild conditions to ensure the existence of Filippov solutions for DDS (1).
Lemma 1: Suppose that $\mathbf{a}\left( 0\right) = 0$, $\mathbf{a} = \{ h,g\}$, and there exist constants ${d}_{rj}^{\mathbf{a}} \geq 0$ and ${\widehat{d}}_{r}^{\mathbf{a}} \geq 0$ such that, for all $\mathbf{x} = \operatorname{cl}{\left( {x}_{i}\right) }_{n},\mathbf{y} = \operatorname{cl}{\left( {y}_{i}\right) }_{n} \in {\mathbb{R}}^{n}$,
$\left( {\mathbf{A}}_{1}\right) : \left| {{\mathbf{a}}_{r}\left( \mathbf{x}\right) - {\mathbf{a}}_{r}\left( \mathbf{y}\right) }\right| \leq \mathop{\sum }\limits_{{j = 1}}^{n}{d}_{rj}^{\mathbf{a}}\left| {{x}_{j} - {y}_{j}}\right| + {\widehat{d}}_{r}^{\mathbf{a}},r \in {\mathbb{N}}_{1}^{n}$ . Then, there is at least one Filippov solution $z\left( t\right)$ to DDS (1) on $\lbrack 0, + \infty )$ .
Proof: The proof is similar to those in [7], [8] with slight changes; that is, the Cauchy problem in (3) is transformed into a fixed point problem.
Denote a map $\mathbb{G}\left( z\right) : \mathcal{C}\left( {\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack ,{\mathbb{R}}^{n}}\right) \rightarrow \mathcal{C}{\left( \left\lbrack -\sigma ,\mathfrak{t}\right\rbrack ,{\mathbb{R}}^{n}\right) }^{1}$ as:

$$
\mathbb{G}\left( z\right) = \begin{cases} {e}^{Ct}z\left( 0\right) + {\int }_{0}^{t}{e}^{C\left( {t - s}\right) }\left\lbrack {A\mathrm{\;F}\{ h\left( {z\left( s\right) }\right) \} + B\mathrm{\;F}\{ g\left( {z\left( {s - \sigma \left( s\right) }\right) }\right) \} }\right\rbrack \mathrm{d}s, & t \in \left( {0,\mathfrak{t}}\right\rbrack , \\ \varphi \left( s\right) , & \forall s \leq 0. \end{cases} \tag{4}
$$

One has that $\mathbb{G}\left( z\right)$ is completely continuous and upper semicontinuous with convex closed values. Further, the solutions of the Cauchy problem (3) of DDS (1) are exactly the fixed points of $\mathbb{G}\left( z\right)$ .
By $\left( {\mathbf{A}}_{1}\right)$ , the set $\Omega = \left\{ {z \in \mathcal{C}\left( {\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack ,{\mathbb{R}}^{n}}\right) : {\lambda z} \in \mathbb{G}\left( z\right) ,\lambda > 1}\right\}$ is non-empty. Next, let us prove that the set $\Omega$ is bounded.
For $z \in \Omega$ , it holds that ${\lambda z} \in \mathbb{G}\left( z\right)$ for $\lambda > 1$ . So, there are $\gamma \left( t\right) \in \mathrm{F}\{ h\left( {z\left( t\right) }\right) \}$ and $\zeta \left( {t - \sigma \left( t\right) }\right) \in \mathrm{F}\{ g\left( {z\left( {t - \sigma \left( t\right) }\right) }\right) \}$ such that

$$
z\left( t\right) = \frac{1}{\lambda }\left\lbrack {z\left( 0\right) {e}^{Ct} + {\int }_{0}^{t}{e}^{C\left( {t - s}\right) }\mathbb{c}\left( s\right) \mathrm{d}s}\right\rbrack ,\text{ a.a. }t \in \left\lbrack {0,\mathfrak{t}}\right\rbrack , \tag{5}
$$

where $\mathbb{c}\left( s\right) = {A\gamma }\left( s\right) + {B\zeta }\left( {s - \sigma \left( s\right) }\right)$ .
In view of $\left( {\mathbf{A}}_{1}\right)$ , there are constants ${D}_{\mathbf{a}}$ and ${d}_{\mathbf{a}}$ such that

$$
\parallel \mathbb{c}\left( t\right) \parallel \leq {D}_{h}\parallel A\parallel \parallel z\left( t\right) \parallel + {D}_{g}\parallel B\parallel \parallel z\left( {t - \sigma \left( t\right) }\right) \parallel + \mathbb{d}, \tag{6}
$$

where $\mathbb{d} = {d}_{h}\parallel A\parallel + {d}_{g}\parallel B\parallel$ and $\mathbf{a} \in \{ h,g\}$ . Considering inequalities (5) and (6), it follows that

$$
\parallel z\left( t\right) \parallel \leq {e}^{\parallel C\parallel t}\left\lbrack {\mathbb{y}\left( t\right) + {D}_{g}\parallel B\parallel {\int }_{0}^{t}{e}^{-\parallel C\parallel s}\parallel z\left( {s - \sigma \left( s\right) }\right) \parallel \mathrm{d}s + {D}_{h}\parallel A\parallel {\int }_{0}^{t}{e}^{-\parallel C\parallel s}\parallel z\left( s\right) \parallel \mathrm{d}s}\right\rbrack ,\text{ a.a. }t \in \left\lbrack {0,\mathfrak{t}}\right\rbrack ,
$$

which implies that

$$
\mathbf{z}\left( t\right) \leq \mathbb{y}\left( t\right) + \mathcal{D}{\int }_{0}^{t}\mathbf{z}\left( s\right) \mathrm{d}s,\;\text{ a.a. }t \in \left\lbrack {0,\mathfrak{t}}\right\rbrack , \tag{7}
$$

where $\mathbf{z}\left( t\right) = {e}^{-\parallel C\parallel t}\mathop{\sup }\limits_{{\theta \in \left\lbrack {-\sigma ,t}\right\rbrack }}\parallel z\left( \theta \right) \parallel$ , $\mathcal{D} = {D}_{h}\parallel A\parallel + {D}_{g}\parallel B\parallel$ , and $\mathbb{y}\left( t\right) = \parallel z\left( 0\right) \parallel + \frac{\mathbb{d}}{\parallel C\parallel }\left( {1 - {e}^{-\parallel C\parallel t}}\right)$ .
Note that ${y}_{\max } = \parallel z\left( 0\right) \parallel + \frac{\mathbb{d}}{\parallel C\parallel }$ is an upper bound of $\mathbb{y}\left( t\right)$ on $\lbrack 0, + \infty )$ . Then, from inequality (7) and Gronwall's lemma, one has

$$
{e}^{-\parallel C\parallel t}\parallel z\left( t\right) \parallel \leq \mathbf{z}\left( t\right) \leq {y}_{\max }{e}^{\mathcal{D}t},\text{ a.a. }t \in \left\lbrack {0,\mathfrak{t}}\right\rbrack , \tag{8}
$$

which further means that $\Omega$ is bounded for a.a. $t \in \left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack$ .
${}^{1}\mathcal{C}\left( {\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack ,{\mathbb{R}}^{n}}\right)$ is the Banach space of $n$ -dimensional vector-valued continuous functions defined on $\left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack$ with the norm $\parallel x{\parallel }_{\infty } = \sup \{ \parallel x\left( t\right) \parallel ,t \in \left\lbrack {-\sigma ,\mathfrak{t}}\right\rbrack \}$ .
From the discussions in [7], it is deduced that $\mathbb{G}\left( z\right)$ has a fixed point for any $\mathfrak{t} > 0$ , which implies that a Filippov solution to DDS (1) can be defined on $\lbrack 0, + \infty )$ .
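The Gronwall step from (7) to (8) can be sanity-checked numerically. This is a sketch with illustrative constants (`y_max`, `D`, `T` are my choices, not values from the paper):

```python
import math

def gronwall_check(y_max=2.0, D=1.5, T=3.0, n=3000):
    """Iterate the worst case of (7), z(t) = y_max + D * int_0^t z(s) ds,
    on a grid and compare with the Gronwall bound y_max * exp(D * t)."""
    dt = T / n
    integral = 0.0
    z = y_max
    for _ in range(n):
        integral += z * dt          # left Riemann sum of int_0^t z(s) ds
        z = y_max + D * integral
    return z, y_max * math.exp(D * T)

z_T, bound = gronwall_check()
print(z_T <= bound)                 # the discretized solution stays below the bound
```

The discrete recursion satisfies z_{k+1} = z_k (1 + D dt) <= z_k e^{D dt}, so the grid solution never exceeds the exponential bound, mirroring the estimate used above.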
Remark 1: Delay $\sigma \left( t\right)$ in DDS (1) is merely bounded, which is a milder condition than those in [1], [7], [8]. For instance, the existence of Filippov solutions for DDSs has been discussed in [1], [7], [8] under the condition that the delays are differentiable and their derivatives do not exceed 1. Moreover, the proof of Lemma 1 differs from that in [6]. The technique in [6] for handling time delay involves the inequality $\parallel z\left( {t - \sigma \left( t\right) }\right) \parallel \leq \mathop{\max }\limits_{{1 \leq i \leq n}}\mathop{\max }\limits_{{-\sigma \leq s \leq 0}}\left\{ {{z}_{i}\left( s\right) }\right\} + \parallel z\left( t\right) \parallel$ , which is a difficult condition to verify.
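Condition $\left( {\mathbf{A}}_{1}\right)$ is easy to check for typical discontinuous activations. As an illustrative example of my own (not from the paper), $a\left( x\right) = x + {0.5}\operatorname{sign}\left( x\right)$ satisfies $\left| {a\left( x\right) - a\left( y\right) }\right| \leq \left| {x - y}\right| + 1$ , i.e. $\left( {\mathbf{A}}_{1}\right)$ with $d = 1$ and $\widehat{d} = 1$ :

```python
import math
import random

def a(x):
    # illustrative discontinuous activation: x + 0.5 * sign(x), with a(0) = 0
    return x + 0.5 * math.copysign(1.0, x) if x != 0 else 0.0

random.seed(0)
pairs = ((random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(10000))
# check |a(x) - a(y)| <= |x - y| + 1 on random samples
ok = all(abs(a(x) - a(y)) <= abs(x - y) + 1.0 + 1e-12 for x, y in pairs)
print(ok)  # → True
```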
§ B. STABILITY THEOREM OF DDSS
Next, a lemma that can be used to realize synchronization of CDDSs with intermittent control is provided.
Lemma 2: Given a time sequence ${\left\{ {t}_{\rho }\right\} }_{\rho = 0}^{\infty }$ with ${t}_{0} = 0$ , $\mathop{\lim }\limits_{{\rho \rightarrow + \infty }}{t}_{\rho } = + \infty$ , and $\mathop{\limsup }\limits_{{\rho \rightarrow + \infty }}\frac{{t}_{{2\rho } + 2} - {t}_{{2\rho } + 1}}{{t}_{{2\rho } + 2} - {t}_{2\rho }} = \phi \in \left( {0,1}\right)$ , if there is a continuous and nonnegative function $w\left( t\right)$ with $t \in \lbrack - \sigma , + \infty )$ such that

$$
\left\{ \begin{array}{l} \dot{w}\left( t\right) \leq - {a}_{1}w\left( t\right) + b\bar{w}\left( t\right) - {c}_{1},\;t \in {\mathfrak{c}}_{\rho } = \left\lbrack {{t}_{2\rho },{t}_{{2\rho } + 1}}\right) , \\ \dot{w}\left( t\right) \leq {a}_{2}w\left( t\right) + b\bar{w}\left( t\right) + {c}_{2},\;t \in {\mathfrak{u}}_{\rho } = \left\lbrack {{t}_{{2\rho } + 1},{t}_{{2\rho } + 2}}\right) , \end{array}\right. \tag{9}
$$

then it holds that $w\left( t\right) < M{e}^{-\widetilde{\lambda }t}$ , $\widetilde{\lambda } = \lambda - \left( {{a}_{1} + {a}_{2}}\right) \phi > 0$ , $t \geq 0$ , where $\rho \in \mathbb{N}$ , $M > 0$ , $\bar{w}\left( t\right) = w\left( {t - \sigma \left( t\right) }\right)$ , $\lambda > 0$ is the unique solution of the transcendental equation ${a}_{1} - \lambda - b{e}^{\lambda \sigma } = 0$ , and the other parameters satisfy ${a}_{1} > b \geq 0$ , ${c}_{1} = \left( {{a}_{1} - b}\right) d > 0$ , and ${c}_{2} = \left( {{a}_{2} + b}\right) d > 0$ .
Proof: Let $h\left( t\right) = w\left( t\right) + d$ . Then, one has $\bar{h}\left( t\right) = \bar{w}\left( t\right) + d$ , $h\left( s\right) = w\left( s\right) + d > 0$ for $s \in \left\lbrack {-\sigma ,0}\right\rbrack$ , and

$$
\left\{ \begin{array}{ll} \dot{h}\left( t\right) \leq - {a}_{1}h\left( t\right) + b\bar{h}\left( t\right) , & t \in {\mathfrak{c}}_{\rho }, \\ \dot{h}\left( t\right) \leq {a}_{2}h\left( t\right) + b\bar{h}\left( t\right) , & t \in {\mathfrak{u}}_{\rho }. \end{array}\right. \tag{10}
$$

By the results of [14], it follows from the definition of $h\left( t\right)$ and (10) that $w\left( t\right) < h\left( t\right) \leq \mathop{\sup }\limits_{{s \in \left\lbrack {-\sigma ,0}\right\rbrack }}\bar{h}\left( s\right) {e}^{-\widetilde{\lambda }t}$ . Defining $M = \mathop{\sup }\limits_{{s \in \left\lbrack {-\sigma ,0}\right\rbrack }}\bar{h}\left( s\right)$ finishes the proof.
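The decay rate in Lemma 2 is computable by bisection, since $f\left( \lambda \right) = {a}_{1} - \lambda - b{e}^{\lambda \sigma }$ is strictly decreasing with $f\left( 0\right) = {a}_{1} - b > 0$ . The parameter values in this sketch are illustrative, not taken from the paper:

```python
import math

def decay_rate(a1, a2, b, sigma, phi, tol=1e-10):
    """Solve a1 - lam - b*exp(lam*sigma) = 0 by bisection and return
    (lambda, tilde_lambda) with tilde_lambda = lambda - (a1 + a2) * phi."""
    lo, hi = 0.0, a1                      # f(0) > 0 and f(a1) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if a1 - mid - b * math.exp(mid * sigma) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return lam, lam - (a1 + a2) * phi

lam, lam_tilde = decay_rate(a1=3.0, a2=1.0, b=0.5, sigma=0.2, phi=0.3)
print(lam_tilde > 0)                      # exponential decay rate is positive
```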
§ C. RESEARCH PROBLEM
This talk discusses the complete synchronization of coupled networks with $\ell$ DDSs (1) via an event-triggered intermittent controller. The coupled network is modeled as

$$
\left\{ \begin{array}{l} {\dot{x}}_{s}\left( t\right) = F\left( {{x}_{s},{x}_{s,\sigma }}\right) + \mathop{\sum }\limits_{{j = 1}}^{\ell }{u}_{sj}\Phi {x}_{j}\left( t\right) + {r}_{s}\left( t\right) , \\ {x}_{s}\left( o\right) = {\tau }_{s}\left( o\right) \in \mathcal{C}\left( {\left\lbrack {-\sigma ,0}\right\rbrack ,{\mathbb{R}}^{n}}\right) ,s \in {\mathbb{N}}_{1}^{\ell }, \end{array}\right. \tag{11}
$$

where ${x}_{s}\left( t\right) ,{r}_{s}\left( t\right) \in {\mathbb{R}}^{n}$ are the state variable and the control input, respectively, the outer-coupling matrix $U = {\left( {u}_{ij}\right) }_{\ell \times \ell }$ satisfies the diffusive condition, and $\Phi$ is the inner-coupling matrix. Similar to (2), the CDDSs (11) in the sense of Filippov solutions are written as

$$
{\dot{x}}_{s}\left( t\right) = \mathbb{F}\left( {{x}_{s},{\gamma }_{s},{\zeta }_{s,\sigma }}\right) + \mathop{\sum }\limits_{{j = 1}}^{\ell }{u}_{sj}\Phi {x}_{j}\left( t\right) + {r}_{s}\left( t\right) , \tag{12}
$$

where $\mathbb{F}\left( {{x}_{s},{\gamma }_{s},{\zeta }_{s,\sigma }}\right) = C{x}_{s}\left( t\right) + A{\gamma }_{s}\left( t\right) + B{\zeta }_{s}\left( {t - \sigma \left( t\right) }\right)$ , ${\gamma }_{s}\left( t\right) \in \mathrm{F}\left\{ {h\left( {{x}_{s}\left( t\right) }\right) }\right\}$ and ${\zeta }_{s}\left( {t - \sigma \left( t\right) }\right) \in \mathrm{F}\left\{ {g\left( {{x}_{s}\left( {t - \sigma \left( t\right) }\right) }\right) }\right\}$ .
Definition 1: The CDDSs (11) are said to be globally exponentially synchronized with DDS (1) if, by designing suitable controllers ${r}_{s}\left( t\right) ,s \in {\mathbb{N}}_{1}^{\ell }$ , there exist $M \geq 0$ and $\alpha > 0$ such that $\parallel e\left( t\right) \parallel \leq M{e}^{-{\alpha t}}$ for $t \geq 0$ , where $e\left( t\right) = \operatorname{cl}{\left( {e}_{s}\left( t\right) \right) }_{\ell }$ and ${e}_{s}\left( t\right) = {x}_{s}\left( t\right) - z\left( t\right)$ .
§ III. SYNCHRONIZATION OF CDDSS
§ A. CONTROL DESIGN
According to [8], the control goal presented in Definition 1 is equivalent to the same issue for the Filippov systems (2) and (12). Hence, the subsequent study directly addresses the synchronization of (2) and (12). In this talk, the new event-triggered intermittent control is designed as

$$
{r}_{s}\left( t\right) = \left\{ \begin{array}{l} - {K}_{s}{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) - {\xi }_{s}\operatorname{sg}\left( {{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\right) , \\ \;t \in {\mathfrak{c}}_{\rho } \cap \left\lbrack {{t}_{k}^{s,{2\rho }},{t}_{k + 1}^{s,{2\rho }}}\right) , \\ 0,t \in {\mathfrak{u}}_{\rho }, \end{array}\right. \tag{13}
$$

where ${\xi }_{s} > 0$ and ${K}_{s} \in {\mathbb{R}}^{n \times n}$ are the control gains, ${t}_{k}^{s,{2\rho }}$ is the ${k}^{th}$ control signal update instant of subsystem $s$ , which is determined by the following ETM

$$
{t}_{k + 1}^{s,{2\rho }} = \inf \left\{ {t > {t}_{k}^{s,{2\rho }} : \begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix} - {\kappa }_{s}\begin{Vmatrix}{{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\end{Vmatrix} > 0}\right\} , \tag{14}
$$

where ${t}_{0}^{s,{2\rho }} = {t}_{2\rho },{\theta }_{s}\left( t\right) = {e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) - {e}_{s}\left( t\right)$ is the ME and ${\kappa }_{s} \in \left( {0,1}\right)$ is the threshold value.

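As a minimal illustration of how rule (14) operates, the sketch below checks the measurement error $\|\theta_s(t)\| = \|e_s(t_k) - e_s(t)\|$ against the threshold $\kappa_s \|e_s(t_k)\|$ at every simulation step and refreshes the held sample when it is exceeded. The scalar error dynamics used here are hypothetical, chosen only to drive the loop; they are not the system of this talk.

```python
import numpy as np

def simulate_etm(kappa_s=0.15, dt=1e-3, t_end=5.0):
    """Event-triggered sampling per rule (14): refresh the held sample
    e_s(t_k) whenever ||theta_s(t)|| exceeds kappa_s * ||e_s(t_k)||.
    The scalar dynamics below are a hypothetical stand-in."""
    t_grid = np.arange(0.0, t_end, dt)
    e = 1.0                  # current error e_s(t)
    e_held = e               # last transmitted sample e_s(t_k)
    trigger_times = [0.0]
    for t in t_grid[1:]:
        e += dt * (-e + 0.1 * np.sin(5.0 * t))  # hypothetical dynamics
        theta = e_held - e                       # measurement error (ME)
        if abs(theta) > kappa_s * abs(e_held):   # ETM (14) fires
            e_held = e
            trigger_times.append(t)
    return trigger_times, len(t_grid)

triggers, total = simulate_etm()
print(f"trigger rate: {len(triggers) / total:.2%}")
```

Note that the triggering test is linear in the ME, which is the point stressed in Remark 2: each check costs one norm evaluation and one comparison.
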
Remark 2: The ME ${\theta }_{s}\left( t\right)$ in (14) is linear and demands less computing power than nonlinear ones, such as those in [11], [12], [17], which will be further clarified in the numerical example. In addition, observe that the MEs in [11], [12], [17] are piecewise continuous, which introduces additional challenges in proving the exclusion of Zeno behavior. These challenges do not arise with a linear ME. Hence, event-triggered nonsmooth control with a linear ME is more practical.

Considering system (2) and CDDSs (12) with controller (13), the error system is obtained as

$$
{\dot{e}}_{s}\left( t\right) = {\mathrm{F}}_{s}\left( t\right) ,t \in {\mathfrak{c}}_{\rho }, \tag{15a}
$$

$$
{\dot{e}}_{s}\left( t\right) = {\widetilde{\mathrm{F}}}_{s}\left( t\right) ,t \in {\mathfrak{u}}_{\rho },\rho \in \mathbb{N}, \tag{15b}
$$

and its compact Kronecker product form is

$$
\dot{\mathbf{e}}\left( t\right) = \mathrm{F}\left( {\mathbf{e},\theta ,\mathbf{r},{\mathbf{c}}_{\sigma }}\right) ,t \in {\mathfrak{c}}_{\rho }, \tag{16a}
$$

$$
\dot{\mathbf{e}}\left( t\right) = \widetilde{\mathrm{F}}\left( {\mathbf{e},\theta ,\mathbf{r},{\mathbf{c}}_{\sigma }}\right) ,t \in {\mathfrak{u}}_{\rho },\rho \in \mathbb{N}, \tag{16b}
$$

where ${\mathrm{F}}_{s}\left( t\right) = {\widetilde{\mathrm{F}}}_{s}\left( t\right) - {\xi }_{s}\operatorname{sg}\left( {{e}_{s}\left( t\right) + {\theta }_{s}\left( t\right) }\right) - {K}_{s}\left( {{e}_{s}\left( t\right) + {\theta }_{s}\left( t\right) }\right)$ , ${\widetilde{\mathrm{F}}}_{s}\left( t\right) = C{e}_{s}\left( t\right) + A{\mathrm{r}}_{s}\left( t\right) + B{\mathrm{c}}_{s}\left( {t - \sigma \left( t\right) }\right) + \mathop{\sum }\limits_{{j = 1}}^{\ell }{u}_{sj}\Phi {e}_{j}\left( t\right)$ , $\mathrm{F}\left( {\mathbf{e},\theta ,\mathbf{r},{\mathbf{c}}_{\sigma }}\right) = \widetilde{\mathrm{F}}\left( {\mathbf{e},\theta ,\mathbf{r},{\mathbf{c}}_{\sigma }}\right) - \mathcal{K}\left( {\mathbf{e}\left( t\right) + \theta \left( t\right) }\right) - \xi \operatorname{sg}\left( {\mathbf{e}\left( t\right) + \theta \left( t\right) }\right)$ , $\widetilde{\mathrm{F}}\left( {\mathbf{e},\theta ,\mathbf{r},{\mathbf{c}}_{\sigma }}\right) = \left( {\mathcal{C} + \mathcal{U}}\right) \mathbf{e}\left( t\right) + \mathcal{A}\mathbf{r}\left( t\right) + \mathcal{B}\mathbf{c}\left( {t - \sigma \left( t\right) }\right)$ , $\theta \left( t\right) = \operatorname{cl}{\left( {\theta }_{s}\left( t\right) \right) }_{\ell }$ , $\mathbf{r}\left( t\right) = \operatorname{cl}{\left( {\mathrm{r}}_{s}\left( t\right) \right) }_{\ell }$ , ${\mathrm{r}}_{s}\left( t\right) = {\gamma }_{s}\left( t\right) - \gamma \left( t\right)$ , $\operatorname{sg}\left( {\mathbf{e}\left( t\right) + \theta \left( t\right) }\right) = \operatorname{cl}{\left( \operatorname{sg}\left( {e}_{s}\left( t\right) + {\theta }_{s}\left( t\right) \right) \right) }_{\ell }$ , $\mathbf{c}\left( {t - \sigma \left( t\right) }\right) = \operatorname{cl}{\left( {\mathrm{c}}_{s}\left( t - \sigma \left( t\right) \right) \right) }_{\ell }$ , ${\mathrm{c}}_{s}\left( {t - \sigma \left( t\right) }\right) = {\zeta }_{s}\left( {t - \sigma \left( t\right) }\right) - \zeta \left( {t - \sigma \left( t\right) }\right)$ , $\mathcal{X} = {I}_{\ell } \otimes X$ , $X \in \{ A,B,C\}$ , $\mathcal{U} = U \otimes \Phi$ , $\mathcal{K} = \operatorname{dg}{\left( {K}_{s}\right) }_{\ell }$ , and $\xi = \operatorname{dg}{\left( {\xi }_{s}{I}_{n}\right) }_{\ell }$ .

§ B. SYNCHRONIZATION ANALYSIS

The synchronization criteria are given below.

Theorem 1: Assume that $\left( {\mathbf{A}}_{1}\right)$ holds. For given $\phi ,{\kappa }_{s} \in \left( {0,1}\right)$ , ${a}_{1} > b = \begin{Vmatrix}{\mathcal{B}}_{D}^{g}\end{Vmatrix}$ , and ${a}_{1} + {a}_{2} > 0$ , if there exist matrices $\mathcal{K} = \operatorname{dg}{\left( {K}_{s}\right) }_{\ell } \in {\mathbb{R}}^{\ell n \times \ell n}$ and $\Psi = \operatorname{dg}{\left( {\Psi }_{s}\right) }_{\ell } \in {\mathbb{D}}_{ + }^{\ell n \times \ell n}$ such that $\eta = \frac{{a}_{1} - b}{{a}_{2} + b}v > 0,{\zeta }_{s} = \frac{1 + {\widetilde{\kappa }}_{s}}{1 - {\widetilde{\kappa }}_{s}}\eta ,{\xi }_{s} = \frac{1 + {\widetilde{\kappa }}_{s}}{1 - {\widetilde{\kappa }}_{s}}v + {\zeta }_{s},s \in {\mathbb{N}}_{1}^{\ell }$ ,

$$
{\Omega }_{1} = \left( \begin{matrix} \operatorname{He}\left\lbrack {{\mathbb{A}}_{1} + {\mathcal{A}}_{D}^{h}}\right\rbrack + \widetilde{\Psi } & - \mathcal{K} \\ * & - \Psi \end{matrix}\right) < 0, \tag{17}
$$

$$
{\Omega }_{2} = \operatorname{He}\left\lbrack {{\mathbb{A}}_{2} + {\mathcal{A}}_{D}^{h}}\right\rbrack < 0, \tag{18}
$$

then CDDS (11) with controller (13) is globally exponentially synchronized onto DDS (1), i.e., $\parallel e\left( t\right) \parallel \leq M{e}^{-\widetilde{c}t},\widetilde{c} = c -$ $\left( {{a}_{1} + {a}_{2}}\right) \phi > 0$ , where $c$ is the solution of ${a}_{1} - c - b{e}^{c\sigma } = 0,\phi$ is defined in Lemma 2, $M = \mathop{\sup }\limits_{{s \in \left\lbrack {-\sigma ,0}\right\rbrack }}\parallel \mathbf{e}\left( s\right) \parallel + \frac{v}{{a}_{2} + b},{\mathbb{A}}_{1} =$ $\mathcal{C} - \mathcal{K} + \mathcal{U} + {a}_{1}{I}_{\ell n},{\mathbb{A}}_{2} = \mathcal{C} + \mathcal{U} - {a}_{2}{I}_{\ell n},\widetilde{\Psi } = \operatorname{dg}{\left( {\widetilde{\kappa }}_{s}^{2}{\Psi }_{s}\right) }_{\ell },{\mathcal{A}}_{D}^{h} =$ ${I}_{\ell } \otimes {\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {a}_{ir}\right| {d}_{rj}^{h}\right) }_{n \times n},{\mathcal{B}}_{D}^{g} = {I}_{\ell } \otimes {\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {b}_{ir}\right| {d}_{rj}^{g}\right) }_{n \times n}$ , ${\mathbf{a}}_{h} = {\ell }^{\frac{1}{2}}\parallel \mathrm{{cl}}{\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {a}_{ir}\right| {\widehat{d}}_{r}^{h}\right) }_{n}\parallel ,{\mathbf{b}}_{g} = {\ell }^{\frac{1}{2}}\parallel \mathrm{{cl}}{\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {b}_{ir}\right| {\widehat{d}}_{r}^{g}\right) }_{n}\parallel ,$ ${\widetilde{\kappa }}_{s} = \frac{{\kappa }_{s}}{1 - {\kappa }_{s}}$ , and $v = {\mathbf{a}}_{h} + {\mathbf{b}}_{g}$ .

Proof: Design a Lyapunov function $V\left( t\right) = \parallel \mathbf{e}\left( t\right) \parallel$ .

For $t \in {\mathfrak{c}}_{\rho },\rho \in \mathbb{N}$ , it follows from (16a) that

$$
{\mathcal{D}}^{ + }\left\lbrack {V\left( t\right) }\right\rbrack = \frac{2{\mathbf{e}}^{\mathrm{T}}\left( t\right) \mathbf{F}\left( {\mathbf{e},\theta ,\mathbf{r},{\mathbf{c}}_{\sigma }}\right) }{{2V}\left( t\right) }. \tag{19}
$$

It follows from $\left( {\mathbf{A}}_{1}\right)$ and the Cauchy-Schwarz inequality that

$$
{\mathbf{e}}^{\top }\left( t\right) \mathcal{A}\mathbf{r}\left( t\right) \leq {\mathbf{e}}^{\top }\left( t\right) {\mathcal{A}}_{D}^{h}\mathbf{e}\left( t\right) + {\mathbf{a}}_{h}\parallel \mathbf{e}\left( t\right) \parallel , \tag{20}
$$

$$
{\mathbf{e}}^{\top }\left( t\right) \mathcal{B}\mathbf{c}\left( {t - \sigma \left( t\right) }\right) \leq \left( {b\parallel \mathbf{e}\left( {t - \sigma \left( t\right) }\right) \parallel + {\mathbf{b}}_{g}}\right) \parallel \mathbf{e}\left( t\right) \parallel . \tag{21}
$$

The ETM (14) means $\begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix} \leq {\widetilde{\kappa }}_{s}\begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix}$ and

$$
{\theta }^{\top }\left( t\right) {\Psi \theta }\left( t\right) \leq {\mathbf{e}}^{\top }\left( t\right) \widetilde{\Psi }\mathbf{e}\left( t\right) . \tag{22}
$$

Moreover, one has from $\begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix} \leq {\widetilde{\kappa }}_{s}\begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix}$ that

$$
{\mathbf{e}}^{\top }\left( t\right) \xi \operatorname{sg}\left( {\mathbf{e}\left( t\right) + \theta \left( t\right) }\right) \geq \mathop{\sum }\limits_{{s = 1}}^{\ell }\frac{{\xi }_{s}\begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix}\left( {\begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix} - \begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix}}\right) }{\begin{Vmatrix}{e}_{s}\left( t\right) + {\theta }_{s}\left( t\right) \end{Vmatrix}}
$$

$$
\geq \mathop{\sum }\limits_{{s = 1}}^{\ell }\frac{{\xi }_{s}\left( {1 - {\widetilde{\kappa }}_{s}}\right) {\begin{Vmatrix}{e}_{s}\left( t\right) \end{Vmatrix}}^{2}}{\left( {1 + {\widetilde{\kappa }}_{s}}\right) \begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix}}
$$

$$
\geq \left( {v + \eta }\right) \parallel \mathbf{e}\left( t\right) \parallel \text{ . } \tag{23}
$$

Substituting inequalities (20)-(23) into (19) yields

$$
{\mathcal{D}}^{ + }\left\lbrack {V\left( t\right) }\right\rbrack \leq \frac{{\varepsilon }^{\mathrm{T}}\left( t\right) {\Omega }_{1}\varepsilon \left( t\right) + {2bV}\left( t\right) V\left( {t - \sigma \left( t\right) }\right) }{{2V}\left( t\right) } - {a}_{1}V\left( t\right) - \eta , \tag{24}
$$

where $\varepsilon \left( t\right) = {\left( {e}^{\top }\left( t\right) ,{\theta }^{\top }\left( t\right) \right) }^{\top }$ . Then, condition (17) and inequality (24) ensure that

$$
{\mathcal{D}}^{ + }\left\lbrack {V\left( t\right) }\right\rbrack \leq - {a}_{1}V\left( t\right) + {bV}\left( {t - \sigma \left( t\right) }\right) - \eta . \tag{25}
$$

Similarly, for $t \in {\mathfrak{u}}_{\rho },\rho \in \mathbb{N}$ , one has from (16b) and (18) that

$$
{\mathcal{D}}^{ + }\left\lbrack {V\left( t\right) }\right\rbrack \leq {a}_{2}V\left( t\right) + {bV}\left( {t - \sigma \left( t\right) }\right) + v. \tag{26}
$$

Then, from Lemma 2 and inequalities (25)-(26), the result of Theorem 1 can be obtained.

Remark 3: Based on the novel nonsmooth event-triggered intermittent control (13) and Lemma 2, Theorem 1 presents complete synchronization criteria for CDDS (11). The result is quite general, since Theorem 1 allows the derivative of $\sigma \left( t\right)$ to be less than, equal to, or greater than 1, and even allows $\sigma \left( t\right)$ to be nondifferentiable. In particular, when the derivative of the delay $\sigma \left( t\right)$ exceeds 1, or the delay $\sigma \left( t\right)$ is nondifferentiable, the Lyapunov-Krasovskii functional methods show limitations in achieving complete synchronization under the nonsmooth control (13). The main reason is that many techniques for handling time delays in the Lyapunov-Krasovskii framework depend only on linear controls, which cannot achieve complete synchronization of CDDS (11). Hence, a new analysis framework for studying the complete synchronization of CDDSs under intermittent control is proposed.

Next, let us discuss the Zeno behavior of ETM (14).

Theorem 2: Under the assumptions and conditions of Theorem 1, the triggering instants generated by ETM (14) rule out the Zeno behavior.

Proof: For any $s \in {\mathbb{N}}_{1}^{\ell }$ and $t \in {\mathfrak{c}}_{\rho } \cap \left\lbrack {{t}_{k}^{s,{2\rho }},{t}_{k + 1}^{s,{2\rho }}}\right)$ , one has that

$$
{\mathcal{D}}^{ + }\left\lbrack \begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix}\right\rbrack \leq \begin{Vmatrix}{{\mathcal{D}}^{ + }\left\lbrack {{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) - {e}_{s}\left( t\right) }\right\rbrack }\end{Vmatrix} = \begin{Vmatrix}{{\dot{e}}_{s}\left( t\right) }\end{Vmatrix}. \tag{27}
$$

In view of Theorem 1, one concludes that there is a ${\mathrm{u}}_{s} > 0$ such that $\begin{Vmatrix}{{e}_{s}\left( t\right) }\end{Vmatrix} \leq {\mathrm{u}}_{s}$ . Then, one can obtain from the error system (15a) and $\left( {\mathbf{A}}_{1}\right)$ that

$$
\begin{Vmatrix}{{\dot{e}}_{s}\left( t\right) }\end{Vmatrix} \leq {\vartheta }_{s} + \begin{Vmatrix}{K}_{s}\end{Vmatrix}\begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix}, \tag{28}
$$

where ${\vartheta }_{s} = \left( {\begin{Vmatrix}{C - {K}_{s}}\end{Vmatrix} + \begin{Vmatrix}{A}_{D}^{h}\end{Vmatrix} + \begin{Vmatrix}{B}_{D}^{g}\end{Vmatrix}}\right) {\mathrm{u}}_{s} + v + {\xi }_{s} + 2\left| {u}_{ss}\right| \parallel \Phi \parallel \mathop{\sum }\limits_{{j = 1}}^{\ell }{\mathrm{u}}_{j}$ , ${A}_{D}^{h} = {\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {a}_{ir}\right| {d}_{rj}^{h}\right) }_{n \times n}$ , and ${B}_{D}^{g} = {\left( \mathop{\sum }\limits_{{r = 1}}^{n}\left| {b}_{ir}\right| {d}_{rj}^{g}\right) }_{n \times n}$ .

One has from inequalities (27)-(28) and $\begin{Vmatrix}{{\theta }_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\end{Vmatrix} = 0$ that $\begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix} \leq \frac{{\vartheta }_{s}}{\begin{Vmatrix}{K}_{s}\end{Vmatrix}}\left( {{e}^{\begin{Vmatrix}{K}_{s}\end{Vmatrix}\left( {t - {t}_{k}^{s,{2\rho }}}\right) } - 1}\right)$ , that is, $\left( {t - {t}_{k}^{s,{2\rho }}}\right) \geq \frac{1}{\begin{Vmatrix}{K}_{s}\end{Vmatrix}}\ln \left( {\frac{\begin{Vmatrix}{K}_{s}\end{Vmatrix}}{{\vartheta }_{s}}\begin{Vmatrix}{{\theta }_{s}\left( t\right) }\end{Vmatrix} + 1}\right)$ . Note that the next event will not be triggered until $\begin{Vmatrix}{{\theta }_{s}\left( {t}_{k + 1}^{s,{2\rho } - }\right) }\end{Vmatrix} = {\kappa }_{s}\begin{Vmatrix}{{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\end{Vmatrix}$ . Hence, the inequality above implies that $\left( {{t}_{k + 1}^{s,{2\rho }} - {t}_{k}^{s,{2\rho }}}\right) \geq \frac{\ln \left( {\frac{\begin{Vmatrix}{K}_{s}\end{Vmatrix}{\kappa }_{s}}{{\vartheta }_{s}}\begin{Vmatrix}{{e}_{s}\left( {t}_{k}^{s,{2\rho }}\right) }\end{Vmatrix} + 1}\right) }{\begin{Vmatrix}{K}_{s}\end{Vmatrix}} > 0.$

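The closing bound can be evaluated numerically. The sketch below computes the guaranteed dwell time $\Delta_k = \frac{1}{\|K_s\|}\ln\left(\frac{\|K_s\|\kappa_s}{\vartheta_s}\|e_s(t_k)\| + 1\right)$ for illustrative values of $\|K_s\|$, $\vartheta_s$, $\kappa_s$, and $\|e_s(t_k)\|$ (these numbers are not taken from the paper); the point is only that the bound is strictly positive whenever $e_s(t_k) \neq 0$, which is what excludes Zeno behavior.

```python
import math

def min_inter_event_time(K_norm, vartheta_s, kappa_s, e_norm):
    """Lower bound on t_{k+1} - t_k from the Zeno-exclusion argument:
    Delta >= ln(||K_s|| * kappa_s * ||e_s(t_k)|| / vartheta_s + 1) / ||K_s||."""
    return math.log(K_norm * kappa_s * e_norm / vartheta_s + 1.0) / K_norm

# Illustrative values only (not the gains of the example in Section IV).
delta = min_inter_event_time(K_norm=15.0, vartheta_s=40.0, kappa_s=0.15, e_norm=2.0)
print(f"guaranteed dwell time: {delta:.4f} s")
```
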
§ IV. NUMERICAL EXAMPLE

This section utilizes the Hopfield neural network (HNN) with discontinuous activation functions to verify the effectiveness of our results. The circuit diagram of the HNN is shown in Fig. 1(a) with detailed explanations provided in [23]. By applying Kirchhoff's laws, the HNN can be represented as a DDS (1). Next, the parameters of the HNN, in the form of those in DDS (1), are selected for numerical simulation.

Consider an HNN, i.e., the DDS (1), with $z\left( t\right) = {\left( {z}_{1}\left( t\right) ,{z}_{2}\left( t\right) \right) }^{\top }$ , $g\left( z\right) = {\left( {g}_{1}\left( {z}_{1}\right) ,{g}_{2}\left( {z}_{2}\right) \right) }^{\top },h\left( z\right) = {\left( {h}_{1}\left( {z}_{1}\right) ,{h}_{2}\left( {z}_{2}\right) \right) }^{\top },\sigma \left( t\right) =$ ${0.65} + {0.35}\left| {\sin \left( t\right) }\right| ,C = \mathrm{{dg}}\left( {-{1.5}, - 1}\right) ,i = 1,2$ ,

$$
A = \left( \begin{matrix} 2 & - {0.1} \\ - {4.9} & 3 \end{matrix}\right) ,{g}_{i}\left( {z}_{i}\right) = \left\{ \begin{array}{l} \frac{\left| {{z}_{i} + 1}\right| - \left| {{z}_{i} - 1}\right| }{2} + {0.04},{z}_{i} > 0, \\ \frac{\left| {{z}_{i} + 1}\right| - \left| {{z}_{i} - 1}\right| }{2} - {0.01},{z}_{i} < 0, \end{array}\right.
$$

$$
B = \left( \begin{matrix} - {1.5} & {0.1} \\ - {0.5} & - {0.5} \end{matrix}\right) ,{h}_{i}\left( {z}_{i}\right) = \left\{ \begin{array}{l} \tanh \left( {z}_{i}\right) + {0.01},{z}_{i} > 0, \\ \tanh \left( {z}_{i}\right) - {0.02},{z}_{i} < 0. \end{array}\right.
$$

One has that $\mathbf{a}\left( \cdot \right)$ , $\mathbf{a} \in \{ h,g\}$ , meet $\left( {\mathbf{A}}_{1}\right)$ with ${d}_{11}^{\mathbf{a}} = {d}_{22}^{\mathbf{a}} = 1$ , ${d}_{12}^{\mathbf{a}} = {d}_{21}^{\mathbf{a}} = 0,{\widehat{d}}_{1}^{h} = {\widehat{d}}_{2}^{h} = {0.03}$ , and ${\widehat{d}}_{1}^{g} = {\widehat{d}}_{2}^{g} = {0.05}$ .

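The bounds $\widehat{d}$ can be checked directly: for these activations the only discontinuity is the jump at $z_i = 0$, and its size equals the stated value. A quick numerical check (the helpers below simply transcribe $g_i$ and $h_i$ from the example):

```python
import math

def g(z):
    """Discontinuous activation g_i from the example."""
    base = (abs(z + 1.0) - abs(z - 1.0)) / 2.0
    return base + 0.04 if z > 0 else base - 0.01

def h(z):
    """Discontinuous activation h_i from the example."""
    return math.tanh(z) + 0.01 if z > 0 else math.tanh(z) - 0.02

eps = 1e-9
jump_g = g(eps) - g(-eps)   # one-sided limits straddling z = 0
jump_h = h(eps) - h(-eps)
print(f"jump of g at 0: {jump_g:.3f}, jump of h at 0: {jump_h:.3f}")
```

The computed jumps are 0.05 for $g$ and 0.03 for $h$, matching $\widehat{d}^{g}$ and $\widehat{d}^{h}$ above.
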
Fig. 1: (a) Circuit diagram of the HNN and coupling topology; (b) Trajectories of DDS (1) and CDDS (11) without controller.

Now, consider that the coupled system (11) is composed of three copies of DDS (1), where $\Phi = \operatorname{dg}\left( {2,1}\right)$ and $U = {\left( {u}_{ij}\right) }_{3 \times 3}$ is the Laplacian matrix of the digraph shown in Fig. 1(a). When the initial values of DDS (1) and CDDS (11) are randomly chosen on $\left\lbrack {-5,5}\right\rbrack ,\forall t \in \left\lbrack {-1,0}\right\rbrack$ , their trajectories are given in Fig. 1(b), from which one can see that synchronization cannot be realized without control.

By taking ${a}_{1} = {4.6},{a}_{2} = {3.88},{\kappa }_{1} = {0.12},{\kappa }_{2} = {0.17}$ , and ${\kappa }_{3} = {0.15}$ , one obtains $b = {1.603}$ , ${\xi }_{1} = {1.197}$ , ${\xi }_{2} = {1.378},{\xi }_{3} = {1.299}$ , and $\phi = {0.1002}$ . Solving conditions (17) and (18) yields ${K}_{1} = \left( \begin{matrix} {11.480} & {3.759} \\ {3.759} & {13.908} \end{matrix}\right) ,{K}_{2} =$ $\left( \begin{matrix} {11.690} & {3.815} \\ {3.815} & {14.139} \end{matrix}\right) ,{K}_{3} = \left( \begin{matrix} {11.744} & {3.854} \\ {3.854} & {14.236} \end{matrix}\right)$ . Hence, Theorem 1 holds, that is, CDDS (11) with controller (13) can be synchronized onto DDS (1). Fig. 2(a) shows the evolution of the error trajectories of (11) and (1) when the work intervals of controller (13) are $\lbrack 0,{0.5}) \cup \lbrack {0.5},{0.7}) \cup \lbrack {0.7},{1.6}) \cup \lbrack {1.6},{1.65}) \cup$ $\lbrack {1.65},{2.55}) \cup \lbrack {2.55},{2.68}) \cup \lbrack {2.68},{3.98}) \cup \lbrack {3.98},4)\cdots$ . In addition, the triggering instants and intervals of the three subsystems are displayed in Fig. 2(b). One finds from Fig. 1(b) and Fig. 2 that the designed event-triggered controller (13) is not only effective but also resource-efficient.

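These constants can be reproduced from the definitions in Theorem 1. The sketch below recomputes $b = \|\mathcal{B}_D^g\|$ (with $d^g$ the identity pattern above, this reduces to the spectral norm of $|B|$), $v = \mathbf{a}_h + \mathbf{b}_g$, $\eta$, and the gains $\xi_s$, and solves $a_1 - c - b e^{c\sigma} = 0$ by bisection, taking $\sigma = 1$ as the upper bound of $\sigma(t) = 0.65 + 0.35|\sin t|$.

```python
import numpy as np

A = np.array([[2.0, -0.1], [-4.9, 3.0]])
B = np.array([[-1.5, 0.1], [-0.5, -0.5]])
ell, d_hat_h, d_hat_g = 3, 0.03, 0.05
a1, a2 = 4.6, 3.88
kappas = [0.12, 0.17, 0.15]

# b = ||B_D^g||: with d^g = I this is the spectral norm of |B|.
b = np.linalg.norm(np.abs(B), 2)

# a_h, b_g and v = a_h + b_g as defined in Theorem 1.
a_h = np.sqrt(ell) * np.linalg.norm(np.abs(A).sum(axis=1) * d_hat_h)
b_g = np.sqrt(ell) * np.linalg.norm(np.abs(B).sum(axis=1) * d_hat_g)
v = a_h + b_g

eta = (a1 - b) / (a2 + b) * v
xis = []
for k in kappas:
    kt = k / (1.0 - k)                   # kappa_tilde
    ratio = (1.0 + kt) / (1.0 - kt)
    xis.append(ratio * v + ratio * eta)  # xi_s = ratio*v + zeta_s

# Solve a1 - c - b*exp(c*sigma) = 0 with sigma = 1 by bisection.
lo, hi = 0.0, a1
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if a1 - mid - b * np.exp(mid) > 0:
        lo = mid
    else:
        hi = mid
c = 0.5 * (lo + hi)
print(f"b = {b:.3f}, xi = {[round(x, 3) for x in xis]}, c = {c:.4f}")
```

This reproduces $b \approx 1.603$ and $\xi_1 \approx 1.197$, $\xi_2 \approx 1.378$, $\xi_3 \approx 1.299$ as stated above.
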
Fig. 2: (a) Error trajectories of DDS (1) and CDDS (11) with controller (13); (b) Triggering instants and intervals.

Comparative Experiment: To demonstrate novelty 3), a comparative experiment with the ETMs in [11], [12], [17] is conducted, where average running time (ART) and trigger rate (TR) are the measurement standards. The results are listed in Table I. In the simulation, the time-step size is 0.001, and a total of 12420 control signals are generated for $\left\lbrack {0,{15}}\right\rbrack$ . The experiment code runs on a computer with Windows 10, Intel Core i5-10400, 2.9 GHz, and 16 GB RAM. One observes from Table I that ETM (14) not only saves ${52.78}\%$ of the running time but also reduces the trigger frequency.

TABLE I: ${\mathbf{{TR}}}^{1}$ and ${\mathbf{{ART}}}^{2}$ of ETM (14) and [11],[12],[17].

| Methods | (14) |  |  | [11], [12], [17] |  |  |
| --- | --- | --- | --- | --- | --- | --- |
| Nodes | 1 | 2 | 3 | 1 | 2 | 3 |
| TR (%) | 27.17 | 36.43 | 31.84 | 39.51 | 38.93 | 38.38 |
| ART (sec) | 0.5214 |  |  | 0.7966 |  |  |

${}^{1}$ TR $= \frac{\text{ The number of trigger releases }}{\text{ Total signals }}$ ; ${}^{2}$ ART is the average obtained from 10 runs of the code.

§ V. CONCLUSION

This talk has considered the complete synchronization of CDDSs under event-triggered intermittent control. By developing a new stability inequality and a weighted-norm-based Lyapunov function, sufficient synchronization conditions have been derived. Note that the results of this talk impose no restrictions on the derivatives of the delay. Moreover, experiments have shown that the novel event-triggered control with a linear ME requires less computing power than existing methods.

papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/C84NGKXzwB/Initial_manuscript_md/Initial_manuscript.md
# MRBicopter: Modular Reconfigurable Transverse Tilt-rotor Bicopter System

${1}^{\text{st}}$ Qianyao Pan, School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, China (panqianyaoupc@163.com)

${2}^{\text{nd}}$ Xin Lu, School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, China (luxin_uestc@163.com)

${3}^{\text{rd}}$ Weijun Yuan, School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, China (ywj861087955@163.com)

${4}^{\text{th}}$ Fusheng Li*, School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, China (lifusheng@uestc.edu.cn)

Abstract-This paper introduces a modular UAV (MRBicopter) that can realize structural combination and reconfiguration. Each module contains a rotor tilting structure and an active docking mechanism. By separating and combining submodules, the UAV can match the requirements of different flight tasks in real time. First, we design the mechanical actuator that allows physically connected assemblies to fly collaboratively. Second, according to the different reconfigured structures, we propose two generalized control strategies to realize independent attitude control through the reassignment of rotor speed and tilt angle. The feasibility of the mechanical design and control method is verified by simulation and ground experiments under ambient wind interference.

Keywords—Reconfigurable and modular robots, bicopter, active docking mechanism, rotor tilting, wind interference, simulation.

## I. INTRODUCTION

In recent years, multi-rotor UAVs have received a lot of attention due to their simplicity, agility and versatility. Research in multi-rotor UAVs has extended to aerial maneuvering, collective behavior, multi-modal motion, and modular reconfigurable robots [1]-[5]. Among them, the advantages brought by the modular and reconfigurable capabilities of UAVs are increasingly evident. For example, in the context of disaster relief, modular reconfigurable robots can adapt to different task scenarios through structural reconfiguration, such as cooperating in the transportation of large items [16] and completing search-and-rescue tracking in complex environments [17].

Much work has been done to improve the stability and safety of modular reconfigurable UAVs. Reference [6] designed an airborne self-assembling flying robot, ModQuad, which is composed of flexible flight modules and can easily move in a three-dimensional environment. For airborne real-time separation, a new deformable multi-link aerial robot is proposed in [7]; it consists of link modules with a 1-DOF thrust-vector mechanism, and a transformation planning method is proposed that accounts for the 1-DOF thrust-vector angle to ensure the minimum force/moment during separation. Reference [8] proposes a magnetic connection mechanism, which uses a lightweight passive mechanism to dock and undock in mid-air. For modular UAV application scenarios, a self-assembling robot based on autonomous modules was proposed in [9], which can fly together and assemble into rectangular structures in the air. Reference [10] proposes a full-attitude geometric control algorithm for a synchronously tilting hexagonal rotorcraft to realize flight at arbitrary angles at the cost of efficiency. In [11], a tilt-rotor UAV was designed whose tilting mechanism can restrain power dissipation and has a wider inclination range. In [12], a structure connecting two helicopter modules is designed, which can fly along a wall at any angle. Reference [13] proposed splitting a quadcopter UAV into two twin-rotor UAVs in real time in the air and developed the modular quadcopter SplitFlyer. Reference [14] developed a combinable and extensible tilt-rotor UAV (CEDTR), which can match different task scenarios by changing the combination and number of sub-modules. Reference [15] developed an airborne detachable quadrotor UAV suitable for narrow gaps, which improves the environmental adaptability of reconfigurable UAVs.

In this paper, we design a transverse tilt-rotor bicopter that can be combined and reconfigured, called the modular reconfigurable bicopter (MRBicopter), which can not only realize cooperative flight in the single-module state but also achieve multi-module combined flight control. The main contributions of this paper are threefold:

1) Modular reconfigurable bicopter with rotor vector tilting structure and active combination docking mechanism is designed and modeled, which can realize structural reconfiguration to adapt to different task requirements.

2) The UAV dynamics model is built and the UAV control distribution and controller design are completed to realize the control of a single module and the full degree of freedom control of the assembly.

3) An environmental wind interference module is introduced in the simulation to bring the simulation results closer to reality.

The structure of this paper is as follows: Section II introduces the structure of MRBicopter. Section III describes the modeling of MRBicopter. Section IV presents the control allocation and controller design of MRBicopter. Section V demonstrates the results of simulation and tests. The conclusions are presented in Section VI.

---

*Corresponding author.

---


|
| 86 |
+
|
| 87 |
+
Fig.1: MRBicopter mechanical structure. (a) rotor vector tilting structure, (b) electromagnet docking mechanism, (c) submodule structure.

## II. DESIGN

## A. Rotor vector tilting structure

The rotor propeller axis of a traditional UAV is fixed, so the direction of the lift force cannot be changed. Here, we adopt a rotor vector tilting structure (Fig.1(a)). Each rotor can tilt around the arm shaft and is fitted with its own servo to control the tilt angle. This structure increases the number of control inputs of the UAV assembly and enables full-degree-of-freedom control of the MRBicopter assembly.

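To see why the tilt angles add controllable degrees of freedom, consider a simplified rigid-body sketch (our own illustration, not the paper's model): two rotors at $\pm l$ on the body $y$-axis, each tilting about the arm ($y$) axis by $\alpha_i$ with thrust $T_i$. Collective tilt shifts thrust into the body $x$-direction, while differential tilt produces a yaw moment:

```python
import numpy as np

def force_moment(T1, T2, a1, a2, l=0.2):
    """Net body-frame force and moment for a transverse bicopter whose
    rotors tilt about the y (arm) axis by a1, a2. Simplified sketch:
    rotor drag torques and mounting offsets are ignored."""
    r1, r2 = np.array([0.0, l, 0.0]), np.array([0.0, -l, 0.0])
    F1 = T1 * np.array([np.sin(a1), 0.0, np.cos(a1)])
    F2 = T2 * np.array([np.sin(a2), 0.0, np.cos(a2)])
    return F1 + F2, np.cross(r1, F1) + np.cross(r2, F2)

# Differential tilt: equal thrusts, opposite tilt angles -> pure yaw moment.
F, M = force_moment(5.0, 5.0, 0.2, -0.2)
```

With equal thrusts and opposite tilts the lateral forces cancel and only a yaw moment remains, which is exactly the extra control channel the tilting servos provide.
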
## B. Electromagnet docking mechanism

For the docking device between modular reconfigurable MRBicopter, permanent magnets(NdFe) are used in traditional reconfigurable UAVs. This scheme has a slow control response during separation and is easy to cause instability. Therefore, we designed a multi-locking electromagnet combination docking mechanism(Fig.1(b)).It uses a circular electromagnet as the main actuator, and realizes the on-off of the electromagnet by using a program to control the relay. A total of three locking nodes are included, each node can provide $5\mathrm{{KG}}$ of locking suction.
## C. Submodule structure

MRBicopter consists of two cross-mounted bicopter modules (Fig.1(c)). A single module can not only fly autonomously and cooperatively, but also complete assembly reconstruction through magnetic attraction.
## III. DYNAMICS

## A. Establishment of the frames

In this section, four frames are introduced to define the flight attitude of MRBicopter (Fig.2). The frame system is as follows.
1) World frame ${W}_{E}$. The world frame is a fixed coordinate system.

2) Assembly frame ${B}_{z}$. The origin of ${B}_{z}$ is located at the center of mass of the assembly. Its position relative to the world frame is expressed as ${P}_{W} = {\left\lbrack \begin{array}{lll} {x}_{w} & {y}_{w} & {z}_{w} \end{array}\right\rbrack }^{T}$; the velocity is expressed as ${V}_{W} = {\left\lbrack \begin{array}{lll} {V}_{WX} & {V}_{WY} & {V}_{WZ} \end{array}\right\rbrack }^{T}$; the angular velocity of the assembly is expressed as $\Omega = {\left\lbrack \begin{array}{lll} {\omega }_{x} & {\omega }_{y} & {\omega }_{z} \end{array}\right\rbrack }^{T}$; the attitude angle is expressed as $\Theta = {\left\lbrack \begin{array}{lll} \phi & \theta & \psi \end{array}\right\rbrack }^{T}$, where $\phi$ is the roll angle, $\theta$ the pitch angle, and $\psi$ the yaw angle.


Fig.2: MRBicopter frame system settings.
3) Submodule frame ${B}_{i}$. The origin of the submodule frame is located at the centroid of the submodule, with axes $\left\lbrack \begin{array}{lll} {X}_{bi} & {Y}_{bi} & {Z}_{bi} \end{array}\right\rbrack$. The Euler angles in the submodule frame ${B}_{i}$ are expressed as ${\Theta }_{i} = {\left\lbrack \begin{array}{lll} {\phi }_{i} & {\theta }_{i} & {\psi }_{i} \end{array}\right\rbrack }^{T}$.

4) Rotor frame ${P}_{ij}$. The origin of the rotor frame is located at the centroid of the rotor motor; the $z$ axis points in the rotor lift direction, and the $x$ axis points toward the body centroid. The tilt angle of the rotor is denoted ${\alpha }_{ij}$.
## B. Derivation of Dynamics and Kinematic Model

In this section, we derive the attitude dynamics and kinematics equations of MRBicopter, which are used for the control allocation and controller design in Section IV. The $i$-th submodule in the assembly has two rotors distributed along a common axis. The rotor speed is expressed as ${\varpi }_{ij}$. Therefore, the lift force and rotation torque generated by the $j$-th rotor in the module can be written as:
$$
{f}_{ij} = {K}_{T}{\varpi }_{ij}^{2} \tag{1}
$$

$$
{\tau }_{ij} = {K}_{Q}{\varpi }_{ij}^{2} \tag{2}
$$

where ${K}_{T}$ and ${K}_{Q}$ are the rotor thrust and torque coefficients.
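As a minimal sketch of Eqs. (1)-(2), with illustrative placeholder values for $K_T$ and $K_Q$ (not the identified constants of the real motors):

```python
# Rotor lift (Eq. 1) and reaction torque (Eq. 2) from rotor speed.
# K_T and K_Q below are illustrative placeholders, not measured constants.
K_T = 1.5e-5  # thrust coefficient [N/(rad/s)^2]
K_Q = 2.0e-7  # torque coefficient [N*m/(rad/s)^2]

def rotor_forces(omega):
    """Return (lift f_ij, reaction torque tau_ij) for rotor speed omega [rad/s]."""
    return K_T * omega ** 2, K_Q * omega ** 2

f, tau = rotor_forces(800.0)  # one rotor at 800 rad/s
```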
In the assembly frame ${B}_{z}$, MRBicopter's lift force ${F}_{B}$ is as follows:

$$
{F}_{ij}^{B} = {f}_{ij}{}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) E
$$

$$
{F}_{B} = \mathop{\sum }\limits_{{ij}}{F}_{ij}^{B} \tag{3}
$$

where $E = {\left\lbrack \begin{array}{lll} 0 & 0 & 1 \end{array}\right\rbrack }^{T}$ is the unit vector and ${}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) \in {SO}\left( 3\right)$ represents the rotation matrix from the rotor frame ${P}_{ij}$ to the assembly frame ${B}_{z}$, which satisfies:
$$
{}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) = {}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {B}_{i}\right\rbrack }{}^{\left\{ {B}_{i}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) \tag{4}
$$

where ${}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {B}_{i}\right\rbrack } \in {SO}\left( 3\right)$ represents the rotation matrix from the submodule frame ${B}_{i}$ to the assembly frame ${B}_{z}$, and ${}^{\left\{ {B}_{i}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) \in {SO}\left( 3\right)$ represents the rotation matrix from the rotor frame ${P}_{ij}$ to the submodule frame ${B}_{i}$, which satisfies:
$$
\left\{ \begin{array}{l} {}^{\left\{ {B}_{i}\right\} }{R}_{\left\lbrack {P}_{i1}\right\rbrack }\left( {\alpha }_{i1}\right) = R\left( {{\sigma }_{1},{\alpha }_{i1}}\right) \\ {}^{\left\{ {B}_{i}\right\} }{R}_{\left\lbrack {P}_{i2}\right\rbrack }\left( {\alpha }_{i2}\right) = R\left( {{\sigma }_{2},{\alpha }_{i2}}\right) \end{array}\right. \tag{5}
$$
$$
R\left( {\sigma ,\alpha }\right) = \left\lbrack \begin{matrix} \cos \left( \sigma \right) & - \sin \left( \sigma \right) \cos \left( \alpha \right) & \sin \left( \alpha \right) \sin \left( \sigma \right) \\ \sin \left( \sigma \right) & \cos \left( \sigma \right) \cos \left( \alpha \right) & - \sin \left( \alpha \right) \cos \left( \sigma \right) \\ 0 & \sin \left( \alpha \right) & \cos \left( \alpha \right) \end{matrix}\right\rbrack \tag{6}
$$

where $\sigma$ is the angle between the arm axis and the X-axis. From the structure of the transverse bicopter, ${\sigma }_{1} = - \pi /2$ and ${\sigma }_{2} = \pi /2$.
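The rotation of Eq. (6) can be checked numerically; a minimal sketch (the values of $\sigma_1, \sigma_2$ are taken from the text, everything else is generic):

```python
import math

def R(sigma, alpha):
    """Rotation matrix of Eq. (6): rotor frame P_ij -> submodule frame B_i."""
    cs, ss = math.cos(sigma), math.sin(sigma)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[cs, -ss * ca,  sa * ss],
            [ss,  cs * ca, -sa * cs],
            [0.0,      sa,       ca]]

# Arm-axis angles of the transverse bicopter, as given in the text.
SIGMA_1, SIGMA_2 = -math.pi / 2, math.pi / 2

# Rows of a rotation matrix are orthonormal, confirming R is in SO(3).
M = R(SIGMA_1, 0.4)
row_norms = [sum(v * v for v in row) for row in M]
```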
In the assembly frame ${B}_{z}$, the rotor torque ${\tau }_{a}$ of MRBicopter is:

$$
{\tau }_{a} = \mathop{\sum }\limits_{{ij}}{}^{\left\{ {B}_{z}\right\} }{p}_{\left\lbrack {P}_{ij}\right\rbrack } \times {F}_{ij}^{B} \tag{7}
$$
Due to air resistance, the yaw moment $Q$ generated by the rotor propellers is:

$$
{Q}_{ij} = {\left( -1\right) }^{j - 1}{C}_{t}{\varpi }_{ij}^{2}E
$$

$$
Q = \mathop{\sum }\limits_{{ij}}{}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) {Q}_{ij} \tag{8}
$$
Finally, the MRBicopter's body torque $\tau$ can be written as:

$$
\tau = {\tau }_{a} + Q \tag{9}
$$
The dynamics equations of MRBicopter are established using the Newton-Euler formulation:

$$
\tau = {J}_{S}\dot{\Omega } + \Omega \times {J}_{S}\Omega
$$

$$
\mathop{\sum }\limits_{i}{m}_{i}{\dot{V}}_{W} = {}^{\left\{ {W}_{E}\right\} }{R}_{\left\lbrack {B}_{z}\right\rbrack }{F}_{B} - \mathop{\sum }\limits_{i}{m}_{i}{gE} \tag{10}
$$


Fig.3: MRBicopter submodule (mode 1) and assembly (mode 2).

where ${m}_{i}$ is the mass of a submodule and ${J}_{S}$ is the total inertia matrix of the assembly. On this basis, a kinematic model is established, in which the position kinematic equation is expressed as:
$$
{\dot{P}}_{W} = {V}_{W} \tag{11}
$$

The attitude kinematics equation is expressed as:

$$
\dot{\Theta } = {W}_{R} \cdot \Omega \tag{12}
$$

where ${W}_{R}$ is the matrix mapping the body angular velocity to the Euler angle rates.
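Eq. (12) can be integrated numerically once a form of $W_R$ is chosen; the paper does not spell it out, so the sketch below assumes the common ZYX-Euler rate matrix:

```python
import math

def W_R(phi, theta):
    """Assumed ZYX-Euler form of W_R: body rates -> Euler angle rates."""
    t, c = math.tan(theta), math.cos(theta)
    return [[1.0, math.sin(phi) * t,  math.cos(phi) * t],
            [0.0, math.cos(phi),     -math.sin(phi)],
            [0.0, math.sin(phi) / c,  math.cos(phi) / c]]

def euler_step(Theta, Omega, dt):
    """One explicit-Euler integration step of Eq. (12): Theta_dot = W_R * Omega."""
    W = W_R(Theta[0], Theta[1])
    return [Theta[i] + dt * sum(W[i][k] * Omega[k] for k in range(3))
            for i in range(3)]
```

At level attitude $W_R$ reduces to the identity, so body rates integrate directly into Euler angles.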
## IV. CONTROL

Section IV presents the controller design of the MRBicopter single module and assembly (Fig.3), the control distribution of the two flight modes, and the feedforward angle design of the assembly [18].
## A. Controller design

Fig. 4 shows the structural block diagram of the MRBicopter controller. The architecture is based on a cascaded double closed-loop PID control law, with the position controller as the outer loop and the attitude controller as the inner loop. As shown in Fig.4(a), the MRBicopter submodule (mode 1) in-flight control system is underactuated, so we adopt a controller architecture similar to that of a traditional bicopter [19]. The MRBicopter assembly (mode 2) control system is overactuated and can hover at any pitch angle (Fig.4(b)).

The flight controller is divided into four channels and outputs four control quantities ${T}_{1},{T}_{2},{T}_{3},{T}_{4}$, which control the linear displacement and angular motion of the UAV dynamics model and also decouple them. The controller takes the expected position ${P}_{\text{des}} = {\left\lbrack \begin{array}{lll} X & Y & Z \end{array}\right\rbrack }^{T}$ and the expected yaw angle $\psi$ as target control inputs. ${K}_{P}^{P},{K}_{I}^{P},{K}_{D}^{P}$ are the proportional, integral, and derivative coefficients of the position loop, respectively. The position controller satisfies:
$$
\ddot{X} = {K}_{P}^{P}\left( {P - {P}_{des}}\right) + {K}_{I}^{P}{\int }_{0}^{t}\left( {P - {P}_{des}}\right) d\tau + {K}_{D}^{P}\frac{d\left( {\dot{P} - {\dot{P}}_{des}}\right) }{dt} \tag{13}
$$

The attitude controller takes the expected attitude angle ${\Theta }_{des} = {\left\lbrack \begin{array}{lll} \phi & \theta & \psi \end{array}\right\rbrack }^{T}$ as input and the control quantity $T = {\left\lbrack \begin{array}{lll} {T}_{2} & {T}_{3} & {T}_{4} \end{array}\right\rbrack }^{T}$ as output. ${K}_{P}^{\Theta },{K}_{I}^{\Theta },{K}_{D}^{\Theta }$ are the proportional, integral, and derivative coefficients of the attitude loop, respectively, satisfying:

$$
T = {K}_{P}^{\Theta }\left( {\Theta - {\Theta }_{des}}\right) + {K}_{I}^{\Theta }{\int }_{0}^{t}\left( {\Theta - {\Theta }_{des}}\right) d\tau + {K}_{D}^{\Theta }\frac{d\left( {\dot{\Theta } - {\dot{\Theta }}_{des}}\right) }{dt} \tag{14}
$$
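A minimal discrete-time sketch of the PID law in Eqs. (13)-(14); the gains below are illustrative placeholders, not the tuned values used on the vehicle:

```python
class PID:
    """One discrete PID channel of the cascade controller (Eqs. 13-14)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def update(self, err, dt):
        # Rectangular integration and backward-difference derivative.
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Outer position loop feeds the inner attitude loop (cascade structure).
pos_loop = PID(2.0, 0.1, 0.5)   # K_P^P, K_I^P, K_D^P (illustrative)
att_loop = PID(8.0, 0.2, 1.0)   # K_P^Theta, K_I^Theta, K_D^Theta (illustrative)
```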


Fig.4: Structural block diagram of the MRBicopter flight controller.
## B. Tilt angle feedforward initialization calculation

The feedforward initial value calculation solves for the approximate rotor tilt angle when the MRBicopter assembly hovers at any pitch angle, which effectively reduces the overshoot and response time of the position control. Here, it is assumed that all rotor propellers produce the same lift when the assembly hovers at any pitch angle; the hover angle is $\theta$ and the initial feedforward value of the tilt angle is ${\alpha }_{\text{offset}}$. As shown in Fig.5, the following force balance equations can be established:
$$
\mathop{\sum }\limits_{i}{m}_{i}g\cos \theta = \mathop{\sum }\limits_{{ij}}{F}_{ij}^{B}\cos \left( {\alpha }_{\text{offset}}^{ij}\right)
$$

$$
\mathop{\sum }\limits_{i}{m}_{i}g\sin \theta = \mathop{\sum }\limits_{{ij}}{F}_{ij}^{B}\sin \left( {\alpha }_{\text{offset}}^{ij}\right) \tag{15}
$$
Since the resultant force in the $x$ and $y$ directions is zero when the MRBicopter hovers, the initial feedforward value of the tilt angle can be obtained as:

$$
{\alpha }_{\text{offset}}^{ij} = \theta \tag{16}
$$
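The balance of Eq. (15) under the equal-lift assumption can be verified numerically; a small sketch with an illustrative total weight (the 20 N value is hypothetical):

```python
import math

def tilt_feedforward(theta):
    """Feedforward tilt angle for hover at pitch angle theta (Eq. 16)."""
    return theta

# Check Eq. (15) for four rotors of equal lift; weight value is illustrative.
weight = 20.0                      # total weight sum(m_i) * g [N]
theta = math.radians(25.0)
alpha = tilt_feedforward(theta)
lift = weight / 4.0                # equal lift per rotor
residual_a = 4 * lift * math.cos(alpha) - weight * math.cos(theta)
residual_b = 4 * lift * math.sin(alpha) - weight * math.sin(theta)
```

Both residuals vanish, confirming that with equal rotor lifts the tilt angle equal to the hover pitch angle satisfies the force balance.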
## C. Control distribution



Fig.5: MRBicopter hover force analysis diagram with pitch angle.

The control distribution module assigns the throttle speed and the tilt angle of each rotor in real time according to the mode and flight condition of the UAV, so as to control the attitude of the UAV.
## 1) Submodule control distribution

The MRBicopter submodule can be regarded as a transverse bicopter, with the rotor tilt axes lying on the same straight line and the rotors placed symmetrically. Literature [20] proposed a control method for a transverse twin-rotor UAV, so its control distribution can be transferred to the MRBicopter submodule, and the speeds of the left and right rotors can be expressed as:
$$
{\varpi }_{L} = \sqrt{\frac{{T}_{1}}{2{K}_{T}} + {T}_{2}} \tag{17}
$$

$$
{\varpi }_{R} = \sqrt{\frac{{T}_{1}}{2{K}_{T}} - {T}_{2}}
$$
The tilt angles of the left and right rotors can be expressed as:

$$
{\alpha }_{L} = {C}_{1}{T}_{3} + {C}_{2}{T}_{4} \tag{18}
$$

$$
{\alpha }_{R} = {C}_{1}{T}_{3} - {C}_{2}{T}_{4}
$$

where ${C}_{1}$ and ${C}_{2}$ are constants.
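A minimal mixer sketch of Eqs. (17)-(18); $K_T$, $C_1$, and $C_2$ below are illustrative placeholders:

```python
import math

K_T = 1.5e-5        # illustrative thrust coefficient
C1, C2 = 0.8, 0.6   # illustrative mixing constants

def submodule_mix(T1, T2, T3, T4):
    """Left/right rotor speeds (Eq. 17) and tilt angles (Eq. 18)."""
    base = T1 / (2.0 * K_T)
    w_L = math.sqrt(base + T2)   # differential speed steers one axis
    w_R = math.sqrt(base - T2)
    a_L = C1 * T3 + C2 * T4      # common/differential tilt steers the others
    a_R = C1 * T3 - C2 * T4
    return w_L, w_R, a_L, a_R
```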
## 2) Assembly control distribution

Taking the MRBicopter assembly mass center ${B}_{z}$ as the center, the ${X}_{W},{Y}_{W}$ axes divide the rotors into four groups (Fig.6): top-left rotors ${P}_{k}\left( {k = 1,2,\cdots , n}\right)$; lower-left rotors ${P}_{k}\left( {k = n + 1,\cdots ,{2n}}\right)$; top-right rotors ${P}_{k}\left( {k = {2n} + 1,\cdots ,{3n}}\right)$; lower-right rotors ${P}_{k}\left( {k = {3n} + 1,\cdots ,{4n}}\right)$.
Literature [20] proposes a mechanism connecting two twin-rotor modules, in which the four propellers are combined two by two into groups, similar to the MRBicopter assembly structure. Therefore, the control distribution can be extended here. The rotor speed control distribution of the four groups can be written as follows:
$$
{\varpi }_{i}^{1} = \sqrt{\frac{{F}_{z}}{{4n}{K}_{T}} + {T}_{3} + {T}_{2}}\;\left( {i = 1,\cdots , n}\right)
$$

$$
{\varpi }_{i}^{2} = \sqrt{\frac{{F}_{z}}{{4n}{K}_{T}} - {T}_{3} + {T}_{2}}\;\left( {i = n + 1,\cdots ,{2n}}\right)
$$

$$
{\varpi }_{i}^{3} = \sqrt{\frac{{F}_{z}}{{4n}{K}_{T}} + {T}_{3} - {T}_{2}}\;\left( {i = {2n} + 1,\cdots ,{3n}}\right)
$$

$$
{\varpi }_{i}^{4} = \sqrt{\frac{{F}_{z}}{{4n}{K}_{T}} - {T}_{3} - {T}_{2}}\;\left( {i = {3n} + 1,\cdots ,{4n}}\right) \tag{19}
$$


Fig.6: Mechanism model of MRBicopter.

The MRBicopter assembly uses the ${X}_{W}$ axis to divide the rotors into left and right groups, whose tilt angles use different control distributions:
$$
{\alpha }_{i}^{1} = {\alpha }_{\text{offset}} + {C}_{1}{T}_{4} + {C}_{2}\frac{{F}_{Y}}{4n}\;\left( {i = 1,2,\cdots ,{2n}}\right) \tag{20}
$$

$$
{\alpha }_{i}^{2} = {\alpha }_{\text{offset}} - {C}_{1}{T}_{4} + {C}_{2}\frac{{F}_{Y}}{4n}\;\left( {i = {2n} + 1,\cdots ,{4n}}\right)
$$

where ${C}_{1}$ and ${C}_{2}$ are constants.
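The assembly mixer of Eqs. (19)-(20) can be sketched the same way ($n = 1$ for the two-module assembly; coefficients are again illustrative placeholders):

```python
import math

K_T = 1.5e-5        # illustrative thrust coefficient
C1, C2 = 0.8, 0.6   # illustrative mixing constants

def assembly_mix(Fz, FY, T2, T3, T4, alpha_offset, n=1):
    """Group rotor speeds (Eq. 19) and left/right tilt angles (Eq. 20)."""
    base = Fz / (4.0 * n * K_T)
    # Sign pattern of (T3, T2) for the four rotor groups of Eq. (19).
    signs = ((+1, +1), (-1, +1), (+1, -1), (-1, -1))
    speeds = [math.sqrt(base + s3 * T3 + s2 * T2) for s3, s2 in signs]
    a_left = alpha_offset + C1 * T4 + C2 * FY / (4.0 * n)
    a_right = alpha_offset - C1 * T4 + C2 * FY / (4.0 * n)
    return speeds, (a_left, a_right)
```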
## V. SIMULATION AND EXPERIMENT

Section V presents the simulation and ground tests of the MRBicopter submodule and assembly. To make the simulation more realistic, we introduce an ambient wind interference model, which verifies the robustness of the MRBicopter controller against ambient wind disturbance.
## A. Environmental wind model

To approximate the mathematical model of the atmospheric wind field as closely as possible, we divide the environmental wind into four components: constant wind, gust, gradient wind, and random wind.

Constant wind: the wind speed of the constant wind is a constant value $\delta$ and does not change. Its mathematical model is expressed as:
$$
{V}_{f1} = \delta \tag{21}
$$
Gust: a gust is a periodic change of wind speed in atmospheric motion, characterized by a sudden increase of wind speed at a certain moment that decays after a period of time. Its mathematical model can be expressed as a piecewise function:
$$
{V}_{f2} = \left\{ \begin{matrix} 0 & \left( {x < 0}\right) \\ \frac{{V}_{m}}{2}\left( {1 - \cos \left( \frac{\pi x}{{d}_{m}}\right) }\right) & \left( {0 \leq x \leq {d}_{m}}\right) \\ {V}_{m} & \left( {x > {d}_{m}}\right) \end{matrix}\right. \tag{22}
$$

where ${V}_{m}$ is the gust amplitude, ${d}_{m}$ is the gust length, and $x$ is the distance travelled by the gust.
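The 1-cosine gust of Eq. (22) as a direct sketch:

```python
import math

def gust(x, V_m, d_m):
    """Gust wind speed of Eq. (22) as a function of travelled distance x."""
    if x < 0.0:
        return 0.0
    if x <= d_m:
        # Smooth ramp from 0 up to the gust amplitude V_m.
        return 0.5 * V_m * (1.0 - math.cos(math.pi * x / d_m))
    return V_m
```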
Gradient wind: gradient wind refers to ambient wind whose speed increases from zero to a certain value over time. Its mathematical model is:

$$
{V}_{f3} = \frac{t - {t}_{1}}{{t}_{2} - {t}_{1}}{V}_{f\text{-}\max } \tag{23}
$$
where ${V}_{f\text{-}\max }$ is the peak of the gradient wind speed, and ${t}_{1},{t}_{2}$ are the start and end times of the gradient wind, respectively.

Random wind: random wind refers to the air disturbance generated by random changes in the atmosphere. Here, we use a random number generator to build the mathematical model of the random wind:
$$
{V}_{f4} = {V}_{{f4}\_\max }\,\Pi \left( {-{10},{10}}\right) \cos \left( {\alpha t + \beta }\right) \tag{24}
$$

where ${V}_{{f4}\_\max }$ is the theoretical peak of the random wind; $\Pi \left( {-{10},{10}}\right)$ is a number produced by a random number generator in the range $-10 \sim 10$; $\alpha$ is the average frequency of the random wind speed fluctuation, ranging from 0.5 to 2 rad/s; and $\beta$ is the phase offset of the random wind speed, ranging from $0.1\pi$ to $2\pi$ rad.
Therefore, the total wind speed ${V}_{F}$ of the ambient wind field can be obtained as:

$$
{V}_{F} = {V}_{f1} + {V}_{f2} + {V}_{f3} + {V}_{f4} \tag{25}
$$
To simplify the calculation, the wind direction is taken opposite to the MRBicopter's flight direction, so the air resistance generated by the ambient wind field can be calculated as:

$$
{F}_{w} = \frac{1}{2}{C\rho S}{\left( {V}_{F} + {v}_{UAV}\right) }^{2} \tag{26}
$$

where $C$ is the air resistance coefficient, taken as $0.31$; $\rho$ is the air density, $1.29\;\mathrm{kg}/{\mathrm{m}}^{3}$; $S$ is the windward area of the MRBicopter, $31\;{\mathrm{cm}}^{2}$; and ${v}_{UAV}$ is the flight speed.
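Putting Eqs. (25)-(26) together with the stated constants ($C = 0.31$, $\rho = 1.29\;\mathrm{kg/m^3}$, $S = 31\;\mathrm{cm^2}$), a minimal sketch:

```python
def total_wind(v_const, v_gust, v_grad, v_rand):
    """Total ambient wind speed, Eq. (25)."""
    return v_const + v_gust + v_grad + v_rand

def drag_force(V_F, v_uav, C=0.31, rho=1.29, S=31e-4):
    """Aerodynamic drag of Eq. (26); S given as 31 cm^2 = 31e-4 m^2."""
    return 0.5 * C * rho * S * (V_F + v_uav) ** 2

# Drag at the simulated average wind speed of 10.5 m/s while hovering (v_uav = 0).
F_w = drag_force(total_wind(10.5, 0.0, 0.0, 0.0), 0.0)
```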


Fig.7: Simulation of MRBicopter hover under ambient wind interference. (a) MRBicopter single module; (b) MRBicopter assembly.
## B. Simulation

Fig. 7 shows the simulated three-axis attitude angles of the two MRBicopter structures in hovering state under ambient wind interference. The blue line is the roll angle tracking curve, the red line the pitch angle tracking curve, and the green line the yaw angle tracking curve.
## 1) Submodule

This experiment is a hover simulation of the MRBicopter submodule under ambient wind interference. The average ambient wind speed is set to $10.5\;\mathrm{m}/\mathrm{s}$. The simulation results are shown in Fig.7(a): the hover attitude angle oscillation of a single module does not exceed 0.05 rad, which meets the design requirements.
## 2) Assembly

This experiment is a hover simulation of the MRBicopter assembly under ambient wind interference. The simulation results are shown in Fig.7(b): an instantaneous oscillation of more than 0.4 rad occurs in the pitch and roll angles of the assembly at $0.3\;\mathrm{s}$; the adjustment is completed within $0.2\;\mathrm{s}$, and the subsequent oscillation amplitude does not exceed 0.1 rad. This shows that the assembly controller strongly suppresses the environmental wind interference.
## C. Ground experiment

To ensure the safety of the test, the experiment was carried out on an indoor aircraft test platform, with a 1/6 HP650 pneumatic industrial fan as the ambient wind source. The MRBicopter flight control module uses an STM32F427VIT6 as the main processor; the power supply is a LiPo battery (4S1P: 14.8 V, 3000 mAh); the docking module uses ZigBee serial communication to receive the control signal and convert it into a PWM signal that switches the relay on and off. A 2.4 GHz 14-channel communication module is used for signal transmission and reception. The experimental results are shown in Fig.8.


Fig.8: MRBicopter ground experiment under ambient wind interference.
## 1) Submodule experiment

Two MRBicopter submodules were built, and one of them was selected for the experiment. The experimental results are shown in Fig.8(a): under wind interference, the average oscillation amplitude of the pitch and roll angles of the submodule is $\pm {4.98}^{\circ}$ and the average oscillation amplitude of the yaw angle is $\pm {7.91}^{\circ}$, which meets the stability requirements.
## 2) Assembly experiment

The MRBicopter assembly is composed of two submodules. The experimental results are shown in Fig.8(b): under wind interference, the average oscillation amplitude of the pitch and roll angles of the assembly is $\pm {5.12}^{\circ}$, and the average oscillation amplitude of the yaw angle is $\pm {7.33}^{\circ}$, which meets the stability requirements.
## VI. CONCLUSION

In this paper, a modular and reconfigurable multi-UAV platform, MRBicopter, is proposed. Its transverse bicopter submodules can reconfigure their structure through the electromagnet docking mechanism and realize different flight states by changing the motor speeds and tilt angles to meet the needs of different tasks. To further improve the controllability of MRBicopter and expand its application fields, future work will address the following aspects:

1) A fuzzy PID control algorithm will be introduced to further improve the disturbance compensation capability of MRBicopter and the flight stability of the assembly.

2) Structurally, additional sensing and computing units, such as a lidar and an onboard computer (NUC), will be mounted on the UAV to expand the application scenarios of the MRBicopter.

## REFERENCES
[1] B. Mu and P. Chirarattananon, "Universal flying objects: Modular multirotor system for flight of rigid objects," IEEE Transactions on Robotics, 2019.

[2] D. Saldaña, B. Gabrich, G. Li, M. Yim, and V. Kumar, "ModQuad: The flying modular structure that self-assembles in midair," in 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018, pp. 691-698.

[3] T. Anzai, M. Zhao, M. Murooka, F. Shi, K. Okada, and M. Inaba, "Design, modeling and control of fully actuated 2D transformable aerial robot with 1 DoF thrust vectorable link module," in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019, pp. 2820-2826.

[4] S. K. H. Win, L. S. T. Win, D. Sufiyan, G. S. Soh, and S. Foong, "Dynamics and control of a collaborative and separating descent of samara autorotating wings," IEEE Robotics and Automation Letters, vol. 4, no. 3, pp. 3067-3074, 2019.

[5] H. Jia et al., "A quadrotor with a passively reconfigurable airframe for hybrid terrestrial locomotion," IEEE/ASME Transactions on Mechatronics, vol. 27, no. 6, pp. 4741-4751, Dec. 2022, doi: 10.1109/TMECH.2022.3164929.

[6] D. Saldaña, B. Gabrich, G. Li, M. Yim, and V. Kumar, "ModQuad: The flying modular structure that self-assembles in midair," in 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 2018, pp. 691-698, doi: 10.1109/ICRA.2018.8461014.

[7] M. Zhao, T. Anzai, F. Shi, X. Chen, K. Okada, and M. Inaba, "Design, modeling, and control of an aerial robot DRAGON: A dual-rotor-embedded multilink robot with the ability of multi-degree-of-freedom aerial transformation," IEEE Robotics and Automation Letters, vol. 3, no. 2, pp. 1176-1183, 2018.

[8] D. Saldaña, P. M. Gupta, and V. Kumar, "Design and control of aerial modules for inflight self-disassembly," IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 3410-3417, Oct. 2019, doi: 10.1109/LRA.2019.2926680.

[9] H. Yang, S. Park, J. Lee, J. Ahn, D. Son, and D. Lee, "LASDRA: Large-size aerial skeleton system with distributed rotor actuation," in 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018, pp. 7017-7023.

[10] M. Ryll, D. Bicego, and A. Franchi, "Modeling and control of FAST-Hex: A fully-actuated by synchronized-tilting hexarotor," in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016, pp. 1689-1694.

[11] A. Oosedo, S. Abiko, S. Narasaki, A. Kuno, A. Konno, and M. Uchiyama, "Large attitude change flight of a quad tilt rotor unmanned aerial vehicle," Advanced Robotics, vol. 30, no. 5, pp. 326-337, 2016.

[12] K. Kawasaki, Y. Motegi, M. Zhao, K. Okada, and M. Inaba, "Dual connected bi-copter with new wall trace locomotion feasibility that can fly at arbitrary tilt angle," in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2015, pp. 524-531.

[13] S. Bai and P. Chirarattananon, "SplitFlyer Air: A modular quadcopter that disassembles into two bicopters mid-air," IEEE/ASME Transactions on Mechatronics, 2022.

[14] Z. Wu et al., "Design, modeling and control of a composable and extensible drone with tilting rotors," in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 2022, pp. 12682-12689, doi: 10.1109/IROS47612.2022.9982090.

[15] S. Li, F. Liu, Y. Gao, J. Xiang, Z. Tu, and D. Li, "AirTwins: Modular bi-copters capable of splitting from their combined quadcopter in midair," IEEE Robotics and Automation Letters, vol. 8, no. 9, pp. 6068-6075, Sept. 2023, doi: 10.1109/LRA.2023.3301776.

[16] M. Zhao, K. Kawasaki, X. Chen, S. Noda, K. Okada, and M. Inaba, "Whole-body aerial manipulation by transformable multirotor with two-dimensional multilinks," in Proc. IEEE Int. Conf. Robot. Automat., 2017, pp. 5175-5182.

[17] B. Gabrich, D. Saldaña, V. Kumar, and M. Yim, "A flying gripper based on cuboid modular robots," in Proc. IEEE Int. Conf. Robot. Automat., 2018, pp. 7024-7030.

[18] J. Zhang et al., "Design and control of rapid in-air reconfiguration for modular quadrotors with full controllable degrees of freedom," IEEE Robotics and Automation Letters, vol. 9, no. 8, pp. 6920-6927, Aug. 2024, doi: 10.1109/LRA.2024.3416797.

[19] Ö. B. Albayrak, Y. Ersan, A. S. Bağbaşı, A. Turgut Başaranoğlu, and K. B. Arikan, "Design of a robotic bicopter," in 2019 7th International Conference on Control, Mechatronics and Automation (ICCMA), Delft, Netherlands, 2019, pp. 98-103, doi: 10.1109/ICCMA46720.2019.8988694.

[20] K. Kawasaki, Y. Motegi, M. Zhao, K. Okada, and M. Inaba, "Dual connected bi-copter with new wall trace locomotion feasibility that can fly at arbitrary tilt angle," in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 2015, pp. 524-531, doi: 10.1109/IROS.2015.7353422.
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/C84NGKXzwB/Initial_manuscript_tex/Initial_manuscript.tex
§ MRBICOPTER: MODULAR RECONFIGURABLE TRANSVERSE TILT-ROTOR BICOPTER SYSTEM

${1}^{\text{st}}$ Qianyao Pan, School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, China, panqianyaoupc@163.com

${2}^{\text{nd}}$ Xin Lu, School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, China, luxin_uestc@163.com

${3}^{\text{rd}}$ Weijun Yuan, School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, China, ywj861087955@163.com

${4}^{\text{th}}$ Fusheng Li*, School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, China, lifusheng@uestc.edu.cn

Abstract—This paper introduces a modular UAV (MRBicopter) that supports structural combination and reconfiguration. Each module contains a rotor tilting structure and an active docking mechanism. By separating and combining submodules, the UAV can match the requirements of different flight tasks in real time. First, we design the mechanical actuators that allow a physically connected assembly to fly collaboratively. Second, for the different reconfigured structures, we propose two generalized control strategies that realize independent attitude control through the reassignment of rotor speeds and tilt angles. The feasibility of the mechanical design and the control method is verified by simulation and by ground experiments under ambient wind interference.
Keywords—Reconfigurable and modular robots, bicopter, active docking mechanism, rotor tilting, wind interference, simulation.
§ I. INTRODUCTION

In recent years, multi-rotor UAVs have received much attention due to their simplicity, agility and versatility. Research on multi-rotor UAVs has extended to aerial maneuvering, collective behavior, multi-modal motion, and modular reconfigurable robots [1]-[5]. Among these, the advantages brought by modular and reconfigurable capabilities are increasingly evident. For example, in disaster relief, modular reconfigurable robots can adapt to different task scenarios through structural reconfiguration, such as cooperatively transporting large items [16] and performing search-and-rescue tracking in complex environments [17].
Much work has aimed to improve the stability and safety of modular reconfigurable UAVs. Reference [6] designed an airborne self-assembling flying robot, ModQuad, which is composed of flexible flight modules and can easily move in a three-dimensional environment. For airborne real-time separation, reference [7] proposed a deformable multi-link aerial robot consisting of link modules with a 1-DOF thrust-vectoring mechanism, together with a transformation planning method that accounts for the 1-DOF thrust-vector angle to keep the required force/moment minimal. Reference [8] proposed a magnet-based connection mechanism that uses a lightweight passive mechanism to dock and release in mid-air. For modular UAV application scenarios, reference [9] proposed a self-assembling robot based on autonomous modules that can fly together and assemble into rectangular structures in the air. Reference [10] proposed a full-attitude geometric control algorithm for a synchronously tilting hexarotor that realizes flight at arbitrary angles at the cost of efficiency. Reference [11] designed a tilt-rotor UAV whose tilting mechanism suppresses power dissipation and offers a wider tilt range. Reference [12] designed a structure connecting two bicopter modules that can fly along a wall at any angle. Reference [13] proposed splitting a quadcopter into two bicopters in mid-air in real time and developed the modular quadcopter SplitFlyer. Reference [14] developed a combinable and extensible tilt-rotor UAV (CEDTR) that can match different task scenarios by changing the combination and number of submodules. Reference [15] developed an airborne detachable quadrotor suitable for narrow gaps, improving the environmental adaptability of reconfigurable UAVs.
In this paper, we design a transverse tilt-rotor bicopter that can be combined and reconfigured, called the modular reconfigurable bicopter (MRBicopter). It can realize cooperative flight in the single-module state as well as combined multi-module flight control. The main contributions of this paper are threefold:
1) A modular reconfigurable bicopter with a rotor vector tilting structure and an active docking mechanism is designed and modeled, enabling structural reconfiguration to adapt to different task requirements.

2) The UAV dynamics model is built, and the control allocation and controller design are completed, realizing control of a single module and full-degree-of-freedom control of the assembly.

3) An ambient wind interference module is introduced in the simulation, making the simulation results closer to reality.

The structure of this paper is as follows: Section II introduces the structure of the MRBicopter. Section III describes its modeling. Section IV presents the control allocation and controller design. Section V demonstrates the simulation and test results. Conclusions are presented in Section VI.
*Corresponding author.

Fig.1: MRBicopter mechanical structure. (a) rotor vector tilting structure, (b) electromagnet docking mechanism, (c) submodule structure.
§ II. DESIGN

§ A. ROTOR VECTOR TILTING STRUCTURE

The rotor axis of a traditional UAV is fixed, so the direction of the lift force cannot be changed. Here, we adopt a rotor vector tilting structure (Fig.1(a)). Each rotor can tilt around the arm shaft and is separately fitted with a servo to control its tilt angle. This structure increases the number of control inputs of the assembly and enables full-degree-of-freedom control of the MRBicopter assembly.
§ B. ELECTROMAGNET DOCKING MECHANISM

For the docking device between modules, traditional reconfigurable UAVs use permanent magnets (NdFeB). This scheme responds slowly during separation and easily causes instability. Therefore, we designed a multi-lock electromagnet docking mechanism (Fig.1(b)). It uses circular electromagnets as the main actuators, switched on and off by program-controlled relays. It contains three locking nodes in total, each providing $5\;\mathrm{kg}$ of locking force.
§ C. SUBMODULE STRUCTURE

The MRBicopter consists of two cross-mounted bicopter submodules (Fig.1(c)). A single module can not only fly cooperatively on its own but also complete assembly reconfiguration through magnetic attraction.
§ III. DYNAMICS

§ A. ESTABLISHMENT OF THE FRAMES

In this section, four frames are introduced to define the flight attitude of the MRBicopter (Fig.2). The frame system is as follows.

1) World frame ${W}_{E}$. The world frame is a fixed coordinate system.

2) Assembly frame ${B}_{z}$. The origin of ${B}_{z}$ is located at the center of mass of the assembly. Its position relative to the world frame is ${P}_{W} = {\left\lbrack \begin{array}{lll} {x}_{w} & {y}_{w} & {z}_{w} \end{array}\right\rbrack }^{T}$; the relative velocity is ${V}_{W} = {\left\lbrack \begin{array}{lll} {V}_{WX} & {V}_{WY} & {V}_{WZ} \end{array}\right\rbrack }^{T}$; the angular velocity of the assembly is $\Omega = {\left\lbrack \begin{array}{lll} {\omega }_{x} & {\omega }_{y} & {\omega }_{z} \end{array}\right\rbrack }^{T}$; the attitude angle is $\Theta = {\left\lbrack \begin{array}{lll} \phi & \theta & \psi \end{array}\right\rbrack }^{T}$, where $\phi$ is the roll angle, $\theta$ is the pitch angle, and $\psi$ is the yaw angle.
Fig.2: MRBicopter frame system settings.

3) Submodule frame ${B}_{i}$. The origin of the submodule frame is located at the centroid of submodule $i$, with axes $\left\lbrack \begin{array}{lll} {X}_{bi} & {Y}_{bi} & {Z}_{bi} \end{array}\right\rbrack$. The Euler angles in the submodule frame ${B}_{i}$ are ${\Theta }_{i} = {\left\lbrack \begin{array}{lll} {\phi }_{i} & {\theta }_{i} & {\psi }_{i} \end{array}\right\rbrack }^{T}$.

4) Rotor frame ${P}_{ij}$. The origin of the rotor frame is located at the centroid of the rotor motor; the $z$ axis points in the rotor lift direction, and the $x$ axis points toward the body centroid. The tilt angle of the rotor is denoted ${\alpha }_{ij}$.
§ B. DERIVATION OF THE DYNAMICS AND KINEMATIC MODEL

In this section, we derive the attitude dynamics and kinematics equations of the MRBicopter, which are used in the control allocation and controller design of Section IV. The $i$-th submodule in the assembly has two rotors distributed on one axis, with rotor speed ${\varpi }_{ij}$. The lift force and rotation torque generated by the $j$-th rotor of module $i$ can be written as:
$$
{f}_{ij} = {K}_{T}{\varpi }_{ij}^{2} \tag{1}
$$

$$
{\tau }_{ij} = {K}_{Q}{\varpi }_{ij}^{2} \tag{2}
$$

where ${K}_{T}$ and ${K}_{Q}$ are the rotor thrust and torque constants, respectively.
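As a minimal numeric sketch of (1)-(2) (the constants $K_T$ and $K_Q$ below are illustrative values, not the paper's identified motor constants):

```python
# Sketch of Eqs. (1)-(2): rotor lift and drag torque from rotor speed.
# K_T and K_Q are illustrative, not the paper's identified constants.

K_T = 1.5e-5  # thrust constant [N/(rad/s)^2] (assumed)
K_Q = 2.0e-7  # torque constant [N*m/(rad/s)^2] (assumed)

def rotor_lift(omega: float) -> float:
    """f_ij = K_T * varpi_ij^2  (Eq. 1)."""
    return K_T * omega ** 2

def rotor_torque(omega: float) -> float:
    """tau_ij = K_Q * varpi_ij^2  (Eq. 2)."""
    return K_Q * omega ** 2

f = rotor_lift(800.0)     # lift at 800 rad/s
tau = rotor_torque(800.0)
```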
In the assembly frame ${B}_{z}$, the MRBicopter's total lift force ${F}_{B}$ is as follows:

$$
{F}_{ij}^{B} = {f}_{ij}\,{}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) E
$$

$$
{F}_{B} = \mathop{\sum }\limits_{{ij}}{F}_{ij}^{B} \tag{3}
$$

where $E = {\left\lbrack \begin{array}{lll} 0 & 0 & 1 \end{array}\right\rbrack }^{T}$ is the unit coefficient vector and ${}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) \in {SO}\left( 3\right)$ is the rotation matrix from the rotor frame ${P}_{ij}$ to the assembly frame ${B}_{z}$, which satisfies:

$$
{}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) = {}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {B}_{i}\right\rbrack }\,{}^{\left\{ {B}_{i}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) \tag{4}
$$

where ${}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {B}_{i}\right\rbrack } \in {SO}\left( 3\right)$ is the rotation matrix from the submodule frame ${B}_{i}$ to the assembly frame ${B}_{z}$, and ${}^{\left\{ {B}_{i}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) \in {SO}\left( 3\right)$ is the rotation matrix from the rotor frame ${P}_{ij}$ to the submodule frame ${B}_{i}$, which satisfies:
$$
\left\{ \begin{array}{l} {}^{\left\{ {B}_{i}\right\} }{R}_{\left\lbrack {P}_{i1}\right\rbrack }\left( {\alpha }_{i1}\right) = R\left( {{\sigma }_{1},{\alpha }_{i1}}\right) \\ {}^{\left\{ {B}_{i}\right\} }{R}_{\left\lbrack {P}_{i2}\right\rbrack }\left( {\alpha }_{i2}\right) = R\left( {{\sigma }_{2},{\alpha }_{i2}}\right) \end{array}\right. \tag{5}
$$

$$
R\left( {\sigma ,\alpha }\right) = \left\lbrack \begin{matrix} \cos \left( \sigma \right) & - \sin \left( \sigma \right) \cos \left( \alpha \right) & \sin \left( \alpha \right) \sin \left( \sigma \right) \\ \sin \left( \sigma \right) & \cos \left( \sigma \right) \cos \left( \alpha \right) & - \sin \left( \alpha \right) \cos \left( \sigma \right) \\ 0 & \sin \left( \alpha \right) & \cos \left( \alpha \right) \end{matrix}\right\rbrack \tag{6}
$$

where $\sigma$ is the angle between the arm axis and the $X$-axis. From the structure of the transverse bicopter, ${\sigma }_{1} = - \pi /2$ and ${\sigma }_{2} = \pi /2$.
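As a quick sanity check on (6) (a sketch, not the authors' code): $R(\sigma,\alpha)$ is the composition of a rotation by $\sigma$ about $z$ followed by a rotation by $\alpha$ about the rotated $x$-axis, and it is orthogonal for any angles:

```python
import math

def R(sigma: float, alpha: float):
    """Rotation matrix of Eq. (6), i.e. Rz(sigma) composed with Rx(alpha)."""
    cs, ss = math.cos(sigma), math.sin(sigma)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [cs,  -ss * ca,  ss * sa],
        [ss,   cs * ca, -cs * sa],
        [0.0,       sa,       ca],
    ]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [list(row) for row in zip(*A)]

# SO(3) check: R^T R should be the identity (here with sigma_1 = -pi/2).
M = matmul(transpose(R(-math.pi / 2, 0.3)), R(-math.pi / 2, 0.3))
```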
In the assembly frame ${B}_{z}$, the rotor torque ${\tau }_{a}$ of the MRBicopter is:

$$
{\tau }_{a} = \mathop{\sum }\limits_{{ij}}{}^{\left\{ {B}_{z}\right\} }{p}_{\left\lbrack {P}_{ij}\right\rbrack } \times {F}_{ij}^{B} \tag{7}
$$

Due to air resistance, the yaw moment $Q$ generated by the rotor propellers is:

$$
{Q}_{ij} = {\left( -1\right) }^{j - 1}{C}_{t}{\varpi }_{ij}^{2}E
$$

$$
Q = \mathop{\sum }\limits_{{ij}}{}^{\left\{ {B}_{z}\right\} }{R}_{\left\lbrack {P}_{ij}\right\rbrack }\left( {\alpha }_{ij}\right) {Q}_{ij} \tag{8}
$$

Finally, the MRBicopter's body torque $\tau$ can be written as:

$$
\tau = {\tau }_{a} + Q \tag{9}
$$

The dynamics equations of the MRBicopter are established using the Newton-Euler formulation:
$$
\tau = {J}_{S}\dot{\Omega } + \Omega \times {J}_{S}\Omega
$$

$$
\mathop{\sum }\limits_{i}{m}_{i}{}^{\left\{ {W}_{E}\right\} }{R}_{\left\lbrack {B}_{z}\right\rbrack }{\dot{V}}_{W} = {}^{\left\{ {W}_{E}\right\} }{R}_{\left\lbrack {B}_{z}\right\rbrack }{F}_{B} - \mathop{\sum }\limits_{i}{m}_{i}{gE} \tag{10}
$$
where ${m}_{i}$ is the mass of submodule $i$ and ${J}_{S}$ is the total inertia matrix of the assembly.

Fig.3: MRBicopter submodule (mode 1) and assembly (mode 2).

On this basis, a kinematic model is also established, in which the position kinematic equation is expressed as:
$$
{\dot{P}}_{W} = {V}_{W} \tag{11}
$$

The attitude kinematics equation is expressed as:

$$
\dot{\Theta } = {W}_{R} \cdot \Omega \tag{12}
$$
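A minimal numeric sketch of propagating the rotational dynamics in (10) and the attitude kinematics in (12) with explicit Euler steps (the inertia values and applied torque are illustrative, and near hover the Euler-rate matrix $W_R$ is approximated by the identity):

```python
# Euler-step propagation of Omega and Theta from Eqs. (10) and (12).
# J (diagonal inertia) and the applied torque are illustrative values only;
# near hover we approximate W_R by the identity.

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def step(theta, omega, tau, J, dt):
    """One explicit Euler step: J*dOmega = tau - Omega x (J*Omega); dTheta ~ Omega."""
    Jw = [J[i] * omega[i] for i in range(3)]
    gyro = cross(omega, Jw)                      # gyroscopic term of Eq. (10)
    d_omega = [(tau[i] - gyro[i]) / J[i] for i in range(3)]
    theta_next = [theta[i] + dt * omega[i] for i in range(3)]   # Eq. (12), W_R ~ I
    omega_next = [omega[i] + dt * d_omega[i] for i in range(3)]
    return theta_next, omega_next

theta, omega = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]
J = [0.02, 0.03, 0.04]               # kg*m^2, assumed
for _ in range(100):                 # 0.1 s of a constant 0.01 N*m roll torque
    theta, omega = step(theta, omega, [0.01, 0.0, 0.0], J, dt=1e-3)
```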
§ IV. CONTROL

This section introduces the controller design of the MRBicopter single module and assembly (Fig.3), the control allocation of the two flight modes, and the feedforward tilt-angle design of the assembly [18].

§ A. CONTROLLER DESIGN

Fig.4 shows the structural block diagram of the MRBicopter controller. The architecture is based on a cascaded double closed-loop PID control law, with the position controller as the outer loop and the attitude controller as the inner loop. As shown in Fig.3, the MRBicopter submodule (mode 1) is an underactuated system in flight, so we adopt a controller architecture similar to that of a traditional bicopter [19], while the MRBicopter assembly (mode 2) is an over-actuated system that can hover at any pitch angle.

The flight controller is divided into four channels and outputs four control quantities ${T}_{1},{T}_{2},{T}_{3},{T}_{4}$, which control the linear displacement and angular motion of the UAV dynamics model and are also used to decouple them. The controller takes the expected position ${P}_{des} = {\left\lbrack \begin{array}{lll} X & Y & Z \end{array}\right\rbrack }^{T}$ and the expected yaw angle $\psi$ as target control inputs. ${K}_{P}^{P},{K}_{I}^{P},{K}_{D}^{P}$ are the proportional, integral and derivative coefficients of the position loop, respectively. The position controller satisfies:
$$
\ddot{X} = {K}_{P}^{P}\left( {P - {P}_{des}}\right) + {K}_{I}^{P}{\int }_{0}^{t}\left( {P - {P}_{des}}\right) dt + {K}_{D}^{P}\frac{d\left( {P - {P}_{des}}\right) }{dt} \tag{13}
$$

The attitude controller takes the expected attitude angle ${\Theta }_{des} = {\left\lbrack \begin{array}{lll} \phi & \theta & \psi \end{array}\right\rbrack }^{T}$ as input and the control quantity $T = {\left\lbrack \begin{array}{lll} {T}_{2} & {T}_{3} & {T}_{4} \end{array}\right\rbrack }^{T}$ as output. ${K}_{P}^{\Theta },{K}_{I}^{\Theta },{K}_{D}^{\Theta }$ are the proportional, integral and derivative coefficients of the attitude loop, respectively, satisfying:

$$
T = {K}_{P}^{\Theta }\left( {\Theta - {\Theta }_{des}}\right) + {K}_{I}^{\Theta }{\int }_{0}^{t}\left( {\Theta - {\Theta }_{des}}\right) dt + {K}_{D}^{\Theta }\frac{d\left( {\Theta - {\Theta }_{des}}\right) }{dt} \tag{14}
$$
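A minimal sketch of one discrete PID channel of the form used in (13)-(14), chained in the outer/inner cascade structure; the gains and time step are illustrative, not the paper's tuned values:

```python
# Discrete PID on an error signal e, as in Eqs. (13)-(14).
# Gains are illustrative; the paper's tuned values are not given here.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def update(self, err):
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Cascade: the position (outer) loop produces the command tracked by the
# attitude (inner) loop, mirroring the double closed-loop structure of Fig.4.
pos_loop = PID(kp=1.0, ki=0.1, kd=0.5, dt=0.01)
att_loop = PID(kp=4.0, ki=0.2, kd=0.8, dt=0.01)

att_cmd = pos_loop.update(0.5)               # position error of 0.5 m
torque_cmd = att_loop.update(att_cmd - 0.0)  # attitude error vs. measured 0
```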
Fig.4: MRBicopter structural block diagram of the flight controller.
§ B. TILT-ANGLE FEEDFORWARD INITIALIZATION

The feedforward initial-value calculation solves for the approximate rotor tilt angle when the MRBicopter assembly hovers at an arbitrary pitch angle, which effectively reduces the overshoot and response time of the position control. Assume that all rotors produce the same lift when the assembly hovers at pitch angle $\theta$, and let the feedforward initial value of the tilt angle be ${\alpha }_{\text{offset}}$. As shown in Fig.5, the following force balance equations hold:

$$
\mathop{\sum }\limits_{i}{m}_{i}g\cos \theta = \mathop{\sum }\limits_{{ij}}{F}_{ij}^{B}\cos \left( {\alpha }_{\text{offset}}^{ij}\right)
$$

$$
\mathop{\sum }\limits_{i}{m}_{i}g\sin \theta = \mathop{\sum }\limits_{{ij}}{F}_{ij}^{B}\sin \left( {\alpha }_{\text{offset}}^{ij}\right) \tag{15}
$$

Since the resultant forces in the $x$ and $y$ directions are zero when the MRBicopter hovers, the feedforward initial value of the tilt angle is:

$$
{\alpha }_{\text{offset}}^{ij} = \theta \tag{16}
$$
§ C. CONTROL DISTRIBUTION

The control distribution module assigns the rotor throttle speeds and tilt angles in real time according to the mode and flight condition of the UAV, so as to control the attitude of the UAV.

Fig.5: MRBicopter hover force analysis diagram with pitch angle.
§ 1) SUBMODULE CONTROL DISTRIBUTION

The MRBicopter submodule can be regarded as a transverse bicopter, with the rotor tilt axes on the same straight line and the rotors symmetric. Reference [20] proposed a control method for a transverse dual-rotor UAV, so its control distribution can be transferred to the MRBicopter submodule. The speeds of the left and right rotors can be expressed as:

$$
{\varpi }_{L} = \sqrt{\frac{{T}_{1}}{2{K}_{T}} + {T}_{2}} \tag{17}
$$

$$
{\varpi }_{R} = \sqrt{\frac{{T}_{1}}{2{K}_{T}} - {T}_{2}}
$$

The tilt angles of the left and right rotors can be expressed as:

$$
{\alpha }_{L} = {C}_{1}{T}_{3} + {C}_{2}{T}_{4} \tag{18}
$$

$$
{\alpha }_{R} = {C}_{1}{T}_{3} - {C}_{2}{T}_{4}
$$

where ${C}_{1},{C}_{2}$ are constants.
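The submodule allocation in (17)-(18) can be sketched as follows ($K_T$, $C_1$, $C_2$ are illustrative constants, not the paper's identified values):

```python
import math

# Sketch of the submodule control allocation, Eqs. (17)-(18).
# K_T, C1, C2 are illustrative constants, not the paper's identified values.

K_T, C1, C2 = 1.5e-5, 1.0, 1.0

def allocate_submodule(T1, T2, T3, T4):
    """Map the four channel outputs to left/right rotor speeds and tilt angles."""
    base = T1 / (2.0 * K_T)
    w_L = math.sqrt(base + T2)   # Eq. (17)
    w_R = math.sqrt(base - T2)
    a_L = C1 * T3 + C2 * T4      # Eq. (18)
    a_R = C1 * T3 - C2 * T4
    return w_L, w_R, a_L, a_R

# Pure collective thrust with a small tilt command:
w_L, w_R, a_L, a_R = allocate_submodule(T1=10.0, T2=0.0, T3=0.1, T4=0.05)
```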
§ 2) ASSEMBLY CONTROL DISTRIBUTION

Taking the assembly mass center ${B}_{z}$ as the center, the axes ${X}_{W},{Y}_{W}$ divide the rotors into four groups (Fig.6): upper-left rotors ${P}_{k}\left( {k = 1,2,\cdots ,n}\right)$; lower-left rotors ${P}_{k}\left( {k = n + 1,\cdots ,{2n}}\right)$; upper-right rotors ${P}_{k}\left( {k = {2n} + 1,\cdots ,{3n}}\right)$; lower-right rotors ${P}_{k}\left( {k = {3n} + 1,\cdots ,{4n}}\right)$.
Reference [20] also proposed a mechanism for connecting two bicopter modules, in which each module groups two of the four propellers, similar to the MRBicopter assembly structure. The control distribution can therefore be extended here. The rotor speed distribution for the four groups can be written as:

$$
{\varpi }_{i}^{1} = \sqrt{\frac{{F}_{z}}{{4n}{K}_{T}} + {T}_{3} + {T}_{2}}\;\left( {i = 1,\cdots ,n}\right)
$$

$$
{\varpi }_{i}^{2} = \sqrt{\frac{{F}_{z}}{{4n}{K}_{T}} - {T}_{3} + {T}_{2}}\;\left( {i = n + 1,\cdots ,{2n}}\right) \tag{19}
$$

$$
{\varpi }_{i}^{3} = \sqrt{\frac{{F}_{z}}{{4n}{K}_{T}} + {T}_{3} - {T}_{2}}\;\left( {i = {2n} + 1,\cdots ,{3n}}\right)
$$

$$
{\varpi }_{i}^{4} = \sqrt{\frac{{F}_{z}}{{4n}{K}_{T}} - {T}_{3} - {T}_{2}}\;\left( {i = {3n} + 1,\cdots ,{4n}}\right)
$$
Fig.6: Mechanism model of the MRBicopter.

The MRBicopter assembly uses the ${X}_{W}$ axis to divide the left and right rotor tilt angles, which use different control distributions:

$$
{\alpha }_{i}^{1} = {\alpha }_{\text{offset}} + {C}_{1}{T}_{4} + {C}_{2}\frac{{F}_{Y}}{4n}\;\left( {i = 1,2,\cdots ,{2n}}\right) \tag{20}
$$

$$
{\alpha }_{i}^{2} = {\alpha }_{\text{offset}} - {C}_{1}{T}_{4} + {C}_{2}\frac{{F}_{Y}}{4n}\;\left( {i = {2n} + 1,\cdots ,{4n}}\right)
$$

where ${C}_{1},{C}_{2}$ are constants.
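The assembly allocation in (19)-(20) can be sketched for $n = 1$ (one rotor per group); $K_T$, $C_1$, $C_2$ and $\alpha_{\text{offset}}$ are illustrative values, not the paper's identified constants:

```python
import math

# Sketch of the assembly control allocation, Eqs. (19)-(20), for n = 1.
# K_T, C1, C2 and alpha_offset are illustrative values only.

K_T, C1, C2 = 1.5e-5, 1.0, 1.0

def allocate_assembly(Fz, Fy, T2, T3, T4, alpha_offset, n=1):
    base = Fz / (4 * n * K_T)
    speeds = [
        math.sqrt(base + T3 + T2),  # upper-left group
        math.sqrt(base - T3 + T2),  # lower-left group
        math.sqrt(base + T3 - T2),  # upper-right group
        math.sqrt(base - T3 - T2),  # lower-right group
    ]
    tilt_left = alpha_offset + C1 * T4 + C2 * Fy / (4 * n)   # Eq. (20)
    tilt_right = alpha_offset - C1 * T4 + C2 * Fy / (4 * n)
    return speeds, tilt_left, tilt_right

# Hover with a small yaw command T4 on top of the feedforward tilt:
speeds, a_left, a_right = allocate_assembly(
    Fz=20.0, Fy=0.0, T2=0.0, T3=0.0, T4=0.02, alpha_offset=0.1)
```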
§ V. SIMULATION & EXPERIMENT

This section presents the simulation and ground tests of the MRBicopter submodule and assembly. To make the simulation more realistic, we introduce an ambient wind interference model, which allows us to verify the robustness of the MRBicopter controller under ambient wind interference.
§ A. ENVIRONMENTAL WIND MODEL

To approximate the mathematical model of the atmospheric wind field, we divide the environmental wind into four parts: constant wind, gusts, gradient wind, and random wind.

Constant wind: the constant wind has a fixed value $\delta$, and its speed does not change. Its wind-speed model is:

$$
{V}_{f1} = \delta \tag{21}
$$
Gust: a gust is a periodic change of wind speed in atmospheric motion, characterized by a sudden increase in wind speed at some moment and a decay after a period of time. Its mathematical model can be expressed as a piecewise function:

$$
{V}_{f2} = \left\{ \begin{matrix} 0 & \left( {x < 0}\right) \\ \frac{{V}_{m}}{2}\left( {1 - \cos \left( \frac{\pi x}{{d}_{m}}\right) }\right) & \left( {0 \leq x \leq {d}_{m}}\right) \\ {V}_{m} & \left( {x > {d}_{m}}\right) \end{matrix}\right. \tag{22}
$$

where ${V}_{m}$ is the gust amplitude, ${d}_{m}$ is the gust length, and $x$ is the distance the gust has traveled.
Gradient wind: gradient wind refers to ambient wind whose speed increases from zero to a certain value over time. Its model is:

$$
{V}_{f3} = \frac{t - {t}_{1}}{{t}_{2} - {t}_{1}}{V}_{f3\_\max } \tag{23}
$$

where ${V}_{f3\_\max }$ is the peak gradient wind speed, and ${t}_{1},{t}_{2}$ are the start and end times of the gradient wind, respectively.
Random wind: random wind refers to the air disturbance generated by random changes in the atmosphere. Here, we use a random number generator to build its model:

$$
{V}_{f4} = {V}_{{f4}\_ \max }\,\pi \left( {-{10} \sim {10}}\right) \cos \left( {{\alpha t} + \beta }\right) \tag{24}
$$

where ${V}_{{f4}\_ \max }$ is the theoretical peak of the random wind; $\pi \left( {-{10} \sim {10}}\right)$ is a number drawn by the random number generator from the range $-{10} \sim {10}$; $\alpha$ is the average frequency of the random wind-speed fluctuation, ranging from ${0.5}$ to $2\;\mathrm{rad}/\mathrm{s}$; and $\beta$ is the phase offset of the random wind speed, ranging from ${0.1}\pi$ to $2\pi\;\mathrm{rad}$.
The total wind speed ${V}_{F}$ of the ambient wind field is therefore:

$$
{V}_{F} = {V}_{f1} + {V}_{f2} + {V}_{f3} + {V}_{f4} \tag{25}
$$
To simplify the calculation, the wind direction is taken opposite to the MRBicopter's flight direction, so the air resistance generated by the ambient wind field is:

$$
{F}_{w} = \frac{1}{2}{C\rho S}{\left( {V}_{F} + {v}_{UAV}\right) }^{2} \tag{26}
$$

where $C$ is the air resistance coefficient, taken as ${0.31}$; $\rho$ is the air density, ${1.29}\;\mathrm{kg}/{\mathrm{m}}^{3}$; $S$ is the windward area of the MRBicopter, ${31}\;{\mathrm{cm}}^{2}$; and ${v}_{UAV}$ is the flight speed.
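The composite wind model (21)-(25) and the drag force (26) can be sketched as follows; all numeric parameters (wind amplitudes, times, lengths) are illustrative, not the paper's test settings:

```python
import math
import random

# Sketch of the composite wind model, Eqs. (21)-(25), and drag, Eq. (26).
# All parameter values are illustrative, not the paper's test settings.

def wind_speed(t, x, rng):
    v_const = 3.0                                   # Eq. (21): delta
    Vm, dm = 4.0, 20.0                              # gust amplitude / length
    if x < 0:                                       # Eq. (22): piecewise gust
        v_gust = 0.0
    elif x <= dm:
        v_gust = Vm / 2.0 * (1.0 - math.cos(math.pi * x / dm))
    else:
        v_gust = Vm
    t1, t2, v3_max = 0.0, 10.0, 2.0                 # Eq. (23): gradient wind
    v_grad = (min(max(t, t1), t2) - t1) / (t2 - t1) * v3_max
    v4_max, alpha, beta = 0.05, 1.0, 0.5            # Eq. (24): random wind
    v_rand = v4_max * rng.uniform(-10, 10) * math.cos(alpha * t + beta)
    return v_const + v_gust + v_grad + v_rand       # Eq. (25)

def drag_force(v_wind, v_uav, C=0.31, rho=1.29, S=31e-4):
    """Eq. (26); windward area S = 31 cm^2 expressed in m^2."""
    return 0.5 * C * rho * S * (v_wind + v_uav) ** 2

rng = random.Random(0)
F = drag_force(wind_speed(t=5.0, x=30.0, rng=rng), v_uav=2.0)
```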
Fig.7: Simulation of MRBicopter hover under ambient wind interference: (a) MRBicopter single module; (b) MRBicopter assembly.

§ B. SIMULATION

Fig.7 shows the simulated three-axis attitude angles of the two MRBicopter structures hovering under ambient wind interference. The blue line is the roll angle tracking curve, the red line the pitch angle, and the green line the yaw angle.
§ 1) SUBMODULE

This experiment simulates hovering of the MRBicopter submodule under ambient wind interference, with the average ambient wind speed set to ${10.5}\;\mathrm{m}/\mathrm{s}$. The results are shown in Fig.7(a): the hover attitude-angle oscillation of a single module does not exceed 0.05 rad, which meets the design requirements.

§ 2) ASSEMBLY

This experiment simulates hovering of the MRBicopter assembly under ambient wind interference. The results are shown in Fig.7(b): an instantaneous oscillation of more than 0.4 rad occurs in the pitch and roll angles of the assembly at ${0.3}\;\mathrm{s}$; the adjustment is completed within ${0.2}\;\mathrm{s}$, and the subsequent oscillation amplitude does not exceed 0.1 rad. This shows that the assembly controller strongly suppresses ambient wind interference.
§ C. GROUND EXPERIMENT

To ensure test safety, the experiments were carried out on an indoor aircraft test platform, with a 1/6HP650 pneumatic industrial fan as the ambient wind source. The MRBicopter flight control module uses an STM32F427VIT6 as the main processor; the power supply is a LiPo battery (4S1P: 14.8 V, 3000 mAh); the docking module receives control signals over ZigBee serial communication and converts them into PWM signals to switch the relays; a ${2.4}\;\mathrm{GHz}$ 14-channel communication module is used for signal transmission and reception. The experimental results are shown in Fig.8.

Fig.8: MRBicopter ground experiment under ambient wind interference.
§ 1) SUBMODULE EXPERIMENT

Two MRBicopter submodules were built, and one was selected for the experiment. The results are shown in Fig.8(a): under wind interference, the average oscillation amplitude of the submodule's pitch and roll angles is $\pm {4.98}^{ \circ }$, and that of the yaw angle is $\pm {7.91}^{ \circ }$, which meets the stability requirements.

§ 2) ASSEMBLY EXPERIMENT

The MRBicopter assembly is composed of two submodules. The results are shown in Fig.8(b): under wind interference, the average oscillation amplitude of the assembly's pitch and roll angles is $\pm {5.12}^{ \circ }$, and that of the yaw angle is $\pm {7.33}^{ \circ }$, which meets the stability requirements.
§ CONCLUSION
|
| 414 |
+
|
| 415 |
+
In this paper, a modular and reconfigurable multi-UAV platform MRBicopter is proposed, in which the transverse twin rotor submodule can realize structural reconstruction through the electromagnet combination docking structure, and can realize different flight states by changing the motor speed and tilt angle to meet the needs of different tasks. In order to further improve the controllability of MRBicopter and expand its application fields, improvements will be made in the following aspects in the future:
|
| 416 |
+
|
| 417 |
+
1) The fuzzy PID control algorithm is proposed to further improve the interference compensation capability of MRBicopter and improve the stability of the flight process of the assembly.
2) Structurally, additional sensing and computing units, such as a lidar and an onboard NUC computer, will be mounted on the UAV to expand the application scenarios of MRBicopter.
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/Cox7GQmwAI/Initial_manuscript_md/Initial_manuscript.md
ADDED
# Dynamical analysis of a rumor propagation model considering media refutation and individual refutation*
${1}^{\text{st }}$ Wenqi Pan
College of Marine Electrical Engineering
Dalian Maritime University
Dalian, China
panwenqi07@163.com

${2}^{\text{nd }}$ Li-Ying Hao*
College of Marine Electrical Engineering
Dalian Maritime University
Dalian, China
haoliying_0305@163.com
Abstract-Refutation significantly affects the spread of rumors. Common means of refuting rumors include media intervention and individual efforts. While many scholars have explored these factors separately, few studies have examined both simultaneously. We propose a novel two-tier network rumor propagation model that integrates the influence of both media refutation and individual refutation on the propagation process, and we establish the existence and stability of its equilibrium points. Theoretical analysis shows that authoritative media refutations exert a broader and more substantial influence on rumor dissemination than individual refutations.
Index Terms-rumor propagation, rumor-refuting media, rumor refuters, stability
## I. INTRODUCTION
A rumor is a statement fabricated without factual basis and spread with a certain purpose by various means. With the exponential growth of technology and the widespread adoption of internet-based social networks, misinformation and harmful rumors can swiftly propagate across online platforms, threatening social cohesion and stability and disrupting people's daily lives and productive activities. Examples include the panic buying of salt triggered by the Fukushima Daiichi nuclear disaster [1] and the rumor that SHL-C could prevent COVID-19, which caused great psychological and physical harm to the public and seriously disturbed normal social order.

The propagation of rumors has attracted the attention of many scholars. Some compared the spread of rumors to the spread of infectious diseases in humans and applied epidemic models to rumor propagation [2]-[5]. Considering the influence of different propagation mechanisms, others have studied cross-propagation mechanisms [7] and education mechanisms [6]. Komi [8] established a rumor propagation model based on population education and a forgetting mechanism, and found that educated ignorant individuals are less likely than uneducated ones to become spreaders and more likely to become suppressors.

Many scholars have also considered the influence of different intervention mechanisms [9]-[11]. Zhu et al. [14] proposed rumor propagation models on homogeneous and heterogeneous networks and comprehensively studied the influence of a forced-silence function, time delay, and network topology on rumor propagation in social networks. The influence of time delay on the propagation process has likewise been studied extensively [15]-[18]. Cheng et al. [21] established an improved ${XY} - {ISR}$ rumor propagation model based on an interactive system, discussed the influence of different delays on rumor propagation, and proposed control strategies such as deleting posts, popular-science education, and immunization.

As the network environment grows more complex, some scholars have comprehensively considered the influence of multiple factors on rumor propagation over complex networks [22]-[24]. Considering the reaction of the ignorant on first hearing a rumor, Huo et al. [25] divided the individuals in the network into four groups, the ignorant, the trusting, the spreaders, and the uninterested, and proposed an ${SIbInIu}$ rumor propagation model on a complex network. Their theoretical analysis and simulation results show that the loss rate and suppression rate negatively affect the final scale of rumor spread.

In the existing literature, the combined impact of media refutation and individual refutation on a two-tier network rumor propagation model is rarely considered. Based on realistic assumptions, we expect the refutation effect of considering both factors to be better than that of considering either one alone. This paper presents a dynamical analysis of rumor propagation under both refutation factors.

The rest of this paper is organized as follows. Section II proposes a two-tier network rumor propagation model considering both rumor-refuting media and rumor-refuter groups. Section III discusses the existence and stability conditions of the equilibrium points. Finally, numerical simulations confirm the feasibility of the presented results.

---

This work was funded by the National Natural Science Foundation of China (51939001, 52171292) and the Dalian Outstanding Young Talents Program (2022RJ05).

---
## II. TWO-TIER NETWORK RUMOR PROPAGATION MODEL
In the two-tier rumor propagation model constructed in this paper, the media network layer consists of $M$ media websites, and the personal friendship network layer consists of $N$ individuals.

In the media network layer, media can be in three states: vulnerable media carrying no rumor information (denoted by $X$ ), affected media carrying rumor information (denoted by $Y$ ), and rumor-refuting media carrying refutation information (denoted by $Z$ ). When spreaders visit a vulnerable medium, they post or leave rumors on it, so the vulnerable medium becomes an affected medium. When rumor refuters visit an affected medium, they post or leave refutation information on it, turning the affected medium into a rumor-refuting medium.

In the personal network layer, individuals are categorized into four distinct groups: those who have never heard the rumors (denoted by $S$ ), those who actively spread rumors (denoted by $I$ ), those who do not believe the rumors and disseminate refutation information (denoted by $D$ ), and those who neither believe nor propagate any information (denoted by $R$ ). In the interaction of network nodes, a spreader who visits a vulnerable medium posts rumor information on it, so the vulnerable medium is infected and evolves into an affected medium. When an ignorant individual visits an affected medium, influenced by the rumor information, the individual becomes a spreader with a certain probability. Thus, rumors spread not only between people but also between individuals and online media. The basic assumptions of this paper are as follows:

Hypothesis 1: In the media network layer, considering that media websites have certain registration and cancellation rates, the number of vulnerable media entering the communication system per unit time is ${\Lambda }_{1}$ . Owing to benign competition among media, the three types of media websites $X, Y$ and $Z$ may move out of the communication system with probability ${\mu }_{1}$ . When spreaders visit vulnerable media and publish their own views and comments, the conversion rate to affected media is $\lambda$ . When rumor refuters visit affected media and publish refutation information on them, the affected media turn into rumor-refuting media with probability $\eta$ .

Hypothesis 2: In the personal interpersonal network layer, assume that individuals unaware of rumors enter the communication system at rate ${\Lambda }_{2}$ . Those who question the rumor but neither spread rumor information nor disseminate refutation transition to the immune state at rate ${\xi }_{2}$ . Spreaders who later find the information untrue become rumor refuters with probability $\delta$ . Spreaders who lose interest in rumors and cease both propagation and refutation transition to the immune state with probability $\theta$ . Rumor refuters affected by the environment or who lose interest in refutation become immune with probability $\phi$ . Additionally, individuals may exit the rumor-spreading network through migration at rate ${\mu }_{2}$ .

Hypothesis 3: In offline interaction, an ignorant individual becomes a spreader at rate $\alpha$ after contacting a spreader. If an ignorant individual believes and propagates rumors after visiting affected media, they become a spreader at rate $\beta$ . It is assumed that after an ignorant individual contacts rumor information (whether from people or from the media), they may realize that the information is untrue owing to their own experience or discernment. An ignorant individual who chooses to disseminate refutation information becomes a rumor refuter at rate ${\xi }_{1}$ .

Based on the above analysis, the rumor propagation process of the ${XYZ} - {SIDR}$ model established in this paper is shown in Fig. 1.

The meanings of the symbols in Fig. 1 are given in Table I.
TABLE I

DESCRIPTION OF PARAMETERS IN THE MODEL

<table><tr><td>$\mathbf{{Parameter}}$</td><td>Description</td></tr><tr><td>${\Lambda }_{1}$</td><td>The number of susceptible media entering the communication system per unit time.</td></tr><tr><td>${\Lambda }_{2}$</td><td>The number of ignorant individuals entering the communication system per unit time.</td></tr><tr><td>$\lambda$</td><td>The contact rate of susceptible media with spreaders.</td></tr><tr><td>$\eta$</td><td>The probability of affected media becoming rumor-refuting media.</td></tr><tr><td>$\alpha$</td><td>Rumor propagation rate of offline personal interaction.</td></tr><tr><td>$\beta$</td><td>Rumor propagation rate under two-tier network interaction.</td></tr><tr><td>$\delta$</td><td>The probability of spreading individuals becoming rumor-refuting individuals.</td></tr><tr><td>$\theta$</td><td>The probability of spreading individuals becoming immune individuals.</td></tr><tr><td>${\xi }_{1}$</td><td>The rate of ignorant individuals becoming rumor-refuting individuals.</td></tr><tr><td>${\xi }_{2}$</td><td>The rate of ignorant individuals becoming immune individuals.</td></tr><tr><td>$\phi$</td><td>The probability of rumor-refuting individuals becoming immune individuals.</td></tr><tr><td>${\mu }_{1}$</td><td>The rate at which media in the network move out of the propagation system.</td></tr><tr><td>${\mu }_{2}$</td><td>Migration rate of individuals in the personal friendship network layer.</td></tr></table>
Based on the above analysis, we construct the ${XYZ} - {SIDR}$ rumor propagation model:
$$
\left\{ \begin{array}{l} {X}^{\prime } = {\Lambda }_{1} - {\lambda XI} - {\mu }_{1}X, \\ {Y}^{\prime } = {\lambda XI} - {\eta Y} - {\mu }_{1}Y, \\ {Z}^{\prime } = {\eta Y} - {\mu }_{1}Z, \\ {S}^{\prime } = {\Lambda }_{2} - {\alpha SY} - {\beta SI} - \left( {{\xi }_{1} + {\xi }_{2}}\right) \left( {I + Y}\right) S - {\mu }_{2}S, \\ {I}^{\prime } = {\alpha SY} + {\beta SI} - \left( {\theta + \delta }\right) I - {\mu }_{2}I, \\ {D}^{\prime } = {\xi }_{1}S\left( {I + Y}\right) + {\delta I} - {\phi D} - {\mu }_{2}D, \\ {R}^{\prime } = {\xi }_{2}S\left( {I + Y}\right) + {\theta I} + {\phi D} - {\mu }_{2}R. \end{array}\right. \tag{1}
$$
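For readers who wish to experiment with the model, system (1) can be transcribed directly as a right-hand-side function. This is a minimal sketch; the parameter values below are hypothetical and serve only to illustrate the interface:

```python
import numpy as np

def xyz_sidr_rhs(t, u, p):
    """Right-hand side of system (1); state u = (X, Y, Z, S, I, D, R)."""
    X, Y, Z, S, I, D, R = u
    dX = p["Lam1"] - p["lam"] * X * I - p["mu1"] * X
    dY = p["lam"] * X * I - (p["eta"] + p["mu1"]) * Y
    dZ = p["eta"] * Y - p["mu1"] * Z
    dS = (p["Lam2"] - p["alpha"] * S * Y - p["beta"] * S * I
          - (p["xi1"] + p["xi2"]) * (I + Y) * S - p["mu2"] * S)
    dI = (p["alpha"] * S * Y + p["beta"] * S * I
          - (p["theta"] + p["delta"] + p["mu2"]) * I)
    dD = p["xi1"] * S * (I + Y) + p["delta"] * I - (p["phi"] + p["mu2"]) * D
    dR = p["xi2"] * S * (I + Y) + p["theta"] * I + p["phi"] * D - p["mu2"] * R
    return np.array([dX, dY, dZ, dS, dI, dD, dR])

# Hypothetical parameter values (illustration only).
p = dict(Lam1=0.5, Lam2=0.8, mu1=0.2, mu2=0.1, lam=0.3, eta=0.25,
         alpha=0.2, beta=0.15, theta=0.1, delta=0.05, xi1=0.05, xi2=0.05, phi=0.1)

# The rumor-free equilibrium E0 = (Λ1/μ1, 0, 0, Λ2/μ2, 0, 0, 0) should zero the RHS.
E0 = np.array([p["Lam1"] / p["mu1"], 0, 0, p["Lam2"] / p["mu2"], 0, 0, 0])
print(xyz_sidr_rhs(0.0, E0, p))
```

Evaluating the right-hand side at the rumor-free state returns the zero vector, which is a quick consistency check on the transcription of (1).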
Fig. 1. Schematic representation of the ${XYZ} - {SIDR}$ rumor spreading model.
Since the model represents a rumor propagation process, all parameters involved are nonnegative, and the initial conditions satisfy:
$$
X\left( 0\right) = {X}_{0} \geq 0, Y\left( 0\right) = {Y}_{0} \geq 0, Z\left( 0\right) = {Z}_{0} \geq 0,
$$

$$
S\left( 0\right) = {S}_{0} \geq 0, I\left( 0\right) = {I}_{0} \geq 0, D\left( 0\right) = {D}_{0} \geq 0, \tag{2}
$$

$$
R\left( 0\right) = {R}_{0} \geq 0.
$$
## III. MODEL ANALYSIS AND CALCULATION

### A. The basic reproduction number ${R}_{0}$

For system (1), the basic reproduction number ${R}_{0}$ is calculated as follows.

Let $\mathcal{X} = {\left( I, Y, R, D, S, X, Z\right) }^{T}$ ; equation (1) can be written as $\frac{d\mathcal{X}}{dt} = \mathcal{F}\left( \mathcal{X}\right) - \mathcal{V}\left( \mathcal{X}\right)$ , where
$$
\mathcal{F}\left( \mathcal{X}\right) = \left( \begin{matrix} {\alpha SY} + {\beta SI} \\ {\lambda XI} \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{matrix}\right) , \tag{3}
$$
$$
\mathcal{V}\left( \mathcal{X}\right) = \left( \begin{matrix} {\theta I} + {\delta I} + {\mu }_{2}I \\ {\eta Y} + {\mu }_{1}Y \\ - {\xi }_{2}{SI} - {\xi }_{2}{SY} - {\theta I} - {\phi D} + {\mu }_{2}R \\ - {\xi }_{1}{SI} - {\xi }_{1}{SY} - {\delta I} + {\phi D} + {\mu }_{2}D \\ {H}_{1} \\ - {\Lambda }_{1} + {\lambda XI} + {\mu }_{1}X \\ - {\eta Y} + {\mu }_{1}Z \end{matrix}\right) \tag{4}
$$
where ${H}_{1} = - {\Lambda }_{2} + {\alpha SY} + {\beta SI} + {\xi }_{1}{SI} + {\xi }_{1}{SY} + {\xi }_{2}{SI} + {\xi }_{2}{SY} + {\mu }_{2}S$ .

Therefore
$$
F = \left( \begin{matrix} \beta \frac{{\Lambda }_{2}}{{\mu }_{2}} & \alpha \frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 \\ \lambda \frac{{\Lambda }_{1}}{{\mu }_{1}} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{matrix}\right) , \tag{5}
$$

$$
V = \left( \begin{matrix} \theta + \delta + {\mu }_{2} & 0 & 0 & 0 \\ 0 & \eta + {\mu }_{1} & 0 & 0 \\ - {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} - \theta & - {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} & {\mu }_{2} & - \phi \\ - {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} - \delta & - {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & \phi + {\mu }_{2} \end{matrix}\right) \tag{6}
$$
By calculation, we obtain

$$
F{V}^{-1} = \left( \begin{matrix} \frac{\beta {\Lambda }_{2}}{{\mu }_{2}\left( {\theta + \delta + {\mu }_{2}}\right) } & \frac{\alpha {\Lambda }_{2}}{{\mu }_{2}\left( {\eta + {\mu }_{1}}\right) } & 0 & 0 \\ \frac{\lambda {\Lambda }_{1}}{{\mu }_{1}\left( {\theta + \delta + {\mu }_{2}}\right) } & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{matrix}\right) \tag{7}
$$
Hence, according to reference [27], the basic reproduction number of system (1) is obtained from the matrix $F{V}^{-1}$ as follows:

$$
{R}_{0} = \frac{\beta {\Lambda }_{2}}{{\mu }_{2}\left( {\theta + \delta + {\mu }_{2}}\right) } \tag{8}
$$
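The entries of $F{V}^{-1}$ in (7) can be cross-checked numerically. The sketch below uses hypothetical parameter values (not taken from the paper) and compares the computed entries against the closed-form expressions:

```python
import numpy as np

# Hypothetical parameter values (illustration only).
Lam1, Lam2, mu1, mu2 = 0.5, 0.8, 0.2, 0.1
lam, eta, alpha, beta = 0.3, 0.25, 0.2, 0.15
theta, delta, xi1, xi2, phi = 0.1, 0.05, 0.05, 0.05, 0.1

S0 = Lam2 / mu2  # S at the rumor-free equilibrium

# F and V assembled as in (5)-(6).
F = np.array([[beta * S0, alpha * S0, 0, 0],
              [lam * Lam1 / mu1, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0]])
V = np.array([[theta + delta + mu2, 0, 0, 0],
              [0, eta + mu1, 0, 0],
              [-xi2 * S0 - theta, -xi2 * S0, mu2, -phi],
              [-xi1 * S0 - delta, -xi1 * S0, 0, phi + mu2]])

M = F @ np.linalg.inv(V)  # the next-generation matrix F V^{-1}
print(M[0, 0], beta * Lam2 / (mu2 * (theta + delta + mu2)))
```

Because $V$ is block lower-triangular and the last two rows of $F$ vanish, only the upper-left $2 \times 2$ block of $F{V}^{-1}$ is nonzero, exactly as displayed in (7).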
## B. Existence of equilibrium

According to the system dynamics equation (1), we can calculate the equilibria $E = \left( {X, Y, Z, S, I, D, R}\right)$ . The equilibrium points of system (1) are ${E}_{0} = \left( {\frac{{\Lambda }_{1}}{{\mu }_{1}},0,0,\frac{{\Lambda }_{2}}{{\mu }_{2}},0,0,0}\right)$ and ${E}^{ * } = \left( {{X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{I}^{ * },{D}^{ * },{R}^{ * }}\right)$ , and the rumor-free equilibrium point ${E}_{0}$ always exists.

Theorem 1 The equilibrium point ${E}^{ * } = \left( {{X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{I}^{ * },{D}^{ * },{R}^{ * }}\right)$ exists if ${R}_{0} > 1$ and $\left( {\theta + \delta + {\mu }_{2}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) > {\beta \lambda }{\Lambda }_{2}$ .

Proof A rumor-prevailing equilibrium of system (1) satisfies:
$$
\left\{ \begin{array}{l} {\Lambda }_{1} - {\lambda XI} - {\mu }_{1}X = 0, \\ {\lambda XI} - {\eta Y} - {\mu }_{1}Y = 0, \\ {\eta Y} - {\mu }_{1}Z = 0, \\ {\Lambda }_{2} - {\alpha SY} - {\beta SI} - \left( {{\xi }_{1} + {\xi }_{2}}\right) \left( {I + Y}\right) S - {\mu }_{2}S = 0, \\ {\alpha SY} + {\beta SI} - \left( {\theta + \delta }\right) I - {\mu }_{2}I = 0, \\ {\xi }_{1}S\left( {I + Y}\right) + {\delta I} - {\phi D} - {\mu }_{2}D = 0, \\ {\xi }_{2}S\left( {I + Y}\right) + {\theta I} + {\phi D} - {\mu }_{2}R = 0. \end{array}\right. \tag{9}
$$
According to formula (9), ${X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{D}^{ * },{R}^{ * }$ are each expressed in terms of ${I}^{ * }$ and substituted into the fifth equation, giving

$$
a{I}^{2} + {bI} + c = 0 \tag{10}
$$
where

$$
a = \lambda \left( {\beta + {\xi }_{1}}\right) \left( {\eta + {\mu }_{1}}\right) \left( {\theta + \delta + {\mu }_{2}}\right) ,
$$

$$
b = \left( {\theta + \delta + {\mu }_{2}}\right) \left\lbrack {\left( {\eta + {\mu }_{1}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) + \lambda {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) }\right\rbrack - {\beta \lambda }{\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) ,
$$

$$
c = \left( {\eta + {\mu }_{1}}\right) \left\lbrack {{\mu }_{2}\lambda \left( {\theta + \delta + {\mu }_{2}}\right) - {\mu }_{1}\beta {\Lambda }_{2}}\right\rbrack - {\alpha \lambda }{\Lambda }_{1}{\Lambda }_{2}. \tag{11}
$$
It can be obtained by calculation that

$$
\begin{aligned} \Delta = {b}^{2} - {4ac} = \; & {\left\lbrack \lambda {\Lambda }_{1}\left( \alpha + {\xi }_{2}\right) + \left( \eta + {\mu }_{1}\right) \left( {\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda \right) \right\rbrack }^{2}{\left( \theta + \delta + {\mu }_{2}\right) }^{2} \\ & + {\left\lbrack \beta \lambda {\Lambda }_{2}\left( \eta + {\mu }_{1}\right) \right\rbrack }^{2} + {4\alpha }{\lambda }^{2}{\Lambda }_{1}{\Lambda }_{2}\left( {\beta + {\xi }_{1}}\right) \left( {\eta + {\mu }_{1}}\right) \left( {\theta + \delta + {\mu }_{2}}\right) \\ & - {2\beta \lambda }{\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) \left( {\theta + \delta + {\mu }_{2}}\right) \left\lbrack {\lambda {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) + \left( {\eta + {\mu }_{1}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) }\right\rbrack \\ & - {4\lambda }\left( {\beta + {\xi }_{1}}\right) {\left( \eta + {\mu }_{1}\right) }^{2}\left( {\theta + \delta + {\mu }_{2}}\right) \left\lbrack {{\mu }_{2}\lambda \left( {\theta + \delta + {\mu }_{2}}\right) - \beta {\mu }_{1}{\Lambda }_{2}}\right\rbrack \end{aligned} \tag{12}
$$
When ${R}_{0} > 1$ and $\left( {\theta + \delta + {\mu }_{2}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) > {\beta \lambda }{\Lambda }_{2}$ , the quadratic (10) has a unique positive root (the negative root is omitted):

$$
{I}^{ * } = \frac{{\beta \lambda }{\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) - {H}_{2}\left( {\theta + \delta + {\mu }_{2}}\right) + \sqrt{\Delta }}{{2\lambda }\left( {\beta + {\xi }_{1}}\right) \left( {\eta + {\mu }_{1}}\right) \left( {\theta + \delta + {\mu }_{2}}\right) } \tag{13}
$$
where ${H}_{2} = \lambda {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) + \left( {\eta + {\mu }_{1}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right)$ .

Therefore ${E}^{ * } = \left( {{X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{I}^{ * },{D}^{ * },{R}^{ * }}\right)$ , where
$$
{X}^{ * } = \frac{{\Lambda }_{1}}{\lambda {I}^{ * } + {\mu }_{1}}, \tag{14}
$$

$$
{Y}^{ * } = \frac{\lambda {\Lambda }_{1}{I}^{ * }}{\left( {\eta + {\mu }_{1}}\right) \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) }, \tag{15}
$$

$$
{Z}^{ * } = \frac{{\lambda \eta }{\Lambda }_{1}{I}^{ * }}{{\mu }_{1}\left( {\eta + {\mu }_{1}}\right) \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) }, \tag{16}
$$

$$
{S}^{ * } = \frac{{\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) }{T}, \tag{17}
$$

$$
{D}^{ * } = \frac{\lambda {\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) {I}^{*2} + \left\lbrack {\lambda {\Lambda }_{1}{\Lambda }_{2} + {\mu }_{1}\left( {\eta + {\mu }_{1}}\right) }\right\rbrack {I}^{ * }}{\left( {\phi + {\mu }_{2}}\right) T}, \tag{18}
$$

$$
{R}^{ * } = \frac{{\xi }_{2}{\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) {H}_{3} + \theta {H}_{4}}{\left( {{\mu }_{2} - \phi }\right) {H}_{4}} \tag{19}
$$

where ${H}_{3} = \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) \left\lbrack {\lambda {\Lambda }_{1} + \left( {\eta + {\mu }_{1}}\right) \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) }\right\rbrack$ and ${H}_{4} = \lambda \left( {\beta + {\xi }_{1}}\right) \left( {\eta + {\mu }_{1}}\right) {I}^{*2} + \left\lbrack {\lambda {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) + \left( {\eta + {\mu }_{1}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) }\right\rbrack {I}^{ * } + {\mu }_{2}\lambda \left( {\eta + {\mu }_{1}}\right)$ .
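The root in (13) can be checked numerically against the quadratic (10). The following minimal sketch uses hypothetical parameter values (not taken from the paper) and also verifies that the expanded discriminant (12) agrees with ${b}^{2} - {4ac}$:

```python
import numpy as np

# Hypothetical parameter values (illustration only).
Lam1, Lam2, mu1, mu2 = 0.5, 0.8, 0.2, 0.1
lam, eta, alpha, beta = 0.3, 0.25, 0.2, 0.15
theta, delta, xi1, xi2 = 0.1, 0.05, 0.05, 0.05

k = theta + delta + mu2   # shorthand for θ + δ + μ2
m = eta + mu1             # shorthand for η + μ1

# Coefficients of a I^2 + b I + c = 0 from (11).
a = lam * (beta + xi1) * m * k
b = k * (m * (mu1 * beta + mu1 * xi1 + mu2 * lam) + lam * Lam1 * (alpha + xi2)) \
    - beta * lam * Lam2 * m
c = m * (mu2 * lam * k - mu1 * beta * Lam2) - alpha * lam * Lam1 * Lam2

H2 = lam * Lam1 * (alpha + xi2) + m * (mu1 * beta + mu1 * xi1 + mu2 * lam)
disc = b**2 - 4 * a * c

# The same discriminant, expanded term by term as in (12).
disc12 = (H2 * k)**2 + (beta * lam * Lam2 * m)**2 \
    + 4 * alpha * lam**2 * Lam1 * Lam2 * (beta + xi1) * m * k \
    - 2 * beta * lam * Lam2 * m * k * H2 \
    - 4 * lam * (beta + xi1) * m**2 * k * (mu2 * lam * k - mu1 * beta * Lam2)

I_star = (beta * lam * Lam2 * m - H2 * k + np.sqrt(disc)) / (2 * a)  # eq. (13)
print(I_star)
```

With these values $c < 0$, so the quadratic has one positive and one negative root, matching the statement that the negative solution is omitted.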
## C. Stability of equilibrium

Theorem 2 The rumor-free equilibrium point ${E}_{0} = \left( {\frac{{\Lambda }_{1}}{{\mu }_{1}},0,0,\frac{{\Lambda }_{2}}{{\mu }_{2}},0,0,0}\right)$ is locally asymptotically stable if ${R}_{0} < 1$ and unstable if ${R}_{0} > 1$ .

Proof The Jacobian matrix of system (1) at ${E}_{0}$ is

$J\left( {E}_{0}\right) =$
$$
\left( \begin{matrix} - {\mu }_{1} & 0 & 0 & 0 & - \lambda \frac{{\Lambda }_{1}}{{\mu }_{1}} & 0 & 0 \\ 0 & - \eta - {\mu }_{1} & 0 & 0 & \lambda \frac{{\Lambda }_{1}}{{\mu }_{1}} & 0 & 0 \\ 0 & \eta & - {\mu }_{1} & 0 & 0 & 0 & 0 \\ 0 & {H}_{5} & 0 & - {\mu }_{2} & {H}_{6} & 0 & 0 \\ 0 & \alpha \frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {H}_{7} & 0 & 0 \\ 0 & {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} + \delta & {H}_{8} & 0 \\ 0 & {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} + \theta & \phi & - {\mu }_{2} \end{matrix}\right)
$$
where ${H}_{5} = - \left( {\alpha + {\xi }_{1} + {\xi }_{2}}\right) \frac{{\Lambda }_{2}}{{\mu }_{2}},\;{H}_{6} = - \left( {\beta + {\xi }_{1} + {\xi }_{2}}\right) \frac{{\Lambda }_{2}}{{\mu }_{2}},\;{H}_{7} = \beta \frac{{\Lambda }_{2}}{{\mu }_{2}} - \left( {\theta + \delta + {\mu }_{2}}\right) ,\;{H}_{8} = - \left( {\phi + {\mu }_{2}}\right) .$
The characteristic equation of matrix $J\left( {E}_{0}\right)$ is

$\left| {J\left( {E}_{0}\right) - {hE}}\right| =$

$$
\left| \begin{matrix} {T}_{1} & 0 & 0 & 0 & - \lambda \frac{{\Lambda }_{1}}{{\mu }_{1}} & 0 & 0 \\ 0 & {T}_{2} & 0 & 0 & \lambda \frac{{\Lambda }_{1}}{{\mu }_{1}} & 0 & 0 \\ 0 & \eta & {T}_{1} & 0 & 0 & 0 & 0 \\ 0 & {T}_{3} & 0 & {T}_{5} & {T}_{4} & 0 & 0 \\ 0 & \alpha \frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {T}_{6} & 0 & 0 \\ 0 & {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} + \delta & {T}_{7} & 0 \\ 0 & {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} + \theta & \phi & {T}_{5} \end{matrix}\right|
$$

$$
= {\left( {\mu }_{1} + h\right) }^{2}{\left( {\mu }_{2} + h\right) }^{2}\left( {\phi + {\mu }_{2} + h}\right) \left( {\eta + {\mu }_{1} + h}\right) \left\lbrack {\beta \frac{{\Lambda }_{2}}{{\mu }_{2}} - \left( {\theta + \delta + {\mu }_{2}}\right) - h}\right\rbrack = 0
$$

where ${T}_{1} = - {\mu }_{1} - h,\;{T}_{2} = - \eta - {\mu }_{1} - h,\;{T}_{3} = - \left( {\alpha + {\xi }_{1} + {\xi }_{2}}\right) \frac{{\Lambda }_{2}}{{\mu }_{2}},\;{T}_{4} = - \left( {\beta + {\xi }_{1} + {\xi }_{2}}\right) \frac{{\Lambda }_{2}}{{\mu }_{2}},\;{T}_{5} = - {\mu }_{2} - h,\;{T}_{6} = \beta \frac{{\Lambda }_{2}}{{\mu }_{2}} - \left( {\theta + \delta + {\mu }_{2}}\right) - h,\;{T}_{7} = - \left( {\phi + {\mu }_{2}}\right) - h.$
Therefore, the characteristic roots of $J\left( {E}_{0}\right)$ are:

$$
{h}_{01} = - {\mu }_{1} < 0,\;{h}_{02} = - {\mu }_{2} < 0,\;{h}_{03} = - \left( {\phi + {\mu }_{2}}\right) < 0,
$$

$$
{h}_{04} = - \left( {\eta + {\mu }_{1}}\right) < 0,\;{h}_{05} = \left( {\theta + \delta + {\mu }_{2}}\right) \left( {{R}_{0} - 1}\right) , \tag{20}
$$

and ${h}_{05} < 0$ whenever ${R}_{0} < 1$ .
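The roots in (20) can be spot-checked numerically. The sketch below uses hypothetical parameter values; $\alpha$ is set to $0$ so that the $Y$-to-$I$ coupling drops out, the Jacobian effectively triangularizes, and its spectrum reduces exactly to the listed roots:

```python
import numpy as np

# Hypothetical parameter values; alpha = 0 removes the Y -> I coupling so that
# the spectrum of J(E0) matches the roots listed in (20) exactly.
Lam1, Lam2, mu1, mu2 = 0.5, 0.8, 0.2, 0.1
lam, eta, alpha, beta = 0.3, 0.25, 0.0, 0.01
theta, delta, xi1, xi2, phi = 0.1, 0.05, 0.05, 0.05, 0.1
X0, S0 = Lam1 / mu1, Lam2 / mu2   # rumor-free equilibrium values
k = theta + delta + mu2

# Jacobian of system (1) at E0, state ordering (X, Y, Z, S, I, D, R).
J = np.array([
    [-mu1, 0, 0, 0, -lam * X0, 0, 0],
    [0, -(eta + mu1), 0, 0, lam * X0, 0, 0],
    [0, eta, -mu1, 0, 0, 0, 0],
    [0, -(alpha + xi1 + xi2) * S0, 0, -mu2, -(beta + xi1 + xi2) * S0, 0, 0],
    [0, alpha * S0, 0, 0, beta * S0 - k, 0, 0],
    [0, xi1 * S0, 0, 0, xi1 * S0 + delta, -(phi + mu2), 0],
    [0, xi2 * S0, 0, 0, xi2 * S0 + theta, phi, -mu2],
])

eigs = np.sort(np.linalg.eigvals(J).real)
expected = np.sort(np.array([-mu1, -mu1, -mu2, -mu2, -(phi + mu2),
                             -(eta + mu1), beta * S0 - k]))
print(eigs)
```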
According to the Routh-Hurwitz stability criterion, all characteristic roots are negative when ${R}_{0} < 1$ , so the equilibrium point ${E}_{0} = \left( {\frac{{\Lambda }_{1}}{{\mu }_{1}},0,0,\frac{{\Lambda }_{2}}{{\mu }_{2}},0,0,0}\right)$ is locally asymptotically stable if ${R}_{0} < 1$ and unstable if ${R}_{0} > 1$ .
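The local stability claim of Theorem 2 can also be probed by direct simulation. This is a minimal sketch with hypothetical parameter values chosen so that ${R}_{0} < 1$ (and so that the full Jacobian is stable); a small perturbation with a few spreaders should decay back toward ${E}_{0}$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters with R0 = beta*Lam2 / (mu2*(theta+delta+mu2)) < 1.
Lam1, Lam2, mu1, mu2 = 0.5, 0.8, 0.2, 0.1
lam, eta, alpha, beta = 0.01, 0.25, 0.2, 0.01
theta, delta, xi1, xi2, phi = 0.1, 0.05, 0.05, 0.05, 0.1
R0 = beta * Lam2 / (mu2 * (theta + delta + mu2))

def rhs(t, u):
    X, Y, Z, S, I, D, R = u
    return [Lam1 - lam * X * I - mu1 * X,
            lam * X * I - (eta + mu1) * Y,
            eta * Y - mu1 * Z,
            Lam2 - alpha * S * Y - beta * S * I
                 - (xi1 + xi2) * (I + Y) * S - mu2 * S,
            alpha * S * Y + beta * S * I - (theta + delta + mu2) * I,
            xi1 * S * (I + Y) + delta * I - (phi + mu2) * D,
            xi2 * S * (I + Y) + theta * I + phi * D - mu2 * R]

# Perturb E0 with a few spreaders and integrate forward in time.
u0 = [Lam1 / mu1, 0, 0, Lam2 / mu2, 0.01, 0, 0]
sol = solve_ivp(rhs, (0, 300), u0, rtol=1e-8, atol=1e-10)
X_end, Y_end, Z_end, S_end, I_end, D_end, R_end = sol.y[:, -1]
print(R0, I_end)
```

The trajectory returns to the rumor-free state, consistent with the theorem for this parameter set.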
Theorem 3 The equilibrium point ${E}^{ * } = \left( {{X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{I}^{ * },{D}^{ * },{R}^{ * }}\right)$ is locally asymptotically stable if ${R}_{0} > 1$ and $\beta {\Lambda }_{2} < {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) \left( {\theta + \delta + {\mu }_{2}}\right)$ ; otherwise, ${E}^{ * }$ is unstable.

Proof The Jacobian matrix of system (1) at ${E}^{ * } = \left( {{X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{I}^{ * },{D}^{ * },{R}^{ * }}\right)$ is

$J\left( {E}^{ * }\right) =$
|
| 300 |
+
|
| 301 |
+
$$
|
| 302 |
+
\left( \begin{matrix} {A}_{1} & 0 & 0 & 0 & - \lambda {X}^{ * } & 0 & 0 \\ \lambda {I}^{ * } & {A}_{2} & 0 & 0 & \lambda {X}^{ * } & 0 & 0 \\ 0 & \eta & - {\mu }_{1} & 0 & 0 & 0 & 0 \\ 0 & {A}_{3} & 0 & {A}_{4} & {A}_{8} & 0 & 0 \\ 0 & \alpha {S}^{ * } & 0 & {A}_{5} & {A}_{9} & 0 & 0 \\ 0 & {\xi }_{2}{S}^{ * } & 0 & {A}_{6} & {\xi }_{2}{S}^{ * } + \theta & - {\mu }_{2} & - {\mu }_{2} \\ 0 & {\xi }_{1}{S}^{ * } & 0 & {A}_{7} & {\xi }_{1}{S}^{ * } + \delta & 0 & {A}_{10} \end{matrix}\right)
|
| 303 |
+
$$
|
| 304 |
+
|
| 305 |
+
where ${A}_{1} = \lambda {I}^{ * } - {\mu }_{1}$, ${A}_{2} = - \eta - {\mu }_{1}$, ${A}_{3} = - \left( {\alpha + {\xi }_{1} + {\xi }_{2}}\right) {S}^{ * }$, ${A}_{4} = - \alpha {Y}^{ * } - \beta {I}^{ * }$, ${A}_{5} = \alpha {Y}^{ * } + \beta {I}^{ * }$, ${A}_{6} = {\xi }_{2}\left( {{I}^{ * } + {Y}^{ * }}\right)$, ${A}_{7} = {\xi }_{1}\left( {{I}^{ * } + {Y}^{ * }}\right)$, ${A}_{8} = - \left( {\beta + {\xi }_{1} + {\xi }_{2}}\right) {S}^{ * }$, ${A}_{9} = \beta {S}^{ * } - \left( {\theta + \delta + {\mu }_{2}}\right)$, ${A}_{10} = - \left( {\phi + {\mu }_{2}}\right)$.
The characteristic equation of the matrix $J\left( {E}^{ * }\right)$ is

$$
\left| {J\left( {E}^{ * }\right) - {hE}}\right| = \left| \begin{matrix} {B}_{1} & 0 & 0 & 0 & - \lambda {X}^{ * } & 0 & 0 \\ \lambda {I}^{ * } & {B}_{2} & 0 & 0 & \lambda {X}^{ * } & 0 & 0 \\ 0 & \eta & - {\mu }_{1} - h & 0 & 0 & 0 & 0 \\ 0 & - \left( {\alpha + {\xi }_{1} + {\xi }_{2}}\right) {S}^{ * } & 0 & {B}_{3} & {B}_{7} & 0 & 0 \\ 0 & \alpha {S}^{ * } & 0 & {B}_{4} & {B}_{8} & 0 & 0 \\ 0 & {\xi }_{2}{S}^{ * } & 0 & {B}_{5} & {\xi }_{2}{S}^{ * } + \theta & {B}_{9} & - {\mu }_{2} \\ 0 & {\xi }_{1}{S}^{ * } & 0 & {B}_{6} & {\xi }_{1}{S}^{ * } + \delta & 0 & {B}_{10} \end{matrix}\right|
$$

where ${B}_{1} = \lambda {I}^{ * } - {\mu }_{1} - h$, ${B}_{2} = - \eta - {\mu }_{1} - h$, ${B}_{3} = - \alpha {Y}^{ * } - \beta {I}^{ * } - h$, ${B}_{4} = \alpha {Y}^{ * } + \beta {I}^{ * }$, ${B}_{5} = {\xi }_{2}\left( {{I}^{ * } + {Y}^{ * }}\right)$, ${B}_{6} = {\xi }_{1}\left( {{I}^{ * } + {Y}^{ * }}\right)$, ${B}_{7} = - \left( {\beta + {\xi }_{1} + {\xi }_{2}}\right) {S}^{ * }$, ${B}_{8} = \beta {S}^{ * } - \left( {\theta + \delta + {\mu }_{2}}\right) - h$, ${B}_{9} = - {\mu }_{2} - h$, ${B}_{10} = - \left( {\phi + {\mu }_{2}}\right) - h$.
Thus, we can obtain

$$
\left| {J\left( {E}^{ * }\right) - {hE}}\right| = \left( {{\mu }_{1} + h}\right) \left( {{\mu }_{2} + h}\right) \left( {\phi + {\mu }_{2} + h}\right) \left( {\eta + {\mu }_{1} + h}\right) \left( {\lambda {I}^{ * } + {\mu }_{1} + h}\right) \left\lbrack {\beta {S}^{ * } - \left( {\theta + \delta + {\mu }_{2}}\right) - h}\right\rbrack G,
$$

where $G = - \left\lbrack {\alpha {Y}^{ * } + \beta {I}^{ * } + \left( {{\xi }_{1} + {\xi }_{2}}\right) \left( {{I}^{ * } + {Y}^{ * }}\right) + {\mu }_{2}}\right\rbrack - h$.
Therefore, the characteristic roots of the characteristic equation of $J\left( {E}^{ * }\right)$ are:

$$
{h}_{01} = - {\mu }_{1} < 0,\;{h}_{02} = - {\mu }_{2} < 0, \tag{21}
$$

$$
{h}_{03} = - \left( {\phi + {\mu }_{2}}\right) < 0,\;{h}_{04} = - \left( {\eta + {\mu }_{1}}\right) < 0, \tag{22}
$$

$$
{h}_{05} = - \left\lbrack {\alpha {Y}^{ * } + \beta {I}^{ * } + \left( {{\xi }_{1} + {\xi }_{2}}\right) \left( {{I}^{ * } + {Y}^{ * }}\right) + {\mu }_{2}}\right\rbrack < 0, \tag{23}
$$

$$
{h}_{06} = \beta {S}^{ * } - \left( {\theta + \delta + {\mu }_{2}}\right) . \tag{24}
$$

Then, substituting ${S}^{ * }$ into ${h}_{06}$ gives

$$
{h}_{06} = \frac{\beta {\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) }{\lambda \left( {\beta + {\xi }_{1}}\right) \left( {\eta + {\mu }_{1}}\right) {I}^{*2} + {C}_{1} + {\mu }_{2}\lambda \left( {\eta + {\mu }_{1}}\right) } - \left( {\theta + \delta + {\mu }_{2}}\right),
$$

where ${C}_{1} = \left\lbrack {\lambda {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) + \left( {\eta + {\mu }_{1}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) }\right\rbrack {I}^{ * }$.
## IV. NUMERICAL SIMULATION
In this section, we assign reasonable values to the parameters of system (1) and verify the results of our theoretical analysis through numerical simulations. The parameter values are chosen partly by analogy with similar real-world cases and partly by reference to the relevant literature.
Let ${\Lambda }_{1} = 1,{\Lambda }_{2} = 1,\lambda = {0.01},\eta = {0.3},\alpha = {0.01},\beta = {0.01},\theta = {0.2},\delta = {0.2},\phi = {0.15},{\xi }_{1} = {0.1},{\xi }_{2} = {0.1},{\mu }_{1} = {0.2},{\mu }_{2} = {0.2}$. This yields ${R}_{0} = {0.0833} < 1$, so the rumor-free equilibrium point ${E}_{0}$ is stable.

|
| 372 |
+
|
| 373 |
+
Fig 2. Stability of equilibrium point ${E}_{0}$ .
|
| 374 |
+
|
| 375 |
+
Fig. 2 shows how the density of each subclass in the model changes with time when ${R}_{0} = {0.0833} < 1$. At first, the numbers of unaffected media and ignorant individuals decrease gradually at similar rates and finally stabilize. Because few individuals move into the system and many move out, the numbers of affected media and spreaders decrease gradually at similar rates and finally reach 0. The numbers of rumor refuting media and rumor refuters first increase along with the numbers of affected media and spreaders, then decrease over time and finally reach 0. The number of immune individuals increases with the numbers of spreaders and rumor refuters, and its growth rate gradually slows until it stabilizes. That is, the rumor dies out and the system reaches the stable rumor-free equilibrium point.
Let ${\Lambda }_{1} = 1,{\Lambda }_{2} = 1,\lambda = {0.2},\eta = {0.3},\alpha = {0.5},\beta = {0.6},\theta = {0.4},\delta = {0.4},\phi = {0.15},{\xi }_{1} = {0.2},{\xi }_{2} = {0.2},{\mu }_{1} = {0.2},{\mu }_{2} = {0.2}$. This yields ${R}_{0} = 3 > 1$, so the equilibrium point ${E}^{ * }$ is stable, as shown in Fig. 3.
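A small sketch of how these two threshold values can be reproduced. The closed form of ${R}_{0}$ is not restated in this section, so the expression ${R}_{0} = \beta {\Lambda }_{2}/\left( {{\mu }_{2}\left( {\theta + \delta + {\mu }_{2}}\right) }\right)$ used below is our inference from the root ${h}_{05}$ in (20); it reproduces both reported values, ${0.0833}$ and $3$.

```python
# Basic reproduction number, assuming the form implied by Eq. (20):
# R0 = beta * Lambda2 / (mu2 * (theta + delta + mu2)).
def r0(beta, lambda2, mu2, theta, delta):
    return beta * lambda2 / (mu2 * (theta + delta + mu2))

# Parameter set of Fig. 2 (rumor-free case).
r0_free = r0(beta=0.01, lambda2=1.0, mu2=0.2, theta=0.2, delta=0.2)
# Parameter set of Fig. 3 (rumor-prevailing case).
r0_endemic = r0(beta=0.6, lambda2=1.0, mu2=0.2, theta=0.4, delta=0.4)

print(round(r0_free, 4), round(r0_endemic, 4))  # 0.0833 3.0
```

The first set falls below the threshold (rumor dies out) and the second exceeds it (rumor persists), matching the two simulation scenarios.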

|
| 380 |
+
|
| 381 |
+
Fig 3. Stability of equilibrium point ${E}^{ * }$ .
|
| 382 |
+
|
| 383 |
+
In Fig. 3, consider first the media network layer. Because few new media enter the communication system and some unaffected media become affected media, the number of unaffected media decreases gradually and stabilizes after a period of time. The number of affected media initially increases as unaffected media are converted; over time, most affected media are transformed into rumor refuting media, so the number of affected media then decreases and stabilizes. As affected media change into rumor refuting media, the number of rumor refuting media increases and gradually stabilizes.
Fig. 3 also illustrates that, within the individual interpersonal network layer, the number of ignorant individuals begins to decline. Initially, the low influx of new individuals and a fixed rate of departures contribute to this decrease. Additionally, some ignorant individuals become spreaders, while others become immune individuals or rumor refuters. Consequently, the number of spreaders increases as ignorant individuals are converted. Over time, as spreaders become immune individuals or rumor refuters, the number of spreaders gradually decreases and eventually stabilizes. As more spreaders and ignorant individuals become rumor refuters, the number of rumor refuters rises and then stabilizes. Simultaneously, as some ignorant individuals, spreaders, and rumor refuters become immune, the number of immune individuals increases significantly and gradually stabilizes. Ultimately, the model reaches a steady state, with each group's number stabilizing over time.
Fig. 4 to Fig. 7 depict, for ${\Lambda }_{1} = 1,{\Lambda }_{2} = 1,\eta = {0.3},\theta = {0.4},\phi = {0.15},{\xi }_{1} = {0.2},{\xi }_{2} = {0.5},{\mu }_{1} = {0.2},{\mu }_{2} = {0.2}$, the evolution of the densities of $X\left( t\right)$, $Y\left( t\right)$ and $S\left( t\right)$ under different parameter values.
Fig. 4 and Fig. 5 describe the effect of the parameter $\lambda$ on the densities of $X\left( t\right)$ and $Y\left( t\right)$, respectively. The parameter $\lambda$ represents the probability that unaffected media are transformed into affected media. The figures show that $\lambda$ is negatively correlated with the density of $X\left( t\right)$ and positively correlated with the density of $Y\left( t\right)$. That is, as $\lambda$ increases, the rate of transformation from unaffected to affected media increases: the number of unaffected media decreases and the number of affected media increases, accelerating the spread of rumors in the media network layer.
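This sensitivity experiment can be sketched numerically. The sketch below is ours, not the paper's code: the right-hand sides are reconstructed from Hypotheses 1-3 rather than taken verbatim from system (1), and the initial state and the values of $\alpha$, $\beta$, $\delta$ (which Figs. 4-7 do not list) are illustrative guesses borrowed from the Fig. 3 parameter set.

```python
# Hypothetical sketch of the lambda-sensitivity experiment behind Figs. 4-5:
# forward-Euler integration of an XYZ-SIDR system reconstructed from the
# hypotheses. Initial state and alpha, beta, delta are illustrative guesses.
def simulate(lam, T=50.0, dt=0.01):
    L1, L2, eta, alpha, beta = 1.0, 1.0, 0.3, 0.5, 0.6
    theta, delta, phi = 0.4, 0.4, 0.15
    xi1, xi2, mu1, mu2 = 0.2, 0.5, 0.2, 0.2
    X, Y, Z, S, I, D, R = 5.0, 0.0, 0.0, 5.0, 1.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        contact = S * (Y + I)          # ignorant individuals exposed to rumors
        dX = L1 - lam * X * I - mu1 * X
        dY = lam * X * I - eta * Y - mu1 * Y
        dZ = eta * Y - mu1 * Z
        dS = L2 - alpha * S * I - beta * S * Y - (xi1 + xi2) * contact - mu2 * S
        dI = alpha * S * I + beta * S * Y - (theta + delta + mu2) * I
        dD = xi1 * contact + delta * I - (phi + mu2) * D
        dR = xi2 * contact + theta * I + phi * D - mu2 * R
        X += dt * dX; Y += dt * dY; Z += dt * dZ; S += dt * dS
        I += dt * dI; D += dt * dD; R += dt * dR
    return X

x_low, x_high = simulate(lam=0.1), simulate(lam=0.5)
print(x_low, x_high)  # density X(50) for the two lambda values
```

Under these assumptions, the run with the larger $\lambda$ ends with fewer unaffected media, consistent with the negative correlation reported for Fig. 4.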

|
| 392 |
+
|
| 393 |
+
Fig 4. Density of $X\left( t\right)$ under the parameter $\lambda$ .
|
| 394 |
+
|
| 395 |
+

|
| 396 |
+
|
| 397 |
+
Fig 5. Density of $Y\left( t\right)$ under the parameter $\lambda$ .
|
| 398 |
+
|
| 399 |
+
Fig. 6 and Fig. 7 describe the influence of the parameters $\alpha$ and $\beta$ on the density of $S\left( t\right)$, respectively. The parameter $\alpha$ represents the probability that an ignorant individual becomes a spreader by accessing the affected media, and $\beta$ represents the probability that an ignorant individual becomes a spreader by contacting spreaders. The figures show that the density of $S\left( t\right)$ decreases as $\alpha$ and $\beta$ increase. That is, as the propagation rates of the individual network layer and of the two-layer network interaction increase, the number of ignorant individuals gradually decreases, which accelerates the spread of rumors in the two-layer network.

|
| 402 |
+
|
| 403 |
+
Fig 6. Density of $S\left( t\right)$ under the parameter $\alpha$ .
|
| 404 |
+
|
| 405 |
+

|
| 406 |
+
|
| 407 |
+
Fig 7. Density of $S\left( t\right)$ under the parameter $\beta$ .
|
| 408 |
+
|
| 409 |
+
Fig. 8 and Fig. 9 depict, for ${\Lambda }_{1} = 1,{\Lambda }_{2} = 1,\eta = {0.3},\theta = {0.4},\phi = {0.15},{\xi }_{1} = {0.2},{\xi }_{2} = {0.5},{\mu }_{1} = {0.2},{\mu }_{2} = {0.2}$, the evolution of the density of $I\left( t\right)$ under different parameter values.
Fig. 8 and Fig. 9 describe the influence of the parameters $\alpha$ and $\beta$ on the density of $I\left( t\right)$, respectively. Given the meanings of $\alpha$ and $\beta$, their values are positively correlated with the density of $I\left( t\right)$. As the figures show, the density of $I\left( t\right)$ increases with $\alpha$ and $\beta$. That is, the growing number of spreaders enlarges the scale of propagation, which is not conducive to the control of rumors.
## V. CONCLUSION
At present, many scholars have separately studied the influence of media refutation or individual refutation on the spread of rumors. We believe that considering these two effects together is better than considering either alone. This paper integrates both media refutation and individual refutation into the analysis, introduces a novel ${XYZ} - {SIDR}$ two-tier rumor propagation model, and demonstrates the existence and stability of the equilibrium points within the model. The results show that this two-layer network model is more effective in controlling the spread of rumors.

|
| 418 |
+
|
| 419 |
+
Fig 8. Density of $I\left( t\right)$ under the parameter $\alpha$ .
|
| 420 |
+
|
| 421 |
+

|
| 422 |
+
|
| 423 |
+
Fig 9. Density of $I\left( t\right)$ under the parameter $\beta$ .
|
| 424 |
+
|
| 425 |
+
Theoretical analysis indicates that combining media and individual rumor refutation exerts a more significant and broader impact on rumor propagation. We therefore suggest strengthening the dissemination of refutation information through official media rather than relying solely on individuals to control the spread of rumors. These conclusions can help relevant departments formulate effective measures to control rumor propagation. Moreover, the model established in this paper can also be applied, by analogy, to the study of infectious disease models.
## REFERENCES
[1] W. Jinling, J. Haijun, H. Cheng, Y. Zhiyong and L. Jiarong, "Stability and Hopf bifurcation analysis of multi-lingual rumor spreading model with nonlinear inhibition mechanism," Chaos, Solitons & Fractals, vol. 153, pp. 111464, December 2021.

[2] L. Qiming, L. Tao and S. Meici, "The analysis of an SEIR rumor propagation model on heterogeneous network," Physica A: Statistical Mechanics and its Applications, vol. 469, pp. 372-380, March 2017.

[3] H. Yuhan, P. Qiuhui, H. Wenbing and H. Mingfeng, "Rumor spreading model considering the proportion of wisemen in the crowd," Physica A: Statistical Mechanics and its Applications, vol. 505, pp. 1084-1094, September 2018.

[4] W. Juan, L. Chao and X. Chengyi, "Improved centrality indicators to characterize the nodal spreading capability in complex networks," Applied Mathematics and Computation, vol. 334, pp. 388-400, October 2018.

[5] K. Eismann, "Diffusion and persistence of false rumors in social media networks: implications of searchability on rumor self-correction on Twitter," Journal of Business Economics, vol. 91, pp. 1299-1329, February 2021.

[6] W. Jinling, J. Haijun, M. Tianlong and H. Cheng, "Global dynamics of the multi-lingual SIR rumor spreading model with cross-transmitted mechanism," Chaos, Solitons & Fractals, vol. 126, pp. 148-157, September 2019.

[7] D. Xuefan, L. Yijun, W. Chao, L. Ying and T. Daisheng, "A double-identity rumor spreading model," Physica A: Statistical Mechanics and its Applications, vol. 528, pp. 121479, August 2019.

[8] K. Afassinou, "Analysis of the impact of education rate on the rumor spreading mechanism," Physica A: Statistical Mechanics and its Applications, vol. 414, pp. 43-52, November 2014.

[9] Z. Linhe, Y. Yang, G. Gui and Z. Zhengdi, "Modeling the dynamics of rumor diffusion over complex networks," Information Sciences, vol. 562, pp. 240-258, July 2021.

[10] Z. Linhe and W. Bingxu, "Stability analysis of a SAIR rumor spreading model with control strategies in online social networks," Information Sciences, vol. 526, pp. 1-19, July 2020.

[11] A. Abta, H. Laarabi, M. Rachik, H. T. Alaoui and S. Boutayeb, "Optimal control of a delayed rumor propagation model with saturated control functions and ${L}^{1}$ -type objectives," Social Network Analysis and Mining, vol. 10, August 2020.

[12] X. Jiuping, T. Weiyao, Z. Yi and W. Fengjuan, "A dynamic dissemination model for recurring online public opinion," Nonlinear Dynamics, vol. 99, pp. 1269-1293, November 2019.

[13] C. Yingying, H. Liangan and Z. Laijun, "Rumor spreading in complex networks under stochastic node activity," Physica A: Statistical Mechanics and its Applications, vol. 559, pp. 125061, December 2020.

[14] Z. Linhe, L. Wenshan and Z. Zhengdi, "Delay differential equations modeling of rumor propagation in both homogeneous and heterogeneous networks with a forced silence function," Applied Mathematics and Computation, vol. 370, pp. 124925, April 2020.

[15] Y. Fulian, Z. Xiaowei, S. Xueying, X. Xinyu, P. Yanyan and W. Jianhong, "Modeling and quantifying the influence of opinion involving opinion leaders on delayed information propagation dynamics," Applied Mathematics Letters, vol. 121, pp. 107356, November 2021.

[16] Z. Hongyong and Z. Linhe, "Dynamic Analysis of a Reaction-Diffusion Rumor Propagation Model," International Journal of Bifurcation and Chaos, vol. 26, pp. 1650101, 2016.

[17] Z. Linhe, W. Xuewei, A. Zhengdi and S. Shuling, "Global Stability and Bifurcation Analysis of a Rumor Propagation Model with Two Discrete Delays in Social Networks," International Journal of Bifurcation and Chaos, vol. 30, pp. 2050175, 2020.

[18] Y. Shuzhen, Y. Zhiyong, J. Haijun and Y. Shuai, "The dynamics and control of 2I2SR rumor spreading models in multilingual online social networks," Information Sciences, vol. 581, pp. 18-41, December 2021.

[19] M. Ghosh, S. Das and P. Das, "Dynamics and control of delayed rumor propagation through social networks," Journal of Applied Mathematics and Computing, vol. 68, pp. 1-30, November 2021.

[20] Z. Linhe and H. Le, "Pattern formation in a reaction-diffusion rumor propagation system with Allee effect and time delay," Nonlinear Dynamics, vol. 107, pp. 3041-3063, January 2022.

[21] C. Yingying, H. Liang'an and Z. Laijun, "Dynamical behaviors and control measures of rumor-spreading model in consideration of the infected media and time delay," Information Sciences, vol. 564, pp. 237-253, July 2021.

[22] L. Jiarong, J. Haijun, Y. Zhiyong and H. Cheng, "Dynamical analysis of rumor spreading model in homogeneous complex networks," Applied Mathematics and Computation, vol. 359, pp. 374-385, October 2019.

[23] J. Wenjun, L. Yi, Z. Xiaoqin, Z. Juping and J. Zhen, "A rumor spreading pairwise model on weighted networks," Physica A: Statistical Mechanics and its Applications, vol. 585, pp. 126451, January 2022.

[24] Y. Lan, L. Zhiwu and A. Giua, "Containment of rumor spread in complex social networks," Information Sciences, vol. 506, pp. 113-130, January 2020.

[25] H. Liangan, D. Fan and C. Yingying, "Dynamic analysis of an ${SI}_{b}{I}_{n}{I}_{u}$ rumor spreading model in complex social network," Physica A: Statistical Mechanics and its Applications, vol. 523, pp. 924-932, June 2019.

[26] P. van den Driessche and J. Watmough, "Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission," Mathematical Biosciences, vol. 180, pp. 29-48, December 2002.

[27] J. M. Heffernan, R. J. Smith and L. M. Wahl, "Perspectives on the basic reproductive ratio," Journal of the Royal Society Interface, vol. 2, pp. 281-293, 2005.
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/Cox7GQmwAI/Initial_manuscript_tex/Initial_manuscript.tex
§ DYNAMICAL ANALYSIS OF RUMOR PROPAGATION MODEL CONSIDERING MEDIA REFUTATION AND INDIVIDUAL REFUTATION*
${1}^{\text{st}}$ Wenqi Pan
College of Marine Electrical Engineering
Dalian Maritime University
Dalian, China
panwenqi07@163.com

${2}^{\text{nd}}$ Li-Ying Hao*
College of Marine Electrical Engineering
Dalian Maritime University
Dalian, China
haoliying_0305@163.com
Abstract-Refutation significantly affects the spread of rumors. Common methods of refuting rumors include media intervention and individual efforts. While many scholars have explored these factors separately, few studies have examined both simultaneously. We propose a novel two-tier network rumor propagation model that integrates the influence of both media refutation and individual refutation on the rumor propagation process, and we demonstrate the existence and stability of its equilibrium points. Theoretical analysis shows that authoritative media refutations exert a broader and more substantial influence on rumor dissemination than individual refutations.
Index Terms-rumor propagation, rumor refuting media, rumor refuters, stability
§ I. INTRODUCTION
A rumor is a statement fabricated without a factual basis, created with a particular purpose and spread by various means. With the exponential growth of technology and the widespread adoption of internet-based social networks, misinformation and harmful rumors can swiftly propagate across online platforms, threatening social cohesion and stability and disrupting people's daily lives and productive activities. Examples include the panic buying of salt triggered by the Fukushima Daiichi nuclear disaster [1] and the rumor that SHL-C could prevent COVID-19, which caused great harm to the public's psychological and physical health and seriously disturbed the normal order of society.
The propagation of rumors has attracted the attention of many scholars. Some compared the spread of rumors with the spread of infectious diseases in humans and applied infectious disease models to rumor propagation [2]-[5]. Considering the influence of different propagation mechanisms on rumor propagation, many scholars have studied the cross-propagation mechanism [6], [7] and the education mechanism [8]. Komi [8] established a rumor propagation model based on population education and a forgetting mechanism, and found that educated ignorant people are less likely to become disseminators and more likely to become suppressors than uneducated ones.
At the same time, many scholars have considered the influence of different functional mechanisms [9]-[11] in the research process. Zhu et al. [14] proposed a rumor propagation model for homogeneous and heterogeneous networks and comprehensively studied the influence of a forced silence function, time delay and network topology on rumor propagation in social networks. The influence of time delay on the propagation process has also been studied by many scholars [15]-[18]. Cheng et al. [21] established an improved ${XY} - {ISR}$ rumor propagation model based on an interactive system, comprehensively discussed the influence of different delays on rumor propagation, and further proposed control strategies such as deleting posts, popular science education and immunotherapy.
As network environments have become more complex, some scholars have considered the combined influence of various factors on rumor propagation over complex networks [22]-[24]. Considering the reaction of ignorant individuals on first hearing a rumor, Huo et al. [25] divided the individuals in the network into four groups: the ignorant, the trustworthy, the spreaders and the uninterested, and proposed an ${SI}_{b}{I}_{n}{I}_{u}$ rumor propagation model on complex networks. Theoretical analysis and simulation results show that the loss rate and suppression rate have a negative impact on the final scale of rumor spread.
In the existing literature, it is uncommon to consider the joint impact of media refutation and individual refutation in a two-tier network rumor propagation model. Based on realistic assumptions, we believe that the refutation effect when both are considered together is better than when either is considered alone. This paper therefore presents a dynamical analysis of rumor propagation that accounts for the refutation effects of both factors.
The rest of this paper is organized as follows. Section II describes a two-tier network rumor propagation model that considers both rumor refuting media and rumor refuter groups. Section III discusses the existence and stability conditions of the equilibrium points. Finally, the feasibility of the presented results is confirmed through numerical simulations.
This work was funded by the National Natural Science Foundation of China (51939001, 52171292), Dalian Outstanding Young Talents Program (2022RJ05).
§ II. TWO-TIER NETWORK RUMOR PROPAGATION MODEL
In the two-tier rumor propagation model constructed in this paper, the media network layer consists of $M$ media websites, and the personal friendship network layer consists of $N$ individuals.
In the media website network layer, media can be in one of three states: vulnerable media without rumor information (denoted by $X$), affected media carrying rumor information (denoted by $Y$), and rumor refuting media carrying refutation information (denoted by $Z$). When spreaders visit vulnerable media, they release or leave rumors on the media network, so that vulnerable media become affected media. When rumor refuters visit affected media, they release or leave refutation information on the media network, so that affected media become rumor refuting media.
In the personal network layer, individuals are categorized into four distinct groups: those who have never heard the rumors (denoted by $S$), those who actively spread rumors (denoted by $I$), those who do not believe the rumors but disseminate refutation information (denoted by $D$), and those who neither believe nor propagate any information (denoted by $R$). In the interaction between network nodes, after visiting vulnerable media, a spreader posts rumor information on the media website, so the vulnerable media is infected and evolves into affected media. When an ignorant individual visits affected media, he or she becomes a spreader with a certain probability under the influence of the rumor information. Thus, rumors spread not only between people but also between individuals and online media. The basic assumptions of this paper are as follows:
Hypothesis 1: In the media network layer, considering that media websites have certain registration and cancellation rates, the number of vulnerable media entering the communication system per unit time is ${\Lambda }_{1}$. Owing to benign competition among media, the three types of media websites $X$, $Y$ and $Z$ may move out of the communication system with probability ${\mu }_{1}$. When spreaders visit vulnerable media and publish their own views and comments, vulnerable media convert to affected media at rate $\lambda$. When rumor refuters visit affected media and publish refutation information on them, affected media change into rumor refuting media with probability $\eta$.
Hypothesis 2: In the personal interpersonal network layer, assume that individuals who are unaware of the rumors enter the communication system at rate ${\Lambda }_{2}$. Those who question the rumor but neither spread rumor information nor disseminate refutation transition to the immune state at rate ${\xi }_{2}$. Individuals who initially spread rumors but later find the information untrue may become rumor refuters with probability $\delta$. If spreaders lose interest in rumors and cease both propagation and refutation, they transition to the immune state with probability $\theta$. Rumor refuters who are affected by the environment or lose interest in refutation also become immune with probability $\phi$. Additionally, individuals may exit the rumor spreading network through migration at rate ${\mu }_{2}$.
Hypothesis 3: In the interaction of offline individuals, the ignorant will become the disseminator at a certain rate $\alpha$ after contacting the disseminator. If ignorant person believe and propagate rumors after visiting the affected media, they will become disseminators at a certain rate $\beta$ . It is assumed that after the unknown person contacts the rumor information (including contact with people and knowing the rumor information from the media), they realize that the rumor information is untrue due to them own experience or discrimination ability. If an individual who is initially unaware of the rumors chooses to disseminate rumor refutation information, they will transition to the status of a rumor refuter at a rate of ${\xi }_{1}$ .
Based on the above analysis, the rumor propagation process of the ${XYZ} - {SIDR}$ model established in this paper is shown in Fig. 1.
The meanings of the symbols in Fig. 1 are shown in Table I.
TABLE I

DESCRIPTION OF PARAMETERS IN THE MODEL

| $\mathbf{Parameter}$ | Description |
| --- | --- |
| ${\Lambda }_{1}$ | The number of susceptible media entering the communication system per unit time. |
| ${\Lambda }_{2}$ | The number of ignorant individuals entering the communication system per unit time. |
| $\lambda$ | The contact rate of susceptible media with spreaders. |
| $\eta$ | The probability of affected media becoming rumor-refuting media. |
| $\alpha$ | Rumor propagation rate of offline personal interaction. |
| $\beta$ | Rumor propagation rate under two-layer network interaction. |
| $\delta$ | The probability of propagating individuals becoming rumor-refuting individuals. |
| $\theta$ | The probability of propagating individuals becoming immune individuals. |
| ${\xi }_{1}$ | The rate of ignorant individuals becoming rumor-refuting individuals. |
| ${\xi }_{2}$ | The rate of ignorant individuals becoming immune individuals. |
| $\phi$ | The probability of rumor-refuting individuals becoming immune individuals. |
| ${\mu }_{1}$ | The rate at which media in the network move out of the propagation system. |
| ${\mu }_{2}$ | Migration rate of individuals in the personal friendship network layer. |
Based on the above analysis, we construct the ${XYZ} - {SIDR}$ rumor propagation model as follows:
$$
\left\{ \begin{array}{l} {X}^{\prime } = {\Lambda }_{1} - {\lambda XI} - {\mu }_{1}X, \\ {Y}^{\prime } = {\lambda XI} - {\eta Y} - {\mu }_{1}Y, \\ {Z}^{\prime } = {\eta Y} - {\mu }_{1}Z, \\ {S}^{\prime } = {\Lambda }_{2} - {\alpha SY} - {\beta SI} - \left( {{\xi }_{1} + {\xi }_{2}}\right) \left( {I + Y}\right) S - {\mu }_{2}S, \\ {I}^{\prime } = {\alpha SY} + {\beta SI} - \left( {\theta + \delta }\right) I - {\mu }_{2}I, \\ {D}^{\prime } = {\xi }_{1}S\left( {I + Y}\right) + {\delta I} - {\phi D} - {\mu }_{2}D, \\ {R}^{\prime } = {\xi }_{2}S\left( {I + Y}\right) + {\theta I} + {\phi D} - {\mu }_{2}R, \end{array}\right. \tag{1}
$$
Fig 1. Schematic representation of the ${XYZ} - {SIDR}$ rumor spreading model
Since the model represents the process of rumor propagation, all parameters involved are non-negative, and the initial conditions satisfy:
$$
X\left( 0\right) = {X}_{0} \geq 0,Y\left( 0\right) = {Y}_{0} \geq 0,Z\left( 0\right) = {Z}_{0} \geq 0,
$$

$$
S\left( 0\right) = {S}_{0} \geq 0,I\left( 0\right) = {I}_{0} \geq 0,D\left( 0\right) = {D}_{0} \geq 0, \tag{2}
$$

$$
R\left( 0\right) = {R}_{0} \geq 0.
$$
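As a quick consistency check on the formulation, the right-hand side of system (1) can be sketched in a few lines of Python; the parameter values below are illustrative placeholders, not fitted quantities, and the last component follows the corresponding equilibrium equation in (9). At the rumor-free state $\left( \frac{\Lambda_1}{\mu_1},0,0,\frac{\Lambda_2}{\mu_2},0,0,0 \right)$ every derivative should vanish.

```python
# Right-hand side of the XYZ-SIDR system (1).
# Parameter values are illustrative placeholders only.
p = dict(L1=1.0, L2=1.0, lam=0.01, eta=0.3, alpha=0.01, beta=0.01,
         theta=0.2, delta=0.2, phi=0.15, xi1=0.1, xi2=0.1, mu1=0.2, mu2=0.2)

def rhs(state, p):
    X, Y, Z, S, I, D, R = state
    dX = p["L1"] - p["lam"]*X*I - p["mu1"]*X
    dY = p["lam"]*X*I - p["eta"]*Y - p["mu1"]*Y
    dZ = p["eta"]*Y - p["mu1"]*Z
    dS = (p["L2"] - p["alpha"]*S*Y - p["beta"]*S*I
          - (p["xi1"] + p["xi2"])*(I + Y)*S - p["mu2"]*S)
    dI = p["alpha"]*S*Y + p["beta"]*S*I - (p["theta"] + p["delta"])*I - p["mu2"]*I
    dD = p["xi1"]*S*(I + Y) + p["delta"]*I - p["phi"]*D - p["mu2"]*D
    dR = p["xi2"]*S*(I + Y) + p["theta"]*I + p["phi"]*D - p["mu2"]*R
    return (dX, dY, dZ, dS, dI, dD, dR)

# At the rumor-free equilibrium E0 all derivatives are zero.
E0 = (p["L1"]/p["mu1"], 0.0, 0.0, p["L2"]/p["mu2"], 0.0, 0.0, 0.0)
print(rhs(E0, p))  # every component should be 0
```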
§ III. MODEL ANALYSIS AND CALCULATION
§ A. THE BASIC REPRODUCTION NUMBER ${R}_{0}$
For system (1), the basic reproduction number ${R}_{0}$ is calculated as follows:
Let $\mathcal{X} = {\left( I,Y,R,D,S,X,Z\right) }^{T}$ , equation (1) can be written as $\frac{d\mathcal{X}}{dt} = \mathcal{F}\left( \mathcal{X}\right) - \mathcal{V}\left( \mathcal{X}\right)$ .
$$
\mathcal{F}\left( \mathcal{X}\right) = \left( \begin{matrix} {\alpha SY} + {\beta SI} \\ {\lambda XI} \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{matrix}\right) , \tag{3}
$$

$$
\mathcal{V}\left( \mathcal{X}\right) = \left( \begin{matrix} {\theta I} + {\delta I} + {\mu }_{2}I \\ {\eta Y} + {\mu }_{1}Y \\ - {\xi }_{2}{SI} - {\xi }_{2}{SY} - {\theta I} - {\phi D} + {\mu }_{2}R \\ - {\xi }_{1}{SI} - {\xi }_{1}{SY} - {\delta I} + {\phi D} + {\mu }_{2}D \\ {H}_{1} \\ - {\Lambda }_{1} + {\lambda XI} + {\mu }_{1}X \\ - {\eta Y} + {\mu }_{1}Z \end{matrix}\right) \tag{4}
$$
where ${H}_{1} = - {\Lambda }_{2} + {\alpha SY} + {\beta SI} + {\xi }_{1}{SI} + {\xi }_{1}{SY} + {\xi }_{2}{SI} +$ ${\xi }_{2}{SY} + {\mu }_{2}S$ .
Therefore
$$
F = \left( \begin{matrix} \beta \frac{{\Lambda }_{2}}{{\mu }_{2}} & \alpha \frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 \\ \lambda \frac{{\Lambda }_{1}}{{\mu }_{1}} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{matrix}\right) , \tag{5}
$$

$$
V = \left( \begin{matrix} \theta + \delta + {\mu }_{2} & 0 & 0 & 0 \\ 0 & \eta + {\mu }_{1} & 0 & 0 \\ - {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} - \theta & - {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} & {\mu }_{2} & - \phi \\ - {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} - \delta & - {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & \phi + {\mu }_{2} \end{matrix}\right) \tag{6}
$$
By calculation, we obtain
$$
F{V}^{-1} = \left( \begin{matrix} \frac{\beta {\Lambda }_{2}}{{\mu }_{2}\left( {\theta + \delta + {\mu }_{2}}\right) } & \frac{\alpha {\Lambda }_{2}}{{\mu }_{2}\left( {\eta + {\mu }_{1}}\right) } & 0 & 0 \\ \frac{\lambda {\Lambda }_{1}}{{\mu }_{1}\left( {\theta + \delta + {\mu }_{2}}\right) } & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{matrix}\right) \tag{7}
$$
Hence, according to reference [27], the basic reproduction number of system (1) is the spectral radius of matrix $F{V}^{-1}$ as follows:
$$
{R}_{0} = \frac{\beta {\Lambda }_{2}}{{\mu }_{2}\left( {\theta + \delta + {\mu }_{2}}\right) } \tag{8}
$$
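For a quick numerical check, formula (8) can be evaluated directly. This is a sketch for illustration; the two parameter sets below are the ones used in the simulations of Section IV.

```python
def basic_reproduction_number(beta, Lambda2, mu2, theta, delta):
    # R0 = beta * Lambda2 / (mu2 * (theta + delta + mu2)), formula (8)
    return beta * Lambda2 / (mu2 * (theta + delta + mu2))

# First parameter set of Section IV: R0 below 1 (rumor dies out)
print(basic_reproduction_number(0.01, 1.0, 0.2, 0.2, 0.2))  # ~0.0833
# Second parameter set of Section IV: R0 above 1 (rumor persists)
print(basic_reproduction_number(0.6, 1.0, 0.2, 0.4, 0.4))   # ~3
```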
§ B. EXISTENCE OF EQUILIBRIUM
According to the system dynamics equation (1), we can calculate the equilibrium $E = \left( {X,Y,Z,S,I,D,R}\right)$ . It is easy to see that the equilibrium points of system (1) are the rumor-free equilibrium ${E}_{0} = \left( {\frac{{\Lambda }_{1}}{{\mu }_{1}},0,0,\frac{{\Lambda }_{2}}{{\mu }_{2}},0,0,0}\right)$ and the rumor-prevailing equilibrium ${E}^{ * } = \left( {{X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{I}^{ * },{D}^{ * },{R}^{ * }}\right)$ ; the rumor-free equilibrium point ${E}_{0}$ always exists.
Theorem 1 The equilibrium point ${E}^{ * }\; =$ $\left( {{X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{I}^{ * },{D}^{ * },{R}^{ * }}\right)$ exists if ${R}_{0} > 1$ and $\left( {\theta + \delta + {\mu }_{2}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) > {\beta \lambda }{\Lambda }_{2}.$
Proof An equilibrium point of system (1) satisfies:
$$
\left\{ \begin{array}{l} {\Lambda }_{1} - {\lambda XI} - {\mu }_{1}X = 0, \\ {\lambda XI} - {\eta Y} - {\mu }_{1}Y = 0, \\ {\eta Y} - {\mu }_{1}Z = 0, \\ {\Lambda }_{2} - {\alpha SY} - {\beta SI} - \left( {{\xi }_{1} + {\xi }_{2}}\right) \left( {I + Y}\right) S - {\mu }_{2}S = 0, \\ {\alpha SY} + {\beta SI} - \left( {\theta + \delta }\right) I - {\mu }_{2}I = 0, \\ {\xi }_{1}S\left( {I + Y}\right) + {\delta I} - {\phi D} - {\mu }_{2}D = 0, \\ {\xi }_{2}S\left( {I + Y}\right) + {\theta I} + {\phi D} - {\mu }_{2}R = 0. \end{array}\right. \tag{9}
$$
According to formula (9), ${X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{D}^{ * },{R}^{ * }$ are each expressed in terms of ${I}^{ * }$ and substituted into the fifth equation, which yields
$$
a{I}^{2} + {bI} + c = 0 \tag{10}
$$
where
$$
a = \lambda \left( {\beta + {\xi }_{1}}\right) \left( {\eta + {\mu }_{1}}\right) \left( {\theta + \delta + {\mu }_{2}}\right) ,
$$

$$
b = \left( {\theta + \delta + {\mu }_{2}}\right) \left\lbrack {\left( {\eta + {\mu }_{1}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) }\right\rbrack + \lambda {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) \left( {\theta + \delta + {\mu }_{2}}\right) - {\beta \lambda }{\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) ,
$$

$$
c = \left( {\eta + {\mu }_{1}}\right) \left\lbrack {{\mu }_{2}\lambda \left( {\theta + \delta + {\mu }_{2}}\right) - {\mu }_{1}\beta {\Lambda }_{2}}\right\rbrack - {\alpha \lambda }{\Lambda }_{1}{\Lambda }_{2}. \tag{11}
$$
It can be obtained by calculation that
$$
\Delta = {b}^{2} - {4ac}
$$

$$
= {\left\lbrack \lambda {\Lambda }_{1}\left( \alpha + {\xi }_{2}\right) + \left( \eta + {\mu }_{1}\right) \left( {\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda \right) \right\rbrack }^{2}{\left( \theta + \delta + {\mu }_{2}\right) }^{2} + {\left\lbrack \beta \lambda {\Lambda }_{2}\left( \eta + {\mu }_{1}\right) \right\rbrack }^{2}
$$

$$
+ {4\alpha }{\lambda }^{2}{\Lambda }_{1}{\Lambda }_{2}\left( {\beta + {\xi }_{1}}\right) \left( {\eta + {\mu }_{1}}\right) \left( {\theta + \delta + {\mu }_{2}}\right)
$$

$$
- 2{\beta \lambda }{\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) \left( {\theta + \delta + {\mu }_{2}}\right) \left\lbrack {\lambda {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) + \left( {\eta + {\mu }_{1}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) }\right\rbrack
$$

$$
- {4\lambda }\left( {\beta + {\xi }_{1}}\right) {\left( \eta + {\mu }_{1}\right) }^{2}\left( {\theta + \delta + {\mu }_{2}}\right) \left\lbrack {{\mu }_{2}\lambda \left( {\theta + \delta + {\mu }_{2}}\right) - \beta {\mu }_{1}{\Lambda }_{2}}\right\rbrack \tag{12}
$$
According to the discriminant calculation, when ${R}_{0} > 1$ and $\left( {\theta + \delta + {\mu }_{2}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) > {\beta \lambda }{\Lambda }_{2}$ , equation (10) has a unique positive root (the negative root is discarded):
$$
{I}^{ * } = \frac{{\beta \lambda }{\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) - {H}_{2}\left( {\theta + \delta + {\mu }_{2}}\right) + \sqrt{\Delta }}{{2\lambda }\left( {\beta + {\xi }_{1}}\right) \left( {\eta + {\mu }_{1}}\right) \left( {\theta + \delta + {\mu }_{2}}\right) } \tag{13}
$$
where ${H}_{2} = \left\lbrack {\lambda {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) + \left( {\eta + {\mu }_{1}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) }\right\rbrack$ .
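The coefficients (11) and the closed form (13) can be cross-checked numerically: the positive root returned by the quadratic formula must solve (10). This is an illustrative sketch using the second parameter set from Section IV (so that ${R}_{0} > 1$); it is not part of the original derivation.

```python
import math

# Second parameter set of Section IV (illustrative values)
L1, L2 = 1.0, 1.0
lam, eta, alpha, beta = 0.2, 0.3, 0.5, 0.6
theta, delta, phi = 0.4, 0.4, 0.15
xi1, xi2, mu1, mu2 = 0.2, 0.2, 0.2, 0.2

K = theta + delta + mu2            # recurring factor (theta + delta + mu2)
a = lam * (beta + xi1) * (eta + mu1) * K
b = (K * (eta + mu1) * (mu1*beta + mu1*xi1 + mu2*lam)
     + lam * L1 * (alpha + xi2) * K
     - beta * lam * L2 * (eta + mu1))
c = (eta + mu1) * (mu2*lam*K - mu1*beta*L2) - alpha*lam*L1*L2

# Positive root of a*I^2 + b*I + c = 0, matching the form of (13)
I_star = (-b + math.sqrt(b*b - 4*a*c)) / (2*a)
print(I_star)  # a positive density for these values
```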
Therefore ${E}^{ * } = \left( {{X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{I}^{ * },{D}^{ * },{R}^{ * }}\right)$ , where
$$
{X}^{ * } = \frac{{\Lambda }_{1}}{\lambda {I}^{ * } + {\mu }_{1}}, \tag{14}
$$

$$
{Y}^{ * } = \frac{\lambda {\Lambda }_{1}{I}^{ * }}{\left( {\eta + {\mu }_{1}}\right) \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) }, \tag{15}
$$

$$
{Z}^{ * } = \frac{{\lambda \eta }{\Lambda }_{1}{I}^{ * }}{{\mu }_{1}\left( {\eta + {\mu }_{1}}\right) \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) }, \tag{16}
$$

$$
{S}^{ * } = \frac{{\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) }{T}, \tag{17}
$$

$$
{D}^{ * } = \frac{\lambda {\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) {I}^{*2} + \left\lbrack {\lambda {\Lambda }_{1}{\Lambda }_{2} + {\mu }_{1}\left( {\eta + {\mu }_{1}}\right) }\right\rbrack {I}^{ * }}{\left( {\phi + {\mu }_{2}}\right) T}, \tag{18}
$$

$$
{R}^{ * } = \frac{{\xi }_{2}{\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) {H}_{3} + \theta {H}_{4}}{\left( {{\mu }_{2} - \phi }\right) {H}_{4}} \tag{19}
$$
where ${H}_{3} = \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) \left\lbrack {\lambda {\Lambda }_{1} + \left( {\eta + {\mu }_{1}}\right) \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) }\right\rbrack$ and ${H}_{4} = \lambda \left( {\beta + {\xi }_{1}}\right) \left( {\eta + {\mu }_{1}}\right) {I}^{*2} + \left\lbrack {\lambda {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) + \left( {\eta + {\mu }_{1}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) }\right\rbrack {I}^{ * } + {\mu }_{2}\lambda \left( {\eta + {\mu }_{1}}\right)$ .
§ C. STABILITY OF EQUILIBRIUM
Theorem 2 The rumor-free equilibrium point ${E}_{0} = \left( {\frac{{\Lambda }_{1}}{{\mu }_{1}},0,0,\frac{{\Lambda }_{2}}{{\mu }_{2}},0,0,0}\right)$ is locally asymptotically stable if ${R}_{0} < 1$ , and unstable if ${R}_{0} > 1$ .
Proof The Jacobian matrix of system (1) at ${E}_{0} = \left( {\frac{{\Lambda }_{1}}{{\mu }_{1}},0,0,\frac{{\Lambda }_{2}}{{\mu }_{2}},0,0,0}\right)$ is
$J\left( {E}_{0}\right) =$
$$
\left( \begin{matrix} - {\mu }_{1} & 0 & 0 & 0 & - \lambda \frac{{\Lambda }_{1}}{{\mu }_{1}} & 0 & 0 \\ 0 & - \eta - {\mu }_{1} & 0 & 0 & \lambda \frac{{\Lambda }_{1}}{{\mu }_{1}} & 0 & 0 \\ 0 & \eta & - {\mu }_{1} & 0 & 0 & 0 & 0 \\ 0 & {H}_{5} & 0 & - {\mu }_{2} & {H}_{6} & 0 & 0 \\ 0 & \alpha \frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {H}_{7} & 0 & 0 \\ 0 & {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} + \theta & - {\mu }_{2} & - {\mu }_{2} \\ 0 & {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} + \delta & 0 & {H}_{8} \end{matrix}\right)
$$
where ${H}_{5} = - \left( {\alpha + {\xi }_{1} + {\xi }_{2}}\right) \frac{{\Lambda }_{2}}{{\mu }_{2}},{H}_{6} = - \left( {\beta + {\xi }_{1} + {\xi }_{2}}\right) \frac{{\Lambda }_{2}}{{\mu }_{2}},{H}_{7} = \beta \frac{{\Lambda }_{2}}{{\mu }_{2}} - \left( {\theta + \delta + {\mu }_{2}}\right) ,{H}_{8} = - \left( {\phi + {\mu }_{2}}\right) .$
The characteristic equation of matrix $J\left( {E}_{0}\right)$ is
$\left| {J\left( {E}_{0}\right) - {hE}}\right| =$
$$
\left| \begin{matrix} {T}_{1} & 0 & 0 & 0 & - \lambda \frac{{\Lambda }_{1}}{{\mu }_{1}} & 0 & 0 \\ 0 & {T}_{2} & 0 & 0 & \lambda \frac{{\Lambda }_{1}}{{\mu }_{1}} & 0 & 0 \\ 0 & \eta & {T}_{1} & 0 & 0 & 0 & 0 \\ 0 & {T}_{3} & 0 & {T}_{5} & {T}_{6} & 0 & 0 \\ 0 & \alpha \frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {T}_{7} & 0 & 0 \\ 0 & {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {\xi }_{2}\frac{{\Lambda }_{2}}{{\mu }_{2}} + \theta & - {\mu }_{2} - h & - {\mu }_{2} \\ 0 & {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} & 0 & 0 & {T}_{9} & 0 & - \left( {\phi + {\mu }_{2}}\right) - h \end{matrix}\right|
$$
$$
= {\left( {\mu }_{1} + h\right) }^{2}{\left( {\mu }_{2} + h\right) }^{2}\left( {\phi + {\mu }_{2} + h}\right) \left( {\eta + {\mu }_{1} + h}\right) \left\lbrack {\beta \frac{{\Lambda }_{2}}{{\mu }_{2}} - \left( {\theta + \delta + {\mu }_{2}}\right) - h}\right\rbrack = 0
$$
where ${T}_{1} = - {\mu }_{1} - h,{T}_{2} = - \eta - {\mu }_{1} - h,{T}_{3} = - \left( {\alpha + {\xi }_{1} + {\xi }_{2}}\right) \frac{{\Lambda }_{2}}{{\mu }_{2}},{T}_{5} = - {\mu }_{2} - h,{T}_{6} = - \left( {\beta + {\xi }_{1} + {\xi }_{2}}\right) \frac{{\Lambda }_{2}}{{\mu }_{2}},{T}_{7} = \beta \frac{{\Lambda }_{2}}{{\mu }_{2}} - \left( {\theta + \delta + {\mu }_{2}}\right) - h,{T}_{9} = {\xi }_{1}\frac{{\Lambda }_{2}}{{\mu }_{2}} + \delta .$
Therefore, the characteristic roots of the characteristic equation of $J\left( {E}_{0}\right)$ are:
$$
{h}_{01} = - {\mu }_{1} < 0,{h}_{02} = - {\mu }_{2} < 0,{h}_{03} = - \left( {\phi + {\mu }_{2}}\right) < 0,
$$

$$
{h}_{04} = - \left( {\eta + {\mu }_{1}}\right) < 0,{h}_{05} = \left( {\theta + \delta + {\mu }_{2}}\right) \left( {{R}_{0} - 1}\right) < 0. \tag{20}
$$
According to the Routh-Hurwitz stability criterion, the equilibrium point ${E}_{0} = \left( {\frac{{\Lambda }_{1}}{{\mu }_{1}},0,0,\frac{{\Lambda }_{2}}{{\mu }_{2}},0,0,0}\right)$ is locally asymptotically stable if ${R}_{0} < 1$ , and unstable if ${R}_{0} > 1$ .
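The sign pattern of the characteristic roots (20) can be verified numerically for any parameter set with ${R}_{0} < 1$; the values below are the first simulation set of Section IV, used here purely as an illustration.

```python
# First parameter set of Section IV (R0 < 1), illustrative values
L2 = 1.0
eta, phi = 0.3, 0.15
theta, delta = 0.2, 0.2
beta = 0.01
mu1, mu2 = 0.2, 0.2

h = [
    -mu1,                                  # h01
    -mu2,                                  # h02
    -(phi + mu2),                          # h03
    -(eta + mu1),                          # h04
    beta*L2/mu2 - (theta + delta + mu2),   # h05 = (theta+delta+mu2)(R0 - 1)
]
print(all(x < 0 for x in h))  # True: E0 is locally asymptotically stable
```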
Theorem 3 The equilibrium point ${E}^{ * }\; =$ $\left( {{X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{I}^{ * },{D}^{ * },{R}^{ * }}\right)$ is locally asymptotically stable if ${R}_{0} > 1$ and $\beta {\Lambda }_{2} < {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) \left( {\theta + \delta + {\mu }_{2}}\right)$ , otherwise, the equilibrium point ${E}^{ * }$ is unstable.
Proof The Jacobian matrix at ${E}^{ * } =$ $\left( {{X}^{ * },{Y}^{ * },{Z}^{ * },{S}^{ * },{I}^{ * },{D}^{ * },{R}^{ * }}\right)$ is
$J\left( {E}^{ * }\right) =$
$$
\left( \begin{matrix} {A}_{1} & 0 & 0 & 0 & - \lambda {X}^{ * } & 0 & 0 \\ \lambda {I}^{ * } & {A}_{2} & 0 & 0 & \lambda {X}^{ * } & 0 & 0 \\ 0 & \eta & - {\mu }_{1} & 0 & 0 & 0 & 0 \\ 0 & {A}_{3} & 0 & {A}_{4} & {A}_{8} & 0 & 0 \\ 0 & \alpha {S}^{ * } & 0 & {A}_{5} & {A}_{9} & 0 & 0 \\ 0 & {\xi }_{2}{S}^{ * } & 0 & {A}_{6} & {\xi }_{2}{S}^{ * } + \theta & - {\mu }_{2} & - {\mu }_{2} \\ 0 & {\xi }_{1}{S}^{ * } & 0 & {A}_{7} & {\xi }_{1}{S}^{ * } + \delta & 0 & {A}_{10} \end{matrix}\right)
$$
where ${A}_{1} = - \lambda {I}^{ * } - {\mu }_{1},{A}_{2} = - \eta - {\mu }_{1},{A}_{3} = - \left( {\alpha + {\xi }_{1} + {\xi }_{2}}\right) {S}^{ * },{A}_{4} = - \alpha {Y}^{ * } - \beta {I}^{ * },{A}_{5} = \alpha {Y}^{ * } + \beta {I}^{ * },{A}_{6} = {\xi }_{2}\left( {{I}^{ * } + {Y}^{ * }}\right) ,{A}_{7} = {\xi }_{1}\left( {{I}^{ * } + {Y}^{ * }}\right) ,{A}_{8} = - \left( {\beta + {\xi }_{1} + {\xi }_{2}}\right) {S}^{ * },{A}_{9} = \beta {S}^{ * } - \left( {\theta + \delta + {\mu }_{2}}\right) ,{A}_{10} = - \left( {\phi + {\mu }_{2}}\right)$ .
The characteristic equation of matrix $J\left( {E}^{ * }\right)$ is
$\left| {J\left( {E}^{ * }\right) - {hE}}\right| =$
$$
\left| \begin{matrix} {B}_{1} & 0 & 0 & 0 & - \lambda {X}^{ * } & 0 & 0 \\ \lambda {I}^{ * } & {B}_{2} & 0 & 0 & \lambda {X}^{ * } & 0 & 0 \\ 0 & \eta & - {\mu }_{1} - h & 0 & 0 & 0 & 0 \\ 0 & {B}_{3} & 0 & {B}_{3} & {B}_{7} & 0 & 0 \\ 0 & \alpha {S}^{ * } & 0 & {B}_{4} & {B}_{8} & 0 & 0 \\ 0 & {\xi }_{2}{S}^{ * } & 0 & {B}_{5} & {\xi }_{2}{S}^{ * } + \theta & {B}_{9} & - {\mu }_{2} \\ 0 & {\xi }_{1}{S}^{ * } & 0 & {B}_{6} & {\xi }_{1}{S}^{ * } + \delta & 0 & {B}_{10} \end{matrix}\right|
$$
where ${B}_{1} = - \lambda {I}^{ * } - {\mu }_{1} - h,{B}_{2} = - \eta - {\mu }_{1} - h,{B}_{3} = - \alpha {Y}^{ * } - \beta {I}^{ * } - h,{B}_{4} = \alpha {Y}^{ * } + \beta {I}^{ * },{B}_{5} = {\xi }_{2}\left( {{I}^{ * } + {Y}^{ * }}\right) ,{B}_{6} = {\xi }_{1}\left( {{I}^{ * } + {Y}^{ * }}\right) ,{B}_{7} = - \left( {\beta + {\xi }_{1} + {\xi }_{2}}\right) {S}^{ * }$ , ${B}_{8} = \beta {S}^{ * } - \left( {\theta + \delta + {\mu }_{2}}\right) - h,{B}_{9} = - {\mu }_{2} - h,{B}_{10} = - \left( {\phi + {\mu }_{2}}\right) - h$ .
Thus, we can obtain
$$
\left| {J\left( {E}^{ * }\right) - {hE}}\right| = \left( {{\mu }_{1} + h}\right) \left( {{\mu }_{2} + h}\right) \left( {\phi + {\mu }_{2} + h}\right) \left( {\eta + {\mu }_{1} + h}\right) \left( {\lambda {I}^{ * } + {\mu }_{1} + h}\right) G.
$$
where $G = - \left\lbrack {\alpha {Y}^{ * } + \beta {I}^{ * } + \left( {{\xi }_{1} + {\xi }_{2}}\right) \left( {{I}^{ * } + {Y}^{ * }}\right) + {\mu }_{2}}\right\rbrack - h$ .
Therefore, the characteristic roots of the characteristic equation of $J\left( {E}^{ * }\right)$ are:
$$
{h}_{01} = - {\mu }_{1} < 0,{h}_{02} = - {\mu }_{2} < 0, \tag{21}
$$

$$
{h}_{03} = - \left( {\phi + {\mu }_{2}}\right) < 0,{h}_{04} = - \left( {\eta + {\mu }_{1}}\right) < 0, \tag{22}
$$

$$
{h}_{05} = - \left\lbrack {\alpha {Y}^{ * } + \beta {I}^{ * } + \left( {{\xi }_{1} + {\xi }_{2}}\right) \left( {{I}^{ * } + {Y}^{ * }}\right) + {\mu }_{2}}\right\rbrack < 0, \tag{23}
$$

$$
{h}_{06} = \beta {S}^{ * } - \left( {\theta + \delta + {\mu }_{2}}\right) . \tag{24}
$$
Then, substituting ${S}^{ * }$ into ${h}_{06}$ gives
${h}_{06} = \frac{\beta {\Lambda }_{2}\left( {\eta + {\mu }_{1}}\right) \left( {\lambda {I}^{ * } + {\mu }_{1}}\right) }{\lambda \left( {\beta + {\xi }_{1}}\right) \left( {\eta + {\mu }_{1}}\right) {I}^{*2} + {C}_{1} + {\mu }_{2}\lambda \left( {\eta + {\mu }_{1}}\right) } - \left( {\theta + \delta + {\mu }_{2}}\right) ,$
where ${C}_{1} = \left\lbrack {\lambda {\Lambda }_{1}\left( {\alpha + {\xi }_{2}}\right) + \left( {\eta + {\mu }_{1}}\right) \left( {{\mu }_{1}\beta + {\mu }_{1}{\xi }_{1} + {\mu }_{2}\lambda }\right) }\right\rbrack {I}^{ * }$ . Under the condition of Theorem 3, ${h}_{06} < 0$ , so all characteristic roots are negative and the equilibrium point ${E}^{ * }$ is locally asymptotically stable.
§ IV. NUMERICAL SIMULATION
In this section, we assign reasonable values to the parameters in system (1) and verify the results of our theoretical analysis through numerical simulations. The parameter values are chosen partly by analogy with similar real-world cases and partly by reference to the relevant literature.
Let ${\Lambda }_{1} = 1,{\Lambda }_{2} = 1,\lambda = {0.01},\eta = {0.3},\alpha = {0.01},\beta = {0.01},\theta = {0.2},\delta = {0.2},\phi = {0.15},{\xi }_{1} = {0.1},{\xi }_{2} = {0.1},{\mu }_{1} = {0.2},{\mu }_{2} = {0.2}$ . We calculate ${R}_{0} = {0.0833} < 1$ , so the rumor-free equilibrium point ${E}_{0}$ is stable.
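A forward-Euler integration of system (1) with these values illustrates the behaviour reported below: the spreader density $I(t)$ decays to zero. This is a rough sketch; the step size and initial state are illustrative choices, not taken from the paper.

```python
# Forward-Euler simulation of system (1), first parameter set (R0 < 1).
# Step size and initial state are illustrative choices.
L1, L2 = 1.0, 1.0
lam, eta, alpha, beta = 0.01, 0.3, 0.01, 0.01
theta, delta, phi = 0.2, 0.2, 0.15
xi1, xi2, mu1, mu2 = 0.1, 0.1, 0.2, 0.2

X, Y, Z, S, I, D, R = 5.0, 0.5, 0.0, 5.0, 1.0, 0.0, 0.0
dt = 0.01
for _ in range(20000):                      # integrate to t = 200
    dX = L1 - lam*X*I - mu1*X
    dY = lam*X*I - (eta + mu1)*Y
    dZ = eta*Y - mu1*Z
    dS = L2 - alpha*S*Y - beta*S*I - (xi1 + xi2)*(I + Y)*S - mu2*S
    dI = alpha*S*Y + beta*S*I - (theta + delta + mu2)*I
    dD = xi1*S*(I + Y) + delta*I - (phi + mu2)*D
    dR = xi2*S*(I + Y) + theta*I + phi*D - mu2*R
    X += dt*dX; Y += dt*dY; Z += dt*dZ; S += dt*dS
    I += dt*dI; D += dt*dD; R += dt*dR

print(I)  # essentially zero: the rumor dies out
```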
Fig 2. Stability of equilibrium point ${E}_{0}$ .
Fig. 2 shows how the density of each subclass in the model changes with time when ${R}_{0} = {0.0833} < 1$ . At first, the number of unaffected media and ignorant individuals gradually decreases at a similar rate and finally stabilizes. Due to the limited inflow and the large outflow, the number of affected media and spreaders gradually decreases at a similar rate and finally becomes 0. The number of rumor-refuting media and rumor refuters first increases with the growth of the affected media and spreaders, then gradually decreases over time and finally becomes 0. The number of immune individuals increases with the growth of spreaders and rumor refuters; the growth rate gradually slows down and finally stabilizes. That is, the rumor disappears and the system reaches the stable rumor-free equilibrium point.
Let ${\Lambda }_{1} = 1,{\Lambda }_{2} = 1,\lambda = {0.2},\eta = {0.3},\alpha = {0.5},\beta =$ ${0.6},\theta = {0.4},\delta = {0.4},\phi = {0.15},{\xi }_{1} = {0.2},{\xi }_{2} = {0.2},{\mu }_{1} =$ ${0.2},{\mu }_{2} = {0.2}$ and calculate ${R}_{0} = 3 > 1$ , the equilibrium point ${E}^{ * }$ is stable, as shown in Fig. 3.
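Repeating the same forward-Euler sketch with this second parameter set (again with illustrative step size and initial state) shows $I(t)$ settling at a positive level instead of dying out, consistent with Fig. 3.

```python
# Forward-Euler simulation of system (1), second parameter set (R0 = 3).
# Step size and initial state are illustrative choices.
L1, L2 = 1.0, 1.0
lam, eta, alpha, beta = 0.2, 0.3, 0.5, 0.6
theta, delta, phi = 0.4, 0.4, 0.15
xi1, xi2, mu1, mu2 = 0.2, 0.2, 0.2, 0.2

X, Y, Z, S, I, D, R = 5.0, 0.5, 0.0, 5.0, 1.0, 0.0, 0.0
dt = 0.01
for _ in range(20000):                      # integrate to t = 200
    dX = L1 - lam*X*I - mu1*X
    dY = lam*X*I - (eta + mu1)*Y
    dZ = eta*Y - mu1*Z
    dS = L2 - alpha*S*Y - beta*S*I - (xi1 + xi2)*(I + Y)*S - mu2*S
    dI = alpha*S*Y + beta*S*I - (theta + delta + mu2)*I
    dD = xi1*S*(I + Y) + delta*I - (phi + mu2)*D
    dR = xi2*S*(I + Y) + theta*I + phi*D - mu2*R
    X += dt*dX; Y += dt*dY; Z += dt*dZ; S += dt*dS
    I += dt*dI; D += dt*dD; R += dt*dR

print(I)  # settles at a positive spreader density
```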
Fig 3. Stability of equilibrium point ${E}^{ * }$ .
In Fig. 3, considering the media network layer, due to the small number of new media entering the communication system and the transformation of some unaffected media into affected media, the number of unaffected media gradually decreases and tends to stabilize after a period of time. Originally, the number of affected media increased due to the transformation of some unaffected media into affected media. Over time, most of the affected media were transformed into rumor refuting media, so the number of affected media decreased and stabilized. As the affected media changed into rumor refuting media, the number of rumor refuting media increased and gradually stabilized.
Fig. 3 illustrates that, within the individual interpersonal network layer, the number of ignorant individuals begins to decline. Initially, the low influx of new individuals and a fixed rate of departures contribute to this decrease. Additionally, some ignorant individuals transition to become communicators, while others become immune or rumor refuters. Consequently, the number of communicators increases as ignorant individuals transform into communicators. Over time, as communicators transition to immune individuals or rumor refuters, the number of communicators gradually decreases and eventually stabilizes. As more communicators and ignorant individuals become rumor refuters, the number of rumor refuters rises and stabilizes. Simultaneously, with some ignorant individuals, communicators, and rumor refuters becoming immune, the number of immune individuals significantly increases and gradually stabilizes. Ultimately, the model reaches a steady state, with each group's number stabilizing over time.
Fig. 4 to Fig. 7 depict the evolution of the densities of $X\left( t\right) ,Y\left( t\right)$ and $S\left( t\right)$ under different parameters, with ${\Lambda }_{1} = 1,{\Lambda }_{2} = 1,\eta = {0.3},\theta = {0.4},\phi = {0.15},{\xi }_{1} = {0.2},{\xi }_{2} = {0.5},{\mu }_{1} = {0.2},{\mu }_{2} = {0.2}$ .
Fig. 4 and Fig. 5 describe the effect of parameter $\lambda$ on the density change of $X\left( t\right)$ and $Y\left( t\right)$ respectively. Parameter $\lambda$ represents the probability that the unaffected media will be transformed into the affected media. It can be seen from the figure that the parameter $\lambda$ has a negative correlation with the density of $X\left( t\right)$ and a positive correlation with the density of $Y\left( t\right)$ . That is, with the increase of the parameter $\lambda$ , the rate of transformation from unaffected media to affected media increases. The number of unaffected media decreases gradually, and the number of affected media increases gradually, accelerating the spread of rumors in the media network layer.
Fig 4. Density of $X\left( t\right)$ under the parameter $\lambda$ .
Fig 5. Density of $Y\left( t\right)$ under the parameter $\lambda$ .
Fig. 6 and Fig. 7 describe the influence of parameters $\alpha$ and $\beta$ on the density change of $S\left( t\right)$ respectively. Parameter $\alpha$ represents the probability that the unknown person will become a spreader by accessing the affected media, and parameter $\beta$ represents the probability that the unknown person will become a spreader by contacting spreaders. It can be seen from the figure that the density of $S\left( t\right)$ decreases with the increase of parameters $\alpha$ and $\beta$ . Namely, with the increase of the propagation rate of individual network layer and double-layer network interaction, the number of unknowns gradually decreases, which accelerates the spread of rumors in the double-layer network.
Fig 6. Density of $S\left( t\right)$ under the parameter $\alpha$ .
Fig 7. Density of $S\left( t\right)$ under the parameter $\beta$ .
Fig. 8 and Fig. 9 depict the evolution of the density of $I\left( t\right)$ under different parameters, with ${\Lambda }_{1} = 1,{\Lambda }_{2} = 1,\eta = {0.3},\theta = {0.4},\phi = {0.15},{\xi }_{1} = {0.2},{\xi }_{2} = {0.5},{\mu }_{1} = {0.2},{\mu }_{2} = {0.2}$ .
Fig. 8 and Fig. 9 describe the influence of parameters $\alpha$ and $\beta$ on the density change of $I\left( t\right)$ respectively. Considering the meaning of parameters $\alpha$ and $\beta$ , it is easy to know that the values of parameters $\alpha$ and $\beta$ are positively correlated with the density of $I\left( t\right)$ . As can be seen from the figure, the density of $I\left( t\right)$ increases with the increase of parameters $\alpha$ and $\beta$ . That is, the increasing number of communicators promotes the expansion of the scale of communication, which is not conducive to the control of rumors.
§ V. CONCLUSION
At present, many scholars have separately studied the influence of media refutation or individual refutation on the spread of rumors. We believe that considering these two effects together is better than considering either alone. This paper integrates both media refutation and individual refutation into the analysis, introduces a novel ${XYZ} - {SIDR}$ two-layer rumor propagation model, and further demonstrates the existence and stability of the equilibrium points of the model. The research results show that this two-layer network model is more effective in controlling the spread of rumors.
Fig 8. Density of $I\left( t\right)$ under the parameter $\alpha$ .
Fig 9. Density of $I\left( t\right)$ under the parameter $\beta$ .
Theoretical analysis indicates that integrating both media and individual rumor refutation exerts a more significant and broader impact on rumor propagation. We suggest strengthening the dissemination of rumor-refutation information through the official media rather than relying solely on individuals to control the spread of rumors. The research conclusions can help relevant departments formulate effective measures to control the spread of rumors. On the other hand, the model established in this paper can also be applied, by analogy, to the study of infectious disease models.
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/CxWEOEhqo6/Initial_manuscript_md/Initial_manuscript.md
# Asynchronous Thruster Fault Detection for Unmanned Marine Vehicles under DoS Attacks

Fuxing Wang
School of Automation Engineering
University of Electronic Science and Technology of China
Chengdu 611731, China
wfx614328@163.com

Yue Long
School of Automation Engineering
University of Electronic Science and Technology of China
Chengdu 611731, China
longyue@uestc.edu.cn

Tieshan Li
School of Automation Engineering
University of Electronic Science and Technology of China
Chengdu 611731, China
tieshanli@126.com
Abstract-This paper investigates a thruster fault detection strategy for unmanned marine vehicles (UMVs) subjected to external disturbances and aperiodic Denial of Service (DoS) attacks. To address the challenge of timely detection of DoS attacks, the UMV and the corresponding filters are modeled within the framework of an asynchronous switched system. Sufficient conditions ensuring the system's exponential stability and prescribed performance are derived using model-dependent average dwell time and piecewise Lyapunov functions. Additionally, the tolerable lower bound of the sleep interval and the upper bound of the attack interval for DoS attacks are established. Solvable conditions for the designed fault detection filters are obtained by leveraging decoupling techniques. Finally, simulations conducted on a UMV validate the effectiveness of the proposed methods.

Index Terms-Unmanned marine vehicles, asynchronous switched system, DoS attacks, fault detection.

## I. INTRODUCTION

In recent years, unmanned marine vehicles (UMVs) have attracted significant attention in marine science and technology due to their wide-ranging applications in marine exploration, environmental monitoring, and resource development [1]. Nevertheless, the operational environment for UMVs is inherently complex, and their reliance on wireless networks for communication with shore-based centers makes them vulnerable to external disturbances, equipment malfunctions, cyber-attacks, and other disruptions [2]. The unpredictable nature of the potential harm caused by these disturbances or faults, combined with the inherent vulnerabilities of cyberspace, renders UMV systems particularly susceptible to cyber-attacks. These risks can result in system failures and potentially catastrophic accidents [3]. As a result, improving the reliability and security of UMVs has emerged as a crucial area of research and development.

The unpredictable nature of the harm that disturbances or faults cause to UMVs underscores the critical need for a real-time fault detection (FD) warning mechanism. The core of fault detection is to compare actual system behavior against expected performance in order to identify fault signals. Current research predominantly focuses on model-based fault detection, which has shown significant success in various systems, including continuous-discrete systems [4], T-S fuzzy systems [5], and Markovian jump systems [6]. The primary approach involves generating residual signals through filters or observers and subsequently establishing a fault warning mechanism. For UMVs, several studies have made noteworthy contributions: [7] explored observer-based controller and FD filter design for networked UMVs, [8] proposed event-triggered fault detection mechanisms for UMVs in networked environments, and [2] utilized T-S fuzzy systems to model UMVs, particularly addressing fault detection under replay attacks. Despite these advancements, the scope of fault detection research for UMVs remains relatively narrow [9]. Consequently, further investigation into robust and holistic fault detection strategies for UMVs is imperative to enhance their reliability and operational safety [10].

On the other hand, due to the openness of cyberspace, UMV systems are particularly vulnerable to cyber-attacks, among which deception attacks and Denial of Service (DoS) attacks are currently the most common [11]. Deception attacks send incorrect or tampered data to the system [12], and include replay attacks [13] and false data injection attacks [14]. Compared with deception attacks, DoS attacks make signal transmission unavailable for a period, leaving the system in an open-loop state and thus more easily causing severe disruption of system operation. Consequently, numerous studies on DoS attacks have emerged [15], [16].

Most existing research, however, assumes that DoS attacks can be detected promptly, so that the filter corresponding to each subsystem switches simultaneously with the subsystem [10], [17]. In practice, detecting DoS attacks in a timely manner is challenging, which introduces delays: the filter takes additional time to adjust to the control mode matching the subsystem mode, resulting in asynchronous filter/subsystem switching [18]. As a result, filters designed for synchronous switching may not provide optimal detection performance in real-world scenarios [19]. Thus, incorporating asynchronous switching into thruster fault detection for UMVs under DoS attacks is of substantial practical significance.

---

This work is supported in part by the National Natural Science Foundation of China under Grants 62273072, 51939001. (Corresponding author: Yue Long)

---
Inspired by the preceding discussion, this paper investigates thruster fault detection (FD) for UMVs under DoS attacks using an asynchronous switched method to enhance reliability and security. To address the challenge of timely DoS attack detection, an asynchronous switched filter is designed specifically for thruster fault detection. Furthermore, leveraging model-dependent average dwell time (MDADT) and piecewise Lyapunov functions (PLF), the tolerable lower bound of the sleep interval and the upper bound of the attack interval for DoS attacks are established, and the filter parameters are determined from linearly solvable conditions. The effectiveness of the proposed method is ultimately validated through simulation.

## II. PROBLEM FORMULATION AND MODELING

### A. UMV Model

Consider a UMV described by the following body-fixed equations of motion:

$$
M\dot{\delta}\left( t\right) + N\delta\left( t\right) + R\psi\left( t\right) = E\varphi\left( t\right), \tag{1}
$$

$$
\dot{\psi}\left( t\right) = J\left( {\eta\left( t\right) }\right) \delta\left( t\right),
$$

where $\delta\left( t\right) = {\left\lbrack {\delta}_{u}\left( t\right), {\delta}_{v}\left( t\right), {\delta}_{r}\left( t\right) \right\rbrack}^{T}$, with ${\delta}_{u}\left( t\right)$, ${\delta}_{v}\left( t\right)$, and ${\delta}_{r}\left( t\right)$ the surge, sway, and yaw velocities, respectively; $\psi\left( t\right) = {\left\lbrack {x}_{p}\left( t\right), {y}_{p}\left( t\right), \eta\left( t\right) \right\rbrack}^{T}$, with ${x}_{p}\left( t\right)$ and ${y}_{p}\left( t\right)$ the positions and $\eta\left( t\right)$ the yaw angle; and $\varphi\left( t\right)$ is the control input. $M$, $N$, $R$, and $E$ denote the inertia, damping, mooring-force, and configuration matrices, respectively, where $M$ is symmetric positive definite and invertible ($M = M^{T} > 0$), and

$J\left( {\eta\left( t\right) }\right) = \left\lbrack \begin{matrix} \cos\left( {\eta\left( t\right) }\right) & -\sin\left( {\eta\left( t\right) }\right) & 0 \\ \sin\left( {\eta\left( t\right) }\right) & \cos\left( {\eta\left( t\right) }\right) & 0 \\ 0 & 0 & 1 \end{matrix}\right\rbrack.$
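As a quick numerical check (a NumPy sketch, not part of the paper), the kinematic matrix $J(\eta(t))$ above is a planar rotation: it is orthogonal with unit determinant, so its inverse is simply its transpose:

```python
import numpy as np

def J(eta):
    # Body-to-earth kinematic transformation from Eq. (1):
    # a planar rotation by the yaw angle eta, identity on the yaw rate.
    c, s = np.cos(eta), np.sin(eta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

eta = 0.7  # an arbitrary yaw angle, for illustration
Jm = J(eta)
# J is orthogonal with unit determinant, so J^{-1} = J^T.
ok_orth = bool(np.allclose(Jm @ Jm.T, np.eye(3)))
ok_det = bool(np.isclose(np.linalg.det(Jm), 1.0))
```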
Then, defining $x\left( t\right) = \delta\left( t\right) - {\delta}_{\text{ref}}$, $A = -M^{-1}N$, ${B}_{1} = M^{-1}R$, and ${B}_{2} = M^{-1}E$, and taking into account the unavoidable disturbance $\widetilde{d}\left( t\right)$ caused by wind, waves, and currents, system (1) can be expressed as

$$
\left\{ \begin{array}{l} \dot{x}\left( t\right) = {Ax}\left( t\right) + {B}_{1}d\left( t\right) + {B}_{2}\varphi\left( t\right), \\ y\left( t\right) = {Cx}\left( t\right), \end{array}\right. \tag{2}
$$

where $d\left( t\right) = {B}_{1}^{-1}\widetilde{d}\left( t\right) - \psi\left( t\right) + {B}_{1}^{-1}A{\delta}_{\text{ref}}$ and $C = \left\lbrack \begin{array}{lll} 0 & 0 & 1 \end{array}\right\rbrack$ denotes the output matrix.

Considering the thruster fault model ${\varphi}^{F}\left( t\right) = {\rho\varphi}\left( t\right) + {\sigma f}\left( t\right)$ and assuming the control input $\varphi\left( t\right) = {Kx}\left( t\right)$ has been designed, (2) can be rewritten as

$$
\left\{ \begin{array}{l} \dot{x}\left( t\right) = \widehat{A}x\left( t\right) + {B}_{1}d\left( t\right) + {B}_{2}\widehat{f}\left( t\right), \\ y\left( t\right) = {Cx}\left( t\right), \end{array}\right. \tag{3}
$$

where $\widehat{A} = A + {B}_{2}K$ and $\widehat{f}\left( t\right) = -\bar{\rho}\varphi\left( t\right) + {\sigma f}\left( t\right)$.
### B. DoS Attacks Model

Consider aperiodic DoS attacks of the form

$$
{A}_{\mathrm{DoS}} = \left\{ \begin{matrix} 0, & t \in \left\lbrack {{t}_{2l},{t}_{{2l} + 1}}\right) \triangleq {\kappa}_{0,{2l}}, \\ 1, & t \in \left\lbrack {{t}_{{2l} + 1},{t}_{2\left( {l + 1}\right) }}\right) \triangleq {\kappa}_{1,{2l}}, \end{matrix}\right. \tag{4}
$$

where $t \in \left\lbrack {{t}_{2l},{t}_{{2l} + 1}}\right) \triangleq {\kappa}_{0,{2l}}$ ($l \in \mathbb{N}$, ${t}_{2l} \geq 0$) denotes the $l$-th sleep interval, of length ${s}_{l} = {t}_{{2l} + 1} - {t}_{2l}$, and $t \in \left\lbrack {{t}_{{2l} + 1},{t}_{2\left( {l + 1}\right) }}\right) \triangleq {\kappa}_{1,{2l}}$ denotes the $l$-th DoS attack interval, of length ${d}_{l} = {t}_{2\left( {l + 1}\right) } - {t}_{{2l} + 1}$.

Due to the communication disruption caused by DoS attacks, the UMV system (3) can be augmented, after discretization, into the following switched system, where the sleep interval is expressed as $k \in \left\lbrack {{k}_{2l},{k}_{{2l} + 1}}\right)$ and the DoS attack interval as $k \in \left\lbrack {{k}_{{2l} + 1},{k}_{2\left( {l + 1}\right) }}\right)$:

$$
\left\{ \begin{array}{l} x\left( {k + 1}\right) = {A}_{id}x\left( k\right) + {B}_{1id}d\left( k\right) + {B}_{2id}\widehat{f}\left( k\right), \\ y\left( k\right) = {C}_{d}x\left( k\right). \end{array}\right. \tag{5}
$$
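The aperiodic on/off structure of (4) can be sketched as follows; the uniformly drawn interval lengths and their ranges are assumptions for illustration, since the model only requires alternating sleep and attack intervals:

```python
import random

def dos_signal(num_cycles, s_min, s_max, d_min, d_max, seed=0):
    # Aperiodic DoS pattern from Eq. (4): alternate sleep intervals
    # (channel available, A_DoS = 0) and attack intervals (A_DoS = 1),
    # with lengths drawn uniformly from the given ranges (assumed here).
    rng = random.Random(seed)
    signal = []
    for _ in range(num_cycles):
        signal += [0] * rng.randint(s_min, s_max)  # sleep: s_l steps
        signal += [1] * rng.randint(d_min, d_max)  # attack: d_l steps
    return signal

sig = dos_signal(4, s_min=8, s_max=15, d_min=2, d_max=5)
attack_ratio = sum(sig) / len(sig)
```

With attack intervals kept shorter than sleep intervals, the channel is unavailable only a minority of the time, which is the regime the dwell-time conditions of Section III quantify.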
### C. Asynchronous Switching Filter

In the presence of DoS attacks and thruster faults, the residual signal is produced by the following switched filter:

$$
\left\{ {\begin{array}{l} {x}_{f}\left( {k + 1}\right) = {A}_{fi}{x}_{f}\left( k\right) + {B}_{fi}y\left( k\right), \\ r\left( k\right) = {C}_{fi}{x}_{f}\left( k\right) + {D}_{fi}y\left( k\right), \end{array}\;\left( {i = 0,1}\right) }\right. \tag{6}
$$

where ${x}_{f}\left( k\right)$ is the filter state and $r\left( k\right)$ is the residual signal of the switched system (5). Defining $\widetilde{x}\left( k\right) = {\left\lbrack \begin{array}{ll} {x}^{T}\left( k\right) & {x}_{f}^{T}\left( k\right) \end{array}\right\rbrack}^{T}$, $\varpi\left( k\right) = {\left\lbrack \begin{array}{ll} {d}^{T}\left( k\right) & {f}^{T}\left( k\right) \end{array}\right\rbrack}^{T}$, and the residual evaluation signal $e\left( k\right) = r\left( k\right) - \widehat{f}\left( k\right)$, (6) can be rewritten as

$$
{\Phi}_{0} : \left\{ {\begin{array}{l} \widetilde{x}\left( {k + 1}\right) = {\widetilde{A}}_{i}\widetilde{x}\left( k\right) + {\widetilde{B}}_{i}\varpi\left( k\right), \\ e\left( k\right) = {\widetilde{C}}_{i}\widetilde{x}\left( k\right) + {\widetilde{D}}_{i}\varpi\left( k\right), \end{array}\; k \in \left\lbrack {{k}_{l} + {\varepsilon}_{l},{k}_{l + 1}}\right) }\right.
$$

$$
{\Phi}_{1} : \left\{ {\begin{array}{l} \widetilde{x}\left( {k + 1}\right) = {\widetilde{A}}_{ij}\widetilde{x}\left( k\right) + {\widetilde{B}}_{ij}\varpi\left( k\right), \\ e\left( k\right) = {\widetilde{C}}_{ij}\widetilde{x}\left( k\right) + {\widetilde{D}}_{ij}\varpi\left( k\right), \end{array}\; k \in \left\lbrack {{k}_{l},{k}_{l} + {\varepsilon}_{l}}\right) }\right. \tag{7}
$$

where $i \neq j$, $i, j \in \{ 0,1\}$, ${\widetilde{A}}_{ij} = \left\lbrack \begin{matrix} {A}_{id} & 0 \\ {B}_{fj}{C}_{d} & {A}_{fj} \end{matrix}\right\rbrack$, ${\widetilde{B}}_{ij} = \left\lbrack \begin{matrix} {B}_{1i} & {B}_{2i} \\ 0 & 0 \end{matrix}\right\rbrack$, ${\widetilde{C}}_{ij} = \left\lbrack \begin{array}{ll} {D}_{fj}{C}_{d} & {C}_{fj} \end{array}\right\rbrack$, and ${\widetilde{D}}_{ij} = \left\lbrack \begin{array}{ll} 0 & -I \end{array}\right\rbrack$.
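To illustrate how a filter of the form (6) generates a residual, here is a toy single-state instance; all matrices are illustrative placeholders (one mode, no switching), not values from the paper:

```python
import numpy as np

# Toy instance of the switched FD filter (6); placeholder matrices.
A = np.array([[0.9]]); C = np.array([[1.0]])
Af = np.array([[0.5]]); Bf = np.array([[0.4]])
Cf = np.array([[-1.0]]); Df = np.array([[1.0]])

x = np.zeros(1); xf = np.zeros(1)
residuals = []
for k in range(60):
    f = 0.5 if k >= 30 else 0.0        # additive thruster fault at k = 30
    y = C @ x
    r = Cf @ xf + Df @ y               # residual r(k) = C_f x_f + D_f y
    residuals.append(float(r[0]))
    xf = Af @ xf + Bf @ y              # filter state update
    x = A @ x + np.array([f])          # plant state update with fault

pre = max(abs(v) for v in residuals[:30])   # residual before the fault
post = max(abs(v) for v in residuals[30:])  # residual after the fault
```

In the fault-free phase the residual stays at zero, and after the fault enters at $k = 30$ the residual becomes nonzero, which is the signal a threshold-based warning mechanism evaluates.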
To better set the stage for the next section, the following definitions are presented.

Definition 1: For any switching signal $\tau\left( k\right)$ and $0 < {k}_{0} \leq k$, let ${\mathcal{M}}_{\tau, l}\left( {{k}_{0}, k}\right)$ denote the number of times the $l$-th subsystem is activated over $\left\lbrack {{k}_{0}, k}\right)$. If

$$
{\mathcal{M}}_{\tau, l}\left( {{k}_{0}, k}\right) \leq {N}_{{\mathcal{M}}_{0, l}} + \frac{{N}_{l}\left( {{k}_{0}, k}\right) }{{\lambda}_{l}}
$$

holds for a scalar ${\lambda}_{l} > 0$ and an integer ${N}_{{\mathcal{M}}_{0, l}} \geq 0$, then ${\lambda}_{l}$ is called the model-dependent average dwell time, where ${N}_{l}\left( {{k}_{0}, k}\right)$ is the total running time of the $l$-th subsystem over $\left\lbrack {{k}_{0}, k}\right)$.
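Definition 1 can be checked mechanically on a concrete switching signal; the mode sequence and the claimed dwell times below are illustrative, not taken from the paper:

```python
# Check the MDADT bound of Definition 1 on a concrete switching signal:
# tau(k) over k = 0..19, alternating between modes 0 and 1.
modes = [0]*5 + [1]*3 + [0]*6 + [1]*2 + [0]*4

def mdadt_holds(modes, l, lam, N0):
    # M: number of activations of mode l; N: total running time of mode l.
    M = sum(1 for k, m in enumerate(modes)
            if m == l and (k == 0 or modes[k - 1] != l))
    N = sum(1 for m in modes if m == l)
    return M <= N0 + N / lam

# Mode 0 runs 15 steps across 3 activations: average dwell time 5.
ok = mdadt_holds(modes, l=0, lam=5.0, N0=0)    # bound satisfied
bad = mdadt_holds(modes, l=0, lam=8.0, N0=0)   # claimed dwell time too large
```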
Definition 2: Consider the asynchronous switched subsystems ${\Phi}_{0}$ and ${\Phi}_{1}$, and let scalars $\alpha$, $\beta$, and $\gamma$ satisfy $0 < \alpha < 1$, $\beta > 0$, and $\gamma > 0$. Under zero initial conditions, if the asynchronous switched system is exponentially stable and satisfies $\mathop{\sum }\limits_{{s = {k}_{0}}}^{\infty }{\left( 1 - \alpha \right) }^{s}{e}^{\mathrm{T}}\left( s\right) e\left( s\right) \leq {\gamma}^{2}\mathop{\sum }\limits_{{s = {k}_{0}}}^{\infty }{\varpi}^{\mathrm{T}}\left( s\right) \varpi\left( s\right)$, the system is said to be exponentially stable with exponential ${H}_{\infty }$ performance index $\gamma$.

## III. MAIN RESULTS

In this section, the stability and ${H}_{\infty }$ performance of the asynchronous switched system (7) are analyzed, and sufficient, linearly solvable conditions for the designed switched FD filters are given.

Theorem 1: Consider the switched subsystems ${\Phi}_{0}$ and ${\Phi}_{1}$ under DoS attacks, and let scalars ${\alpha}_{i}$, ${\beta}_{i}$, $\gamma$, ${\mu}_{0}$, and ${\mu}_{1}$ satisfy $0 < {\alpha}_{i} < 1$, ${\beta}_{i} > 0$, $\gamma > 0$, ${\mu}_{0} > 1$, and $0 < {\mu}_{1} < 1$. If there exist symmetric positive-definite matrices ${\mathcal{P}}_{i}$ satisfying the following conditions
$$
{\widetilde{A}}_{i}^{T}{\mathcal{P}}_{i}{\widetilde{A}}_{i} - {\mathcal{P}}_{i} + {\alpha}_{i}{\mathcal{P}}_{i} < 0, \tag{8}
$$

$$
{\widetilde{A}}_{ij}^{T}{\mathcal{P}}_{i}{\widetilde{A}}_{ij} - {\mathcal{P}}_{i} - {\beta}_{i}{\mathcal{P}}_{i} < 0, \tag{9}
$$

$$
{\mathcal{P}}_{i} \leq {\mu}_{i}{\mathcal{P}}_{j}, \tag{10}
$$

$$
{\tau}_{D} < \frac{{\varepsilon}_{M}\ln {\phi}_{1} + \ln {\mu}_{1}}{\ln {\widetilde{\alpha}}_{1}}, \qquad {\tau}_{F} > -\frac{{\varepsilon}_{M}\ln {\phi}_{0} + \ln {\mu}_{0}}{\ln {\widetilde{\alpha}}_{0}}, \tag{11}
$$

then the switched subsystems ${\Phi}_{0}$ and ${\Phi}_{1}$ are exponentially asymptotically stable with the prescribed exponential ${H}_{\infty }$ performance, where $i \neq j$, ${\widetilde{\alpha}}_{i} = 1 - {\alpha}_{i}$, ${\widetilde{\beta}}_{i} = 1 + {\beta}_{i}$, ${\phi}_{i} = \frac{{\widetilde{\beta}}_{i}}{{\widetilde{\alpha}}_{i}}$, and ${\varepsilon}_{M}$ denotes the maximum time by which the filter lags the subsystem.
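In the scalar case, conditions (8)-(11) reduce to simple eigenvalue and logarithm checks. The sketch below verifies them for illustrative values (all matrices and scalars are assumptions, not the paper's simulation data):

```python
import numpy as np

# Verify Theorem 1's conditions (8)-(11) for illustrative scalar data.
A0 = np.array([[0.8]])    # synchronous closed-loop matrix (A-tilde_i)
A01 = np.array([[1.05]])  # asynchronous (mismatched) matrix (A-tilde_ij)
P = np.array([[1.0]])     # Lyapunov matrix P_i
alpha, beta = 0.2, 0.15

def neg_def(M):
    # A symmetric matrix is negative definite iff all eigenvalues are < 0.
    return bool(np.all(np.linalg.eigvalsh((M + M.T) / 2) < 0))

cond8 = neg_def(A0.T @ P @ A0 - P + alpha * P)   # (8): decay while matched
cond9 = neg_def(A01.T @ P @ A01 - P - beta * P)  # (9): bounded growth while mismatched

# Dwell-time bounds (11), taking alpha_0 = alpha_1 and beta_0 = beta_1:
eps_M, mu0, mu1 = 2, 1.2, 0.3
a_t = 1 - alpha                      # alpha-tilde
phi = (1 + beta) / (1 - alpha)       # phi_i = beta-tilde / alpha-tilde
tauD_max = (eps_M * np.log(phi) + np.log(mu1)) / np.log(a_t)
tauF_min = -(eps_M * np.log(phi) + np.log(mu0)) / np.log(a_t)
```

For these numbers both matrix inequalities hold, and (11) yields a positive upper bound on the attack interval `tauD_max` together with a positive lower bound on the sleep interval `tauF_min`, consistent with the attack being tolerable only when sleep phases dominate.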
Proof: The piecewise Lyapunov function for the closed-loop switched subsystems ${\Phi}_{0}$ and ${\Phi}_{1}$ is chosen as

$$
{\mathcal{V}}_{i}\left( {\widetilde{x}\left( k\right) }\right) = {\widetilde{x}}^{T}\left( k\right) {\mathcal{P}}_{i}\widetilde{x}\left( k\right). \tag{12}
$$

When $\varpi\left( k\right) = 0$ and $k \in \left\lbrack {{k}_{2l},{k}_{{2l} + 1}}\right)$, it can be obtained that

$$
\mathcal{V}\left( {\widetilde{x}\left( k\right) }\right) \leq \left\{ \begin{array}{ll} {\widetilde{\alpha}}_{i}^{k - {k}_{2l} - {\varepsilon}_{2l}}{\mathcal{V}}_{i}\left( {\widetilde{x}\left( {{k}_{2l} + {\varepsilon}_{2l}}\right) }\right), & k \in {\Gamma}^{+}, \\ {\widetilde{\beta}}_{i}^{k - {k}_{2l}}{\mathcal{V}}_{i}\left( {\widetilde{x}\left( {k}_{2l}\right) }\right), & k \in {\Gamma}^{-}, \end{array}\right. \tag{13}
$$

where ${\widetilde{\alpha}}_{i} = 1 - {\alpha}_{i}$ and ${\widetilde{\beta}}_{i} = 1 + {\beta}_{i}$. When $k \in {\mathcal{T}}^{+}\left( {{k}_{2l},{k}_{{2l} + 1}}\right)$, it follows from (8) and (11) that
$$
\begin{aligned}
\mathcal{V}\left( {\widetilde{x}\left( k\right) }\right) &\leq {\widetilde{\alpha}}_{0}^{k - {k}_{2l} - {\varepsilon}_{2l}}{\mathcal{V}}_{0}\left( {\widetilde{x}\left( {{k}_{2l} + {\varepsilon}_{2l}}\right) }\right) \\
&\leq {\widetilde{\alpha}}_{0}^{k - {k}_{2l} - {\varepsilon}_{2l}} \cdot {\widetilde{\beta}}_{0}^{{\varepsilon}_{2l}} \cdot {\mathcal{V}}_{0}\left( {\widetilde{x}\left( {k}_{2l}\right) }\right) \\
&\;\;\vdots \\
&\leq \theta \exp \Big\{ \max \Big( \frac{{\varepsilon}_{M}\ln {\phi}_{0} + \ln {\mu}_{0}}{{\tau}_{F}} + {v}_{0},\; -\frac{{\varepsilon}_{M}\ln {\phi}_{1} + \ln {\mu}_{1}}{{\tau}_{D}} + {v}_{1} \Big) \\
&\qquad\quad \cdot \big( {{\Xi}_{F}\left( {{k}_{0}, k}\right) + {\Xi}_{D}\left( {{k}_{0}, k}\right) } \big) \Big\} \mathcal{V}\left( {\widetilde{x}\left( {k}_{0}\right) }\right), 
\end{aligned} \tag{14}
$$

where $\theta = \exp \left\lbrack {\left( {{\varepsilon}_{M}\ln {\phi}_{0} + \ln {\mu}_{0}}\right) {\xi}_{F} - \left( {{\varepsilon}_{M}\ln {\phi}_{1} + \ln {\mu}_{1}}\right) {\xi}_{D}}\right\rbrack$, $\omega = \max \left\{ {-\frac{{\varepsilon}_{M}\ln {\phi}_{0} + \ln {\mu}_{0}}{{\tau}_{F}} - \ln {\widetilde{\alpha}}_{0},\frac{{\varepsilon}_{M}\ln {\phi}_{1} + \ln {\mu}_{1}}{{\tau}_{D}} - \ln {\widetilde{\alpha}}_{1}}\right\}$, ${\chi}_{0} = {\theta}_{0}^{{\varepsilon}_{M}}{\mu}_{0}$, ${\chi}_{1} = {\theta}_{1}^{{\varepsilon}_{M}}{\mu}_{1}$, and ${v}_{i} = \ln {\widetilde{\alpha}}_{i}$.

From (11), we have $\omega > 0$, and it is then clear that $\mathcal{V}\left( {\widetilde{x}\left( k\right) }\right)$ converges to zero as $k \rightarrow \infty$. Therefore, the closed-loop switched subsystems ${\Phi}_{0}$ and ${\Phi}_{1}$ are exponentially asymptotically stable when (8) and (11) hold.
Next, when $\varpi\left( k\right) \neq 0$ for $k \in \left\lbrack {{k}_{2l},{k}_{{2l} + 1}}\right)$, under zero initial conditions, it can be derived from (12) that

$$
\Delta {\mathcal{V}}_{i}\left( {\widetilde{x}\left( k\right) }\right) < \left\{ \begin{array}{ll} -{\alpha}_{i}{\mathcal{V}}_{i}\left( {\widetilde{x}\left( k\right) }\right) - \Upsilon\left( k\right), & k \in {\Gamma}^{+}, \\ {\beta}_{i}{\mathcal{V}}_{i}\left( {\widetilde{x}\left( k\right) }\right) - \Upsilon\left( k\right), & k \in {\Gamma}^{-}, \end{array}\right. \tag{15}
$$

where $i = 0,1$ and $\Upsilon\left( k\right) = {e}^{T}\left( k\right) e\left( k\right) - {\gamma}^{2}{\varpi}^{T}\left( k\right) \varpi\left( k\right)$. When $k \in {\mathcal{T}}^{+}\left( {{k}_{2l},{k}_{{2l} + 1}}\right)$, the following inequality can be obtained in a similar way from (10) and (15):
$$
\begin{aligned}
\mathcal{V}\left( {\widetilde{x}\left( k\right) }\right) \leq\; & {\widetilde{\alpha}}_{0}^{k - {k}_{2l}}{\widetilde{\alpha}}_{0}^{{k}_{{2l} - 1} - {k}_{{2l} - 2}}\cdots {\widetilde{\alpha}}_{0}^{{k}_{1} - {k}_{0}}{\phi}_{0}^{{\varepsilon}_{2l}}{\phi}_{0}^{{\varepsilon}_{{2l} - 2}}\cdots {\phi}_{0}^{{\varepsilon}_{0}}{\mu}_{0}^{{\mathrm{M}}_{F}\left( {{k}_{0}, k}\right) } \\
& \cdot {\widetilde{\alpha}}_{1}^{{k}_{2l} - {k}_{{2l} - 1}}\cdots {\widetilde{\alpha}}_{1}^{{k}_{2} - {k}_{1}}{\phi}_{1}^{{\varepsilon}_{{2l} - 1}}\cdots {\phi}_{1}^{{\varepsilon}_{1}}{\mu}_{1}^{{\mathrm{M}}_{D}\left( {{k}_{0}, k}\right) }\mathcal{V}\left( {\widetilde{x}\left( {k}_{0}\right) }\right) \\
& - {\widetilde{\alpha}}_{0}^{k - {k}_{2l}}{\widetilde{\alpha}}_{0}^{{k}_{{2l} - 1} - {k}_{{2l} - 2}}\cdots {\widetilde{\alpha}}_{0}^{{k}_{1} - {k}_{0}}{\phi}_{0}^{{\varepsilon}_{2l}}{\phi}_{0}^{{\varepsilon}_{{2l} - 2}}\cdots {\phi}_{0}^{{\varepsilon}_{0}}{\mu}_{0}^{{\mathrm{M}}_{F}\left( {{k}_{0}, k}\right) } \\
& \cdot {\widetilde{\alpha}}_{1}^{{k}_{2l} - {k}_{{2l} - 1}}\cdots {\widetilde{\alpha}}_{1}^{{k}_{2} - {k}_{1}}{\phi}_{1}^{{\varepsilon}_{{2l} - 1}}\cdots {\phi}_{1}^{{\varepsilon}_{1}}{\mu}_{1}^{{\mathrm{M}}_{D}\left( {{k}_{0}, k}\right) }\mathop{\sum }\limits_{{s = {k}_{0} + {\Delta}_{0}}}^{{{k}_{1} - 1}}{\widetilde{\alpha}}_{0}^{{k}_{1} - s - 1}\Upsilon\left( s\right) \\
& - \cdots - {\widetilde{\alpha}}_{0}^{k - {k}_{2l}}\cdots {\widetilde{\alpha}}_{0}^{{k}_{1} - {k}_{0}}{\phi}_{0}^{{\varepsilon}_{2l}}\cdots {\phi}_{0}^{{\varepsilon}_{0}}{\mu}_{0}^{{\mathrm{M}}_{F}\left( {{k}_{0}, k}\right) }{\widetilde{\alpha}}_{1}^{{k}_{2l} - {k}_{{2l} - 1}}\cdots {\widetilde{\alpha}}_{1}^{{k}_{2} - {k}_{1}}{\phi}_{1}^{{\varepsilon}_{{2l} - 1}}\cdots {\phi}_{1}^{{\varepsilon}_{1}}{\mu}_{1}^{{\mathrm{M}}_{D}\left( {{k}_{0}, k}\right) } \\
& \cdot \mathop{\sum }\limits_{{s = {k}_{0}}}^{{{\hslash}_{0} - 1}}{\widetilde{\alpha}}^{{k}_{1} - {\hslash}_{0}}{\phi}_{0}^{{\hslash}_{0} - s - 1}\Upsilon\left( s\right) - \mathop{\sum }\limits_{{s = {\hslash}_{2l}}}^{{k - 1}}{\widetilde{\alpha}}_{0}^{k - s - 1}\Upsilon\left( s\right) - \mathop{\sum }\limits_{{s = {k}_{2l}}}^{{{\hslash}_{2l} - 1}}{\widetilde{\alpha}}_{0}^{k - s - 1}{\phi}_{0}^{{\hslash}_{2l} - s - 1}\Upsilon\left( s\right).
\end{aligned} \tag{16}
$$

Since ${\varepsilon}_{M} = \max \left\{ {\varepsilon}_{i}\right\}$ and $1 < {\phi}_{0}^{{k}_{2l} + {\varepsilon}_{2l} - s - 1} < {\phi}_{0}^{{\varepsilon}_{M} - 1}$, under zero initial conditions ($\mathcal{V}\left( {\widetilde{x}\left( {k}_{0}\right) }\right) = 0$ and $\mathcal{V}\left( {\widetilde{x}\left( k\right) }\right) \geq 0$) and according to Definition 1, it can be obtained that
$$
\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\widetilde{\alpha}}_{0}^{k - s - 1}{\widetilde{\alpha}}_{0}^{{\Xi}_{F}\left( {{k}_{0}, s}\right) }{\widetilde{\alpha}}_{1}^{{\Xi}_{D}\left( {{k}_{0}, s}\right) }{e}^{T}\left( s\right) e\left( s\right) \leq {\chi}_{0}^{{\xi}_{F}}{\chi}_{1}^{{\xi}_{D}}{\gamma}^{2}\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\widetilde{\alpha}}_{0}^{k - s - 1}{\theta}_{0}^{{\varepsilon}_{M} - 1}{\varpi}^{T}\left( s\right) \varpi\left( s\right). \tag{17}
$$

Summing (17) over $\lbrack k,\infty )$ gives
$$
\mathop{\sum }\limits_{{k = {k}_{0}}}^{\infty }\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\widetilde{\alpha}}_{0}^{k - s - 1}{\widetilde{\alpha}}^{s - {k}_{0}}{e}^{T}\left( s\right) e\left( s\right) \leq {\chi}_{0}^{{\xi}_{F}}{\chi}_{1}^{{\xi}_{D}}{\gamma}^{2}\mathop{\sum }\limits_{{k = {k}_{0}}}^{\infty }\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\widetilde{\alpha}}_{0}^{k - s - 1}{\theta}_{0}^{{\varepsilon}_{M} - 1}{\varpi}^{T}\left( s\right) \varpi\left( s\right), \tag{18}
$$

which is equivalent to

$$
\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\widetilde{\alpha}}^{s - {k}_{0}}{e}^{T}\left( s\right) e\left( s\right) \leq {\chi}_{0}^{{\xi}_{F}}{\chi}_{1}^{{\xi}_{D}}{\theta}_{0}^{{\varepsilon}_{M} - 1}{\gamma}^{2}\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\varpi}^{T}\left( s\right) \varpi\left( s\right). \tag{19}
$$

Thus, the closed-loop switched subsystems ${\Phi}_{0}$ and ${\Phi}_{1}$ are exponentially asymptotically stable and satisfy the exponential ${H}_{\infty }$ performance index ${\gamma}_{s} = \sqrt{{\left( {\theta}_{0}^{{\varepsilon}_{M}}{\mu}_{0}\right) }^{{\xi}_{F}}{\left( {\theta}_{1}^{{\varepsilon}_{M}}{\mu}_{1}\right) }^{{\xi}_{D}}{\theta}_{0}^{{\varepsilon}_{M} - 1}} \cdot \gamma$, which completes the proof.

Due to the presence of numerous unknown matrix couplings, it is typically difficult to obtain the filter gains directly from Theorem 1. Therefore, linearly solvable conditions for the designed filters are proposed in Theorem 2.
|
| 280 |
+
|
| 281 |
+
Theorem 2: Consider the switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ , under DoS attacks with ${\tau }_{F}$ and ${\tau }_{D}$ , scalar ${\alpha }_{i},{\beta }_{i},\gamma ,{\mu }_{0}$ and ${\mu }_{1}$ satisfying $0 < {\alpha }_{i} < 1,{\beta }_{i} > 0,\gamma > 0,{\mu }_{0} > 1$ and $0 < {\mu }_{1} < 1$ . If there exist symmetric positive-definite matrices ${\mathcal{P}}_{i1},{\mathcal{P}}_{i3}$ , matrices ${\mathcal{P}}_{i2},{\mathcal{G}}_{i},{\mathcal{Q}}_{i},{\mathcal{R}}_{i},{\mathcal{A}}_{Fi},{\mathcal{B}}_{Fi},{\mathcal{C}}_{Fi},{\mathcal{D}}_{Fi}$ , scalar $\gamma , i, j, i \neq j$ satisfying the following conditions
|
| 282 |
+
|
| 283 |
+
$$
|
| 284 |
+
\left\lbrack \begin{matrix} {\Pi }_{i}^{11} & {\Pi }_{i}^{12} & 0 & {\Pi }_{i}^{14} & {\mathcal{A}}_{Fi} & {\Pi }_{i}^{16} & {\Pi }_{i}^{17} \\ * & {\Pi }_{i}^{22} & 0 & {\Pi }_{i}^{24} & {\mathcal{A}}_{Fi} & {\Pi }_{i}^{26} & {\Pi }_{i}^{27} \\ * & * & - I & {\Pi }_{i}^{34} & {\mathcal{C}}_{Fi} & 0 & - I \\ * & * & * & - {\widetilde{\alpha }}_{i}{\mathcal{P}}_{i1} & - {\widetilde{\alpha }}_{i}{\mathcal{P}}_{i2} & 0 & 0 \\ * & * & * & * & - {\widetilde{\alpha }}_{i}{\mathcal{P}}_{i3} & 0 & 0 \\ * & * & * & * & * & - {\gamma }^{2}I & 0 \\ * & * & * & * & * & * & - {\gamma }^{2}I \end{matrix}\right\rbrack < 0,
|
| 285 |
+
$$
|
| 286 |
+
|
| 287 |
+
(20)
|
| 288 |
+
|
| 289 |
+
$$
|
| 290 |
+
\left\lbrack \begin{matrix} {\Pi }_{ij}^{11} & {\Pi }_{ij}^{12} & 0 & {\Pi }_{ij}^{14} & {\mathcal{A}}_{Fj} & {\Pi }_{ij}^{16} & {\Pi }_{ij}^{17} \\ * & {\Pi }_{i}^{22} & 0 & {\Pi }_{ij}^{24} & {\mathcal{A}}_{Fj} & {\Pi }_{ij}^{26} & {\Pi }_{ij}^{27} \\ * & * & - I & {\Pi }_{ij}^{34} & {\mathcal{C}}_{Fj} & 0 & - I \\ * & * & * & - {\widetilde{\beta }}_{i}{\mathcal{P}}_{i1} & - {\widetilde{\beta }}_{i}{\mathcal{P}}_{i2} & 0 & 0 \\ * & * & * & * & - {\widetilde{\beta }}_{i}{\mathcal{P}}_{i3} & 0 & 0 \\ * & * & * & * & * & - {\gamma }^{2}I & 0 \\ * & * & * & * & * & * & - {\gamma }^{2}I \end{matrix}\right\rbrack < 0
|
| 291 |
+
$$
|
| 292 |
+
|
| 293 |
+
(21)
|
| 294 |
+
|
| 295 |
+
$$
|
| 296 |
+
\left\lbrack \begin{matrix} {\Omega }^{11} & {\Omega }^{12} & {\mathcal{G}}_{i}^{T} & {\mathcal{R}}_{i} \\ * & {\Omega }^{22} & {\mathcal{Q}}_{i}^{T} & {\mathcal{R}}_{i} \\ * & * & - {\mu }_{i}{\mathcal{P}}_{j1} & - {\mu }_{i}{\mathcal{P}}_{j2} \\ * & * & * & - {\mu }_{i}{\mathcal{P}}_{j3} \end{matrix}\right\rbrack \leq 0 \tag{22}
|
| 297 |
+
$$
|
| 298 |
+
|
| 299 |
+
$$
|
| 300 |
+
{\tau }_{D} < \frac{{\varepsilon }_{M}\ln {\phi }_{1} + \ln {\mu }_{1}}{\ln {\widetilde{\alpha }}_{1}},{\tau }_{F} > - \frac{{\varepsilon }_{M}\ln {\phi }_{0} + \ln {\mu }_{0}}{\ln {\widetilde{\alpha }}_{0}}, \tag{23}
|
| 301 |
+
$$
|
| 302 |
+
|
| 303 |
+
the closed-loop switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ are exponentially stable and satisfy the exponential ${H}_{\infty }$ performance index ${\gamma }_{s} = \max \left\{ {\sqrt{{\left( {\theta }_{0}^{{\varepsilon }_{M}}{\mu }_{0}\right) }^{{\xi }_{F}}{\left( {\theta }_{1}^{{\varepsilon }_{M}}{\mu }_{1}\right) }^{{\xi }_{D}}{\theta }_{0}^{{\varepsilon }_{M} - 1}} \cdot \gamma }\right\}$ , where $\widetilde{\alpha } = 1 - \alpha$ , $\widetilde{\beta } = 1 + \beta$ , ${\Pi }_{i}^{11} = {\mathcal{P}}_{i1} - {\mathcal{G}}_{i} - {\mathcal{G}}_{i}^{T},$ ${\Pi }_{i}^{12} = {\mathcal{P}}_{i2} - {\mathcal{Q}}_{i} - {\mathcal{R}}_{i},{\Pi }_{i}^{14} = {\mathcal{G}}_{i}^{T}{A}_{id} + {\mathcal{B}}_{Fi}{C}_{d},$ ${\Pi }_{i}^{16} = {\mathcal{G}}_{i}^{T}{B}_{1i},{\Pi }_{i}^{17} = {\mathcal{G}}_{i}^{T}{B}_{2i},{\Pi }_{i}^{22} = {\mathcal{P}}_{i3} - {\mathcal{R}}_{i} - {\mathcal{R}}_{i}^{T},$ ${\Pi }_{i}^{24} = {\mathcal{Q}}_{i}^{T}{A}_{id} + {\mathcal{B}}_{Fi}{C}_{d},{\Pi }_{i}^{26} = {\mathcal{Q}}_{i}^{T}{B}_{1i},{\Pi }_{i}^{27} = {\mathcal{Q}}_{i}^{T}{B}_{2i},$ ${\Pi }_{i}^{34} = {\mathcal{D}}_{Fi}{C}_{d},{\Pi }_{ij}^{11} = {\mathcal{P}}_{i1} - {\mathcal{G}}_{j} - {\mathcal{G}}_{j}^{T},{\Pi }_{ij}^{12} = {\mathcal{P}}_{i2} - {\mathcal{Q}}_{j} - {\mathcal{R}}_{j},$ ${\Pi }_{ij}^{14} = {\mathcal{G}}_{j}^{T}{A}_{id} + {\mathcal{B}}_{Fj}{C}_{d},{\Pi }_{ij}^{16} = {\mathcal{G}}_{j}^{T}{B}_{1i},{\Pi }_{ij}^{17} = {\mathcal{G}}_{j}^{T}{B}_{2i},$ ${\Pi }_{ij}^{22} = {\mathcal{P}}_{i3} - {\mathcal{R}}_{j} - {\mathcal{R}}_{j}^{T},{\Pi }_{ij}^{24} = {\mathcal{Q}}_{j}^{T}{A}_{id} + {\mathcal{B}}_{Fj}{C}_{d},$ ${\Pi }_{ij}^{26} = {\mathcal{Q}}_{j}^{T}{B}_{1i},{\Pi }_{ij}^{27} = {\mathcal{Q}}_{j}^{T}{B}_{2i},{\Pi }_{ij}^{34} = {\mathcal{D}}_{Fj}{C}_{d},$ ${\Omega }^{11} = {\mathcal{P}}_{i1} - {\mu }_{i}\left( {{\mathcal{G}}_{i} + {\mathcal{G}}_{i}^{T}}\right) ,{\Omega }^{12} = {\mathcal{P}}_{i2} - {\mu }_{i}{\mathcal{Q}}_{i} - {\mu }_{i}{\mathcal{R}}_{i}^{T},{\Omega }^{22} = {\mathcal{P}}_{i3} - {\mu }_{i}\left( {{\mathcal{R}}_{i} + {\mathcal{R}}_{i}^{T}}\right) .$
|
| 304 |
+
|
| 305 |
+
In addition, if there is a solution to (20)-(23), then the filter gains can be obtained as
|
| 306 |
+
|
| 307 |
+
$$
|
| 308 |
+
\left\lbrack \begin{matrix} {\mathcal{A}}_{fi} & {\mathcal{B}}_{fi} \\ {\mathcal{C}}_{fi} & {\mathcal{D}}_{fi} \end{matrix}\right\rbrack = \left\lbrack \begin{matrix} {\mathcal{R}}_{i}{}^{-1} & 0 \\ 0 & I \end{matrix}\right\rbrack \left\lbrack \begin{matrix} {\mathcal{A}}_{Fi} & {\mathcal{B}}_{Fi} \\ {\mathcal{C}}_{Fi} & {\mathcal{D}}_{Fi} \end{matrix}\right\rbrack . \tag{24}
|
| 309 |
+
$$
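Equation (24) is a plain block-matrix product, so the filter gains follow from the LMI variables by a left multiplication with $\operatorname{blkdiag}({\mathcal{R}}_{i}^{-1}, I)$. A minimal numpy sketch (the matrix dimensions and values below are hypothetical, not taken from the paper):

```python
import numpy as np

def recover_filter_gains(R_i, A_Fi, B_Fi, C_Fi, D_Fi):
    """Recover (A_fi, B_fi, C_fi, D_fi) from the LMI variables via (24):
    the state-equation gains are pre-multiplied by R_i^{-1}, while the
    identity block leaves the output gains C_Fi, D_Fi unchanged."""
    R_inv = np.linalg.inv(R_i)
    A_fi = R_inv @ A_Fi
    B_fi = R_inv @ B_Fi
    return A_fi, B_fi, C_Fi, D_Fi

# Hypothetical LMI solutions, for illustration only.
A_fi, B_fi, C_fi, D_fi = recover_filter_gains(
    2.0 * np.eye(2), np.eye(2), np.ones((2, 1)),
    np.ones((1, 2)), np.zeros((1, 1)))
```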
|
| 310 |
+
|
| 311 |
+
Proof: Based on the Projection Lemma and the Schur complement Lemma, pre- and post-multiplying (8) shows that (8) and (20) are equivalent. Similarly, pre- and post-multiplying (9) shows that (9) and (21) are equivalent. This completes the proof of Theorem 2.
|
| 312 |
+
|
| 313 |
+
For the purpose of fault detection, the residual is obtained as the difference between the measured output and its estimate. The following residual evaluation function (REF) is designed
|
| 314 |
+
|
| 315 |
+
$$
|
| 316 |
+
{\mathcal{J}}_{r}\left( k\right) = \sqrt{\frac{1}{k}\mathop{\sum }\limits_{{s = 1}}^{k}{r}^{T}\left( s\right) r\left( s\right) }. \tag{25}
|
| 317 |
+
$$
|
| 318 |
+
|
| 319 |
+
The threshold for (25) is selected as
|
| 320 |
+
|
| 321 |
+
$$
|
| 322 |
+
{\mathcal{J}}_{th} = \mathop{\sup }\limits_{\substack{{d\left( k\right) \in {l}_{2}} \\ {f\left( k\right) = 0} }}{\mathcal{J}}_{r}\left( k\right) . \tag{26}
|
| 323 |
+
$$
|
| 324 |
+
|
| 325 |
+
Therefore, the fault detection decision logic is
|
| 326 |
+
|
| 327 |
+
$$
|
| 328 |
+
\left\{ \begin{matrix} \begin{Vmatrix}{{\mathcal{J}}_{r}\left( k\right) }\end{Vmatrix} > {\mathcal{J}}_{th} & \text{ Alarm } \\ \begin{Vmatrix}{{\mathcal{J}}_{r}\left( k\right) }\end{Vmatrix} \leq {\mathcal{J}}_{th} & \text{ No-alarm. } \end{matrix}\right. \tag{27}
|
| 329 |
+
$$
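The evaluation function (25), threshold (26) and decision logic (27) together form the warning mechanism; they can be sketched as follows (the residual samples and threshold value here are illustrative, not the paper's):

```python
import numpy as np

def residual_evaluation(r):
    """J_r(k) per (25): root-mean of sum_{s=1..k} r(s)^T r(s).
    Accepts a 1-D sequence of scalar residuals or a k x m array
    of vector residuals."""
    r = np.asarray(r, dtype=float)
    k = r.shape[0]
    return float(np.sqrt(np.sum(r * r) / k))

def detect_fault(r, J_th):
    """Decision logic (27): alarm iff J_r(k) exceeds the threshold J_th,
    which per (26) is the supremum of J_r in the fault-free case."""
    return residual_evaluation(r) > J_th
```

In practice the threshold would be obtained by running the filter on fault-free data with disturbances only and taking the maximum REF value, as done in the simulation section.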
|
| 330 |
+
|
| 331 |
+
## IV. Simulation
|
| 332 |
+
|
| 333 |
+
This section demonstrates the effectiveness of the asynchronous FD strategy for a networked UMV under DoS attacks. The matrices $M, N$ and $R$ in system (1) are chosen as in [20]. Let ${\alpha }_{0} = {0.09},{\beta }_{0} = {0.05},{\alpha }_{1} = {0.11},{\beta }_{1} = {0.03}$ , ${\mu }_{0} = {1.4},{\mu }_{1} = {0.45},{\varepsilon }_{M} = 2,\sigma = 1$ and $\gamma = {44}$ . Then, from (11), the MDADT satisfies ${\tau }_{D} < {4.34}$ and ${\tau }_{F} > {6.60}$ . The UMV fault detection filter gains under DoS attacks can then be calculated by Theorem 2.
|
| 334 |
+
|
| 335 |
+
To demonstrate the practicability of the FD filters designed for networked UMVs under DoS attacks, the following simulations are performed. The UMV is subjected to thruster faults, external disturbances and DoS attacks. One possible sequence of DoS attacks is depicted in Fig. 1, where 1 denotes that an attack is occurring and 0 denotes the sleep state with no attack. The DoS attacks induce asynchronous switching between the filter and the primary system; the resulting switching sequence between the filter and the subsystems is shown in Fig. 2.
|
| 336 |
+
|
| 337 |
+

|
| 338 |
+
|
| 339 |
+
Fig. 1. DoS attacks sequences.
|
| 340 |
+
|
| 341 |
+

|
| 342 |
+
|
| 343 |
+
Fig. 2. Switching sequences.
|
| 344 |
+
|
| 345 |
+
The external disturbance $d\left( k\right)$ is given in the following form
|
| 346 |
+
|
| 347 |
+
$$
|
| 348 |
+
d\left( k\right) = \left\{ {\begin{array}{l} {d}_{1}\left( k\right) = {12}\sin \left( k\right) \exp \left( {-{0.15k}}\right) \\ {d}_{2}\left( k\right) = {15}\sin \left( {0.73k}\right) , k \in \left\lbrack {5,{37}}\right\rbrack \\ {d}_{3}\left( k\right) = 9\sin \left( {0.2k}\right) , k \in \left\lbrack {{11},{45}}\right\rbrack \end{array}.}\right.
|
| 349 |
+
$$
|
| 350 |
+
|
| 351 |
+
Case 1: DoS attack sequence 1 is used, and the fault signal ${f}^{1}\left( k\right)$ takes the following form
|
| 352 |
+
|
| 353 |
+
$$
|
| 354 |
+
{f}^{1}\left( k\right) = \left\{ {\begin{array}{l} {f}_{1}\left( k\right) = 2\sin \left( {0.2k}\right) \\ {f}_{2}\left( k\right) = \cos \left( {0.1k}\right) \\ {f}_{3}\left( k\right) = {0.8}\sin \left( {0.15k}\right) \end{array}, k \in \left\lbrack {{25},{35}}\right\rbrack .}\right.
|
| 355 |
+
$$
|
| 356 |
+
|
| 357 |
+
Under the DoS attack sequence and the faults ${f}^{1}\left( k\right)$ , the curves of the residual signal $\parallel r\left( k\right) {\parallel }_{2}$ and the REF signal are depicted in Fig. 3 and Fig. 4, respectively. In the absence of faults, the threshold is chosen as the maximum value of the REF signal: ${\mathcal{J}}_{th} = {0.215}$ . At $t = {25.11}\mathrm{\;s}$ , the fault signal is detected in time.
|
| 358 |
+
|
| 359 |
+

|
| 360 |
+
|
| 361 |
+
Fig. 3. The residual signal $\parallel r\left( k\right) {\parallel }_{2}$ in Case 1.
|
| 362 |
+
|
| 363 |
+

|
| 364 |
+
|
| 365 |
+
Fig. 4. The REF signal in Case 1.
|
| 366 |
+
|
| 367 |
+
Case 2: To further verify the sensitivity of the FD filter to faults, a fault with a smaller amplitude than in Case 1 but the same frequency is selected, and the same DoS attack sequence is used. The fault ${f}^{2}\left( k\right)$ takes the following form
|
| 368 |
+
|
| 369 |
+
$$
|
| 370 |
+
{f}^{2}\left( k\right) = \left\{ {\begin{array}{l} {f}_{1}\left( k\right) = {0.4}\sin \left( {0.2k}\right) \\ {f}_{2}\left( k\right) = {0.2}\cos \left( {0.1k}\right) \\ {f}_{3}\left( k\right) = {0.16}\sin \left( {0.15k}\right) \end{array}, k \in \left\lbrack {{25},{35}}\right\rbrack .}\right.
|
| 371 |
+
$$
|
| 372 |
+
|
| 373 |
+
Under the DoS attack sequence and the faults ${f}^{2}\left( k\right)$ , the curves of the residual signal $\parallel r\left( k\right) {\parallel }_{2}$ and the REF signal are depicted in Fig. 5 and Fig. 6, respectively. Fig. 6 indicates that the fault detection threshold becomes smaller than in Case 1: ${\mathcal{J}}_{th} = {0.067}$ . At $t = {25.27}\mathrm{\;s}$ , the fault signal is detected in time. Compared with Case 1, the residual amplitude and the REF signal are significantly reduced, showing that the fault amplitude has a non-negligible effect on the system.
|
| 374 |
+
|
| 375 |
+

|
| 376 |
+
|
| 377 |
+
Fig. 5. The residual signal $\parallel r\left( k\right) {\parallel }_{2}$ in Case 2.
|
| 378 |
+
|
| 379 |
+

|
| 380 |
+
|
| 381 |
+
Fig. 6. The REF signal in Case 2.
|
| 382 |
+
|
| 383 |
+
## V. CONCLUSION
|
| 384 |
+
|
| 385 |
+
To address the problem that DoS attacks cannot be detected in time, this paper designs exponentially convergent ${H}_{\infty }$ filters based on an asynchronous switched method for UMVs under DoS attacks, which resolves the issue that the filters' switching frequently lags behind that of the subsystems in practical applications. On the basis of the MDADT and the PLF, a criterion on the tolerable MDADT is derived to maintain exponential ${H}_{\infty }$ performance. Sufficient conditions for the existence of the designed FD filter are described by LMIs, and the filter gains and the related MDADT parameters can be derived by solving these LMIs. Finally, the effectiveness of the designed filter is verified by numerical simulation.
|
| 386 |
+
|
| 387 |
+
## REFERENCES
|
| 388 |
+
|
| 389 |
+
[1] L. Ma, Y.-L. Wang, and Q.-L. Han, "Event-triggered dynamic positioning for mass-switched unmanned marine vehicles in network environments," IEEE Transactions on Cybernetics, no. 5, pp. 3159-3171, MAY 2022.
|
| 390 |
+
|
| 391 |
+
[2] Q. Liu, Y. Long, T. Li, J. H. Park, and C. P. Chen, "Fault detection for unmanned marine vehicles under replay attack," IEEE Transactions on Fuzzy Systems, vol. 31, no. 5, pp. 1716-1728, MAY 2023.
|
| 392 |
+
|
| 393 |
+
[3] B. S. Park and S. J. Yoo, "Fault detection and accommodation of saturated actuators for underactuated surface vessels in the presence of nonlinear uncertainties," Nonlinear Dynamics, vol. 85, no. 2, pp. 1067- 1077, JUL 2016.
|
| 394 |
+
|
| 395 |
+
[4] Z. Duan, F. Ding, J. Liang, and Z. Xiang, "Observer-based fault detection for continuous-discrete systems in T-S fuzzy model," Nonlinear Analysis-Hybrid Systems, vol. 50, p. 101379, NOV 2023.
|
| 396 |
+
|
| 397 |
+
[5] X.-L. Wang, G.-H. Yang, and D. Zhang, "Event-triggered fault detection observer design for T-S fuzzy systems," IEEE Transactions on Fuzzy Systems, vol. 29, no. 9, pp. 2532-2542, SEP 2021.
|
| 398 |
+
|
| 399 |
+
[6] X. Yao, L. Wu, and W. X. Zheng, "Fault detection filter design for Markovian jump singular systems with intermittent measurements," IEEE Transactions on Signal Processing, vol. 59, no. 7, pp. 3099-3109, JUL 2011.
|
| 400 |
+
|
| 401 |
+
[7] Y.-L. Wang and Q.-L. Han, "Network-based fault detection filter and controller coordinated design for unmanned surface vehicles in network environments," IEEE Transactions on Industrial Informatics, vol. 12, no. 5, pp. 1753-1765, OCT 2016.
|
| 402 |
+
|
| 403 |
+
[8] X. Wang, Z. Fei, H. Gao, and J. Yu, "Integral-based event-triggered fault detection filter design for unmanned surface vehicles," IEEE Transactions on Industrial Informatics, vol. 15, no. 10, pp. 5626-5636, OCT 2019.
|
| 404 |
+
|
| 405 |
+
[9] X.-N. Yu, L.-Y. Hao, and X.-L. Wang, "Fault tolerant control for an unmanned surface vessel based on integral sliding mode state feedback control," International Journal of Control Automation and Systems, vol. 20, no. 8, pp. 2514-2522, AUG 2022.
|
| 406 |
+
|
| 407 |
+
[10] N. Wang, H. He, Y. Hou, and B. Han, "Model-free visual servo swarming of manned-unmanned surface vehicles with visibility maintenance and collision avoidance," IEEE Transactions on Intelligent Transportation Systems, SEP 2023.
|
| 408 |
+
|
| 409 |
+
[11] S. Chen, Y. Chen, C. Pan, I. Ali, J. Pan, and W. He, "Distributed adaptive platoon secure control on unmanned vehicles system for lane change under compound attacks," IEEE Transactions on Intelligent Transportation Systems, vol. 24, no. 11, pp. 12637-12647, NOV 2023.
|
| 410 |
+
|
| 411 |
+
[12] D. Ding, Z. Wang, Q.-L. Han, and G. Wei, "Security control for discrete-time stochastic nonlinear systems subject to deception attacks," IEEE Transactions on Systems Man and Cybernetics-Systems, vol. 48, no. 5, pp. 779-789, MAY 2018.
|
| 412 |
+
|
| 413 |
+
[13] L. Zhao and G.-H. Yang, "Cooperative adaptive fault-tolerant control for multi-agent systems with deception attacks," Journal of the Franklin Institute-Engineering and Applied Mathematics, vol. 357, no. 6, pp. 3419-3433, APR 2020.
|
| 414 |
+
|
| 415 |
+
[14] Y. Zhao, Z. Chen, C. Zhou, Y.-C. Tian, and Y. Qin, "Passivity-based robust control against quantified false data injection attacks in cyber-physical systems," IEEE-CAA Journal of Automatica Sinica, vol. 8, no. 8, pp. 1440-1450, AUG 2021.
|
| 416 |
+
|
| 417 |
+
[15] Z. Ye, D. Zhang, and Z.-G. Wu, "Adaptive event-based tracking control of unmanned marine vehicle systems with DoS attack," Journal of the Franklin Institute-Engineering and Applied Mathematics, vol. 358, no. 3, pp. 1915-1939, FEB 2021.
|
| 418 |
+
|
| 419 |
+
[16] X. Sun, G. Wang, Y. Fan, D. Mu, and B. Qiu, "A formation autonomous navigation system for unmanned surface vehicles with distributed control strategy," IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 5, pp. 2834-2845, MAY 2021.
|
| 420 |
+
|
| 421 |
+
[17] D. Zhang, Z. Ye, P. Chen, and Q.-G. Wang, "Intelligent event-based output feedback control with Q-learning for unmanned marine vehicle systems," Control Engineering Practice, vol. 105, p. 104616, Dec. 2020.
|
| 422 |
+
|
| 423 |
+
[18] M. Liu, J. Yu, and Y. Liu, "Dynamic event-triggered asynchronous fault detection for Markov jump systems with partially accessible hidden information and subject to aperiodic DoS attacks," Applied Mathematics and Computation, vol. 431, p. 127317, OCT 15 2022.
|
| 424 |
+
|
| 425 |
+
[19] D. Du, B. Jiang, P. Shi, and H. R. Karimi, "Fault detection for continuous-time switched systems under asynchronous switching," International Journal of Robust and Nonlinear Control, vol. 24, no. 11, pp. 1694-1706, JUL 2014.
|
| 426 |
+
|
| 427 |
+
[20] N. E. Kahveci and P. A. Ioannou, "Adaptive steering control for uncertain ship dynamics and stability analysis," Automatica, vol. 49, no. 3, pp. 685-697, Mar. 2013.
|
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/CxWEOEhqo6/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,381 @@
| 1 |
+
§ ASYNCHRONOUS THRUSTER FAULT DETECTION FOR UNMANNED MARINE VEHICLES UNDER DOS ATTACKS
|
| 2 |
+
|
| 3 |
+
Fuxing Wang
|
| 4 |
+
|
| 5 |
+
School of Automation Engineering
|
| 6 |
+
|
| 7 |
+
University of Electronic Science and Technology of China
|
| 8 |
+
|
| 9 |
+
Chengdu 611731, China
|
| 10 |
+
|
| 11 |
+
wfx614328@163.com
|
| 12 |
+
|
| 13 |
+
Yue Long
|
| 14 |
+
|
| 15 |
+
School of Automation Engineering
|
| 16 |
+
|
| 17 |
+
University of Electronic Science and Technology of China
|
| 18 |
+
|
| 19 |
+
Chengdu 611731, China
|
| 20 |
+
|
| 21 |
+
longyue@uestc.edu.cn
|
| 22 |
+
|
| 23 |
+
Tieshan Li
|
| 24 |
+
|
| 25 |
+
School of Automation Engineering
University of Electronic Science and Technology of China
Chengdu 611731, China
|
| 26 |
+
|
| 27 |
+
tieshanli@126.com
|
| 28 |
+
|
| 29 |
+
Abstract-This paper investigates a thruster fault detection strategy for unmanned marine vehicles (UMVs) subjected to external disturbances and aperiodic Denial of Service (DoS) attacks. To address the challenge of timely detection of DoS attacks, the UMV and the corresponding filters are modeled within the framework of an asynchronous switched system. Sufficient conditions ensuring the system's exponential stability and prescribed performance are derived using model-dependent average dwell time and piecewise Lyapunov functions. Additionally, the tolerable lower bound of the sleep interval and the upper bound of the attack interval for DoS attacks are established. Solvable conditions for the designed fault detection filters are obtained by leveraging decoupling techniques. Finally, simulations conducted on a UMV validate the effectiveness of the proposed methods.
|
| 30 |
+
|
| 31 |
+
Index Terms-Unmanned marine vehicles, asynchronous switched system, DoS attacks, fault detection.
|
| 32 |
+
|
| 33 |
+
§ I. INTRODUCTION
|
| 34 |
+
|
| 35 |
+
In recent years, unmanned marine vehicles (UMVs) have attracted significant attention in marine science and technology due to their wide-ranging applications in marine exploration, environmental monitoring, and resource development [1]. Nevertheless, the operational environment for UMVs is inherently complex, and their reliance on wireless communication networks for communication with shore-based centers makes them vulnerable to external disturbances, equipment malfunctions, cyber-attacks, and other disruptions [2]. The unpredictable nature of potential harm caused by these disturbances or faults, combined with the inherent vulnerabilities of cyberspace, renders UMV systems particularly susceptible to cyber-attacks. These risks can result in system failures and potentially catastrophic accidents [3]. As a result, improving the reliability and security of UMVs has emerged as a crucial area of research and development.
|
| 36 |
+
|
| 37 |
+
The unpredictable nature of potential harm caused by disturbances or faults to unmanned marine vehicles (UMVs) underscores the critical need for a real-time fault detection (FD) warning mechanism. The core of fault detection methodology involves comparing system performances to identify fault signals. Current research predominantly focuses on model-based fault detection, which has shown significant success in various systems, including continuous-discrete systems [4], T-S fuzzy systems [5], and Markovian jump systems [6]. The primary approach involves generating residual signals through filters or observers and subsequently establishing a fault warning mechanism. For UMVs, several studies have made noteworthy contributions. [7] has explored the design of controllers and FD filters based on observers for networked UMVs, [8] proposed event-triggered fault detection mechanisms for UMVs in networked environments, and [2] utilized T-S fuzzy systems to model UMV systems, particularly addressing fault detection under replay attacks. Despite these advancements, the scope of fault detection research for UMVs remains relatively narrow and lacks comprehensive coverage [9]. Consequently, further investigation into robust and holistic fault detection strategies for UMVs is imperative to enhance their reliability and operational safety [10].
|
| 38 |
+
|
| 39 |
+
On the other hand, due to the openness of cyberspace, UMV systems are particularly vulnerable to cyber-attacks. Deception attacks and Denial of Service (DoS) attacks are currently common types of attacks [11]. Deception attacks involve sending incorrect or tampered data to the system [12], including replay attacks [13] and false data injection attacks [14]. Compared to deception attacks, DoS attacks cause signal transmission to be unavailable for a period, leaving the system in an open-loop state, which makes it easier to cause severe disruption in system operations. Consequently, numerous studies on DoS attacks have emerged [15], [16].
|
| 40 |
+
|
| 41 |
+
Most existing research assumes that Denial of Service (DoS) attacks can be detected promptly, meaning that the switching of the filters corresponding to each subsystem happens simultaneously with the subsystem switching [10], [17]. However, in practical applications, detecting DoS attacks in a timely manner proves challenging, leading to delays. This delay implies that the filter often takes additional time to adjust to the appropriate control mode based on the subsystem mode, resulting in asynchronous filter/subsystem switching [18]. As a result, filters designed for synchronous switching may not provide optimal detection performance in real-world scenarios [19]. Thus, incorporating asynchronous switching into thruster fault detection for unmanned marine vehicles (UMVs) under DoS attacks is of substantial practical significance.
|
| 42 |
+
|
| 43 |
+
This work is supported in part by the National Natural Science Foundation of China under Grants 62273072, 51939001. (Corresponding author: Yue Long)
|
| 44 |
+
|
| 45 |
+
Inspired by the previous discussion, this paper investigates thruster fault detection (FD) for unmanned marine vehicles (UMVs) under Denial of Service (DoS) attacks using an asynchronous switched method to enhance reliability and security. Addressing the challenge of timely DoS attack detection, the paper proposes an asynchronous switched filter specifically designed for thruster fault detection. Furthermore, leveraging model-dependent average dwell time (MDADT) and piecewise Lyapunov functions (PLF), the paper establishes the tolerable lower bound of the sleep interval and the upper bound of the attack interval for DoS attacks. The filter parameters are determined based on linear solvability conditions. The effectiveness of the proposed method is ultimately validated through simulation.
|
| 46 |
+
|
| 47 |
+
§ II. PROBLEM FORMULATION AND MODELING
|
| 48 |
+
|
| 49 |
+
§ A. UMV MODEL
|
| 50 |
+
|
| 51 |
+
Consider the UMV and the following body-fixed equations of motion
|
| 52 |
+
|
| 53 |
+
$$
|
| 54 |
+
M\dot{\delta }\left( t\right) + {N\delta }\left( t\right) + {R\psi }\left( t\right) = {E\varphi }\left( t\right) , \tag{1}
|
| 55 |
+
$$
|
| 56 |
+
|
| 57 |
+
$$
|
| 58 |
+
\dot{\psi }\left( t\right) = J\left( {\eta \left( t\right) }\right) \delta \left( t\right) ,
|
| 59 |
+
$$
|
| 60 |
+
|
| 61 |
+
where $\delta \left( t\right) = {\left\lbrack {\delta }_{u}\left( t\right) ,{\delta }_{v}\left( t\right) ,{\delta }_{r}\left( t\right) \right\rbrack }^{T}$ with ${\delta }_{u}\left( t\right) ,{\delta }_{v}\left( t\right) ,{\delta }_{r}\left( t\right)$ representing the surge, sway and yaw velocities, respectively. $\psi \left( t\right) = {\left\lbrack {x}_{p}\left( t\right) ,{y}_{p}\left( t\right) ,\eta \left( t\right) \right\rbrack }^{T}$ , where ${x}_{p}\left( t\right)$ and ${y}_{p}\left( t\right)$ are the positions and $\eta \left( t\right)$ is the yaw angle. $\varphi \left( t\right)$ is the control input. $M,N,R$ and $E$ denote the inertia, damping, mooring-force and configuration matrices, where $M$ is a symmetric positive-definite (hence invertible) matrix satisfying $M = {M}^{T} > 0$ ,
|
| 62 |
+
|
| 63 |
+
$J\left( {\eta \left( t\right) }\right) = \left\lbrack \begin{matrix} \cos \left( {\eta \left( t\right) }\right) & - \sin \left( {\eta \left( t\right) }\right) & 0 \\ \sin \left( {\eta \left( t\right) }\right) & \cos \left( {\eta \left( t\right) }\right) & 0 \\ 0 & 0 & 1 \end{matrix}\right\rbrack .$
|
| 64 |
+
|
| 65 |
+
Then, by defining $x\left( t\right) = \delta \left( t\right) - {\delta }_{\text{ ref }},A\left( t\right) =$ $- M{\left( t\right) }^{-1}N\left( t\right) ,{B}_{1}\left( t\right) = M{\left( t\right) }^{-1}R$ and ${B}_{2}\left( t\right) = M{\left( t\right) }^{-1}E$ , and taking into account the unavoidable disturbance $\widetilde{d}\left( t\right)$ caused by wind, wave and current, the system (1) can be expressed as
|
| 66 |
+
|
| 67 |
+
$$
|
| 68 |
+
\left\{ \begin{array}{l} \dot{x}\left( t\right) = {Ax}\left( t\right) + {B}_{1}d\left( t\right) + {B}_{2}\varphi \left( t\right) , \\ y\left( t\right) = {Cx}\left( t\right) , \end{array}\right. \tag{2}
|
| 69 |
+
$$
|
| 70 |
+
|
| 71 |
+
where $d\left( t\right) = {B}_{1}{\left( t\right) }^{-1}{d}^{ * }\left( t\right) - \psi \left( t\right) + {B}_{1}{\left( t\right) }^{-1}A{\delta }_{\text{ ref }}$ and $C =$ $\left\lbrack \begin{array}{lll} 0 & 0 & 1 \end{array}\right\rbrack$ denotes the output matrix.
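The state-space matrices of (2) follow from (1) by the stated definitions $A = -M^{-1}N$, $B_1 = M^{-1}R$, $B_2 = M^{-1}E$. A minimal numpy sketch with hypothetical $M$, $N$, $R$, $E$ values (the paper takes its actual values from [20]):

```python
import numpy as np

# Hypothetical 3x3 matrices standing in for the inertia, damping,
# mooring-force and configuration matrices of (1); illustrative only.
M = np.diag([1.2, 1.5, 0.8])          # symmetric positive definite
N = np.array([[0.5, 0.1, 0.0],
              [0.1, 0.6, 0.0],
              [0.0, 0.0, 0.4]])
R = 0.05 * np.eye(3)
E = np.eye(3)

M_inv = np.linalg.inv(M)
A  = -M_inv @ N      # A  = -M^{-1} N
B1 =  M_inv @ R      # B1 =  M^{-1} R
B2 =  M_inv @ E      # B2 =  M^{-1} E
```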
|
| 72 |
+
|
| 73 |
+
Consider the thruster fault ${\varphi }^{F}\left( t\right) = {\rho \varphi }\left( t\right) + {\sigma f}\left( t\right)$ and assume the control input $\varphi \left( t\right) = {Kx}\left( t\right)$ has been designed; then (2) is represented
|
| 74 |
+
|
| 75 |
+
as
|
| 76 |
+
|
| 77 |
+
$$
|
| 78 |
+
\left\{ \begin{array}{l} \dot{x}\left( t\right) = \widehat{A}x\left( t\right) + {B}_{1}d\left( t\right) + {B}_{2}\widehat{f}\left( t\right) , \\ y\left( t\right) = {Cx}\left( t\right) , \end{array}\right. \tag{3}
|
| 79 |
+
$$
|
| 80 |
+
|
| 81 |
+
where $\widehat{A} = A + {B}_{2}K$ and $\widehat{f}\left( t\right) = - \bar{\rho }\varphi \left( t\right) + {\sigma f}\left( t\right)$ .
|
| 82 |
+
|
| 83 |
+
§ B. DOS ATTACKS MODEL
|
| 84 |
+
|
| 85 |
+
Consider the aperiodic DoS attacks as follows:
|
| 86 |
+
|
| 87 |
+
$$
|
| 88 |
+
{A}_{\text{ Dos }} = \left\{ \begin{matrix} 0, & t \in \left\lbrack {{t}_{2l},{t}_{{2l} + 1}}\right) \triangleq {\kappa }_{0,{2l}} \\ 1, & t \in \left\lbrack {{t}_{{2l} + 1},{t}_{2\left( {l + 1}\right) }}\right) \triangleq {\kappa }_{1,{2l}} \end{matrix}\right. \tag{4}
|
| 89 |
+
$$
|
| 90 |
+
|
| 91 |
+
where $t \in \left\lbrack {{t}_{2l},{t}_{{2l} + 1}}\right) \triangleq {\kappa }_{0,{2l}}\;\left( {l \in \mathrm{N},{t}_{2l} \geq 0}\right)$ indicates the ${l}^{th}$ sleep interval with the length ${s}_{l} = {t}_{{2l} + 1} - {t}_{2l}$ , and $t \in \left\lbrack {{t}_{{2l} + 1},{t}_{2\left( {l + 1}\right) }}\right) \triangleq {\kappa }_{1,{2l}}$ indicates the ${l}^{th}$ DoS attacks interval with the length ${d}_{l} = {t}_{2\left( {l + 1}\right) } - {t}_{{2l} + 1}$ .
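Under this interval structure, the attack indicator (4) can be evaluated as a lookup over the switching instants: even-indexed intervals are sleep, odd-indexed intervals are attack. A minimal sketch (the switch times below are hypothetical):

```python
import bisect

def dos_signal(switch_times, k):
    """A_DoS per (4): [t_{2l}, t_{2l+1}) is the l-th sleep interval (0),
    [t_{2l+1}, t_{2(l+1)}) is the l-th attack interval (1).
    switch_times = [t_0, t_1, t_2, ...] is strictly increasing."""
    i = bisect.bisect_right(switch_times, k) - 1
    if i < 0:
        return 0            # before the first sleep interval: no attack
    return i % 2            # even index -> sleep (0), odd index -> attack (1)

# Hypothetical switching instants: sleep [0,5), attack [5,8), sleep [8,12), ...
times = [0, 5, 8, 12]
```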
|
| 92 |
+
|
| 93 |
+
Due to the communication disruption caused by DoS attacks, the UMV system (3) can be augmented into the following switched system, which has been discretized. The sleeping interval can be expressed as $k \in \left\lbrack {{k}_{2l},{k}_{{2l} + 1}}\right)$ , and the DoS attacks interval can be expressed as $k \in \left\lbrack {{k}_{{2l} + 1},{k}_{2\left( {l + 1}\right) }}\right)$ .
|
| 94 |
+
|
| 95 |
+
$$
|
| 96 |
+
\left\{ \begin{array}{l} x\left( {k + 1}\right) = {A}_{id}x\left( k\right) + {B}_{1id}d\left( k\right) + {B}_{2id}\widehat{f}\left( k\right) \\ y\left( k\right) = {C}_{d}x\left( k\right) \end{array}\right. \tag{5}
|
| 97 |
+
$$
|
| 98 |
+
|
| 99 |
+
§ C. ASYNCHRONOUS SWITCHING FILTER
|
| 100 |
+
|
| 101 |
+
In the case of the DoS attacks and thruster faults, the residual signal produced by the switched filter is as follows:
|
| 102 |
+
|
| 103 |
+
$$
|
| 104 |
+
\left\{ {\begin{array}{l} {x}_{f}\left( {k + 1}\right) = {A}_{fi}{x}_{f}\left( k\right) + {B}_{fi}y\left( k\right) \\ r\left( k\right) = {C}_{fi}{x}_{f}\left( k\right) + {D}_{fi}y\left( k\right) \end{array}\left( {i = 0,1}\right) }\right. \tag{6}
|
| 105 |
+
$$
|
| 106 |
+
|
| 107 |
+
where ${x}_{f}\left( k\right)$ is the state of the filters and $r\left( k\right)$ is the residual signal of the switched system (5). Defining $\widetilde{x}\left( k\right) = {\left\lbrack \begin{array}{ll} {x}^{T}\left( k\right) & {x}_{f}^{T}\left( k\right) \end{array}\right\rbrack }^{T}$ , $\varpi \left( k\right) = {\left\lbrack \begin{array}{ll} {d}^{T}\left( k\right) & {f}^{T}\left( k\right) \end{array}\right\rbrack }^{T}$ and the residual evaluation signal $e\left( k\right) = r\left( k\right) - \widehat{f}\left( k\right)$ , (6) is rewritten as (7)
|
| 108 |
+
|
| 109 |
+
$$
|
| 110 |
+
{\Phi }_{0} : \left\{ {\begin{array}{l} \widetilde{x}\left( {k + 1}\right) = {\widetilde{A}}_{i}\widetilde{x}\left( k\right) + {\widetilde{B}}_{i}\varpi \left( k\right) \\ e\left( k\right) = {\widetilde{C}}_{i}\widetilde{x}\left( k\right) + {\widetilde{D}}_{i}\varpi \left( k\right) \end{array},k \in \left\lbrack {{k}_{l} + {\varepsilon }_{l},{k}_{l + 1}}\right) }\right.
|
| 111 |
+
$$
|
| 112 |
+
|
| 113 |
+
$$
|
| 114 |
+
{\Phi }_{1} : \left\{ {\begin{array}{l} \widetilde{x}\left( {k + 1}\right) = {\widetilde{A}}_{ij}\widetilde{x}\left( k\right) + {\widetilde{B}}_{ij}\varpi \left( k\right) \\ e\left( k\right) = {\widetilde{C}}_{ij}\widetilde{x}\left( k\right) + {\widetilde{D}}_{ij}\varpi \left( k\right) \end{array},k \in \left\lbrack {{k}_{l},{k}_{l} + {\varepsilon }_{l}}\right) }\right.
|
| 115 |
+
$$
|
| 116 |
+
|
| 117 |
+
where $i \neq j,i \in \{ 0,1\} ,j \in \{ 0,1\} ,{\widetilde{A}}_{ij} = \left\lbrack \begin{matrix} {A}_{id} & 0 \\ {B}_{fj}{C}_{d} & {A}_{fj} \end{matrix}\right\rbrack$ , ${\widetilde{B}}_{ij} = \left\lbrack \begin{matrix} {B}_{1i} & {B}_{2i} \\ 0 & 0 \end{matrix}\right\rbrack ,{\widetilde{C}}_{ij} = \left\lbrack \begin{array}{ll} {D}_{fj}{C}_{d} & {C}_{fj} \end{array}\right\rbrack$ and ${\widetilde{D}}_{ij} =$ $\left\lbrack \begin{array}{ll} 0 & - \bar{I} \end{array}\right\rbrack$ .
|
| 118 |
+
|
| 119 |
+
To better set the stage for the next section, the following definitions are presented.
|
| 120 |
+
|
| 121 |
+
Definition 1: For any switching signal $\tau \left( k\right)$ and $0 < {k}_{0} \leq k$ , let ${\mathcal{M}}_{\tau ,l}\left( {{k}_{0},k}\right)$ denote the number of times the ${l}^{th}$ subsystem is activated over $\left\lbrack {{k}_{0},k}\right)$ . If
$$
{\mathcal{M}}_{\tau ,l}\left( {{k}_{0},k}\right) \leq {N}_{{\mathcal{M}}_{0,l}} + \frac{{N}_{l}\left( {{k}_{0},k}\right) }{{\lambda }_{l}}
$$
holds for a scalar ${\lambda }_{l} > 0$ and an integer ${N}_{{\mathcal{M}}_{0,l}} \geq 0$, then ${\lambda }_{l}$ is called the mode-dependent average dwell time (MDADT), and ${N}_{l}\left( {{k}_{0},k}\right)$ is the total running time of the $l$-th subsystem over $\left\lbrack {{k}_{0},k}\right)$.
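As a quick illustration, the MDADT condition of Definition 1 is a simple inequality that can be checked numerically. The following minimal Python sketch (the function name and the sample numbers are illustrative, not from the paper) tests whether a recorded switching pattern respects a given MDADT:

```python
def satisfies_mdadt(num_activations, running_time, n0, dwell_time):
    """Check Definition 1: M_{tau,l}(k0, k) <= N_0 + N_l(k0, k) / lambda_l,
    where num_activations is M, running_time is N_l, n0 is the chatter
    bound N_0, and dwell_time is the MDADT lambda_l."""
    return num_activations <= n0 + running_time / dwell_time

# A mode that ran for 40 steps and was switched into 8 times, with chatter
# bound N_0 = 2, satisfies an MDADT of 5 (8 <= 2 + 40/5) but not of 10.
print(satisfies_mdadt(8, 40, 2, 5), satisfies_mdadt(8, 40, 2, 10))  # True False
```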
Definition 2: Consider the asynchronous switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$, and given scalars $\alpha$, $\beta$ and $\gamma$ satisfying $0 < \alpha < 1$, $\beta > 0$ and $\gamma > 0$. Under zero initial conditions, if the asynchronous switched system is exponentially stable and satisfies $\mathop{\sum }\limits_{{s = {k}_{0}}}^{\infty }{\left( 1 - \alpha \right) }^{s}{e}^{\mathrm{T}}\left( s\right) e\left( s\right) \leq {\gamma }^{2}\mathop{\sum }\limits_{{s = {k}_{0}}}^{\infty }{\varpi }^{\mathrm{T}}\left( s\right) \varpi \left( s\right)$, the system is said to be exponentially stable with exponential ${H}_{\infty }$ performance index $\gamma$.
§ III. MAIN RESULTS
In this section, the stability and ${H}_{\infty }$ performance of the asynchronous switched system (7) are analyzed, and sufficient, linearly solvable conditions for the designed switched FD filters are given.
Theorem 1: Consider the switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ under DoS attacks, with scalars ${\alpha }_{i},{\beta }_{i},\gamma ,{\mu }_{0}$ and ${\mu }_{1}$ satisfying $0 < {\alpha }_{i} < 1$, ${\beta }_{i} > 0$, $\gamma > 0$, ${\mu }_{0} > 1$ and $0 < {\mu }_{1} < 1$. If there exist symmetric positive-definite matrices ${\mathcal{P}}_{i}$ satisfying the following conditions
$$
{\widetilde{A}}_{i}^{T}{\mathcal{P}}_{i}{\widetilde{A}}_{i} - {\mathcal{P}}_{i} + {\alpha }_{i}{\mathcal{P}}_{i} < 0, \tag{8}
$$
$$
{\widetilde{A}}_{ij}^{T}{\mathcal{P}}_{i}{\widetilde{A}}_{ij} - {\mathcal{P}}_{i} - {\beta }_{i}{\mathcal{P}}_{i} < 0, \tag{9}
$$
$$
{\mathcal{P}}_{i} \leq {\mu }_{i}{\mathcal{P}}_{j} \tag{10}
$$
$$
{\tau }_{D} < \frac{{\varepsilon }_{M}\ln {\phi }_{1} + \ln {\mu }_{1}}{\ln {\widetilde{\alpha }}_{1}},{\tau }_{F} > - \frac{{\varepsilon }_{M}\ln {\phi }_{0} + \ln {\mu }_{0}}{\ln {\widetilde{\alpha }}_{0}}, \tag{11}
$$
then the switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ are exponentially asymptotically stable with exponential ${H}_{\infty }$ performance, where $i \neq j$, ${\widetilde{\alpha }}_{i} = 1 - {\alpha }_{i}$, ${\widetilde{\beta }}_{i} = 1 + {\beta }_{i}$, ${\phi }_{i} = \frac{{\widetilde{\beta }}_{i}}{{\widetilde{\alpha }}_{i}}$, and ${\varepsilon }_{M}$ denotes the maximum time by which the filter lags behind the subsystem.
Proof: The piecewise Lyapunov function for the closed-loop switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ is given as follows
$$
{\mathcal{V}}_{i}\left( {\widetilde{x}\left( k\right) }\right) = {\widetilde{x}}^{T}\left( k\right) {\mathcal{P}}_{i}\widetilde{x}\left( k\right) . \tag{12}
$$
When $\varpi \left( k\right) = 0$ and $k \in \left\lbrack {{k}_{2l},{k}_{{2l} + 1}}\right)$, we obtain
$$
\mathcal{V}\left( {\widetilde{x}\left( k\right) }\right) \leq \left\{ \begin{array}{l} {\widetilde{\alpha }}_{i}^{k - {k}_{2l} - {\varepsilon }_{2l}}{\mathcal{V}}_{i}\left( {\widetilde{x}\left( {{k}_{2l} + {\varepsilon }_{2l}}\right) }\right) ,k \in {\Gamma }^{ + } \\ {\widetilde{\beta }}_{i}^{k - {k}_{2l}}{\mathcal{V}}_{i}\left( {\widetilde{x}\left( {k}_{2l}\right) }\right) ,k \in {\Gamma }^{ - } \end{array}\right. \tag{13}
$$
where ${\widetilde{\alpha }}_{i} = 1 - {\alpha }_{i}$ and ${\widetilde{\beta }}_{i} = 1 + {\beta }_{i}$. When $k \in {\mathcal{T}}^{ + }\left( {{k}_{2l},{k}_{{2l} + 1}}\right)$, from (8) and (11) it can be derived that
$$
\begin{aligned}
\mathcal{V}\left( {\widetilde{x}\left( k\right) }\right) & \leq {\widetilde{\alpha }}_{0}^{k - {k}_{2l} - {\varepsilon }_{2l}}{\mathcal{V}}_{0}\left( {\widetilde{x}\left( {{k}_{2l} + {\varepsilon }_{2l}}\right) }\right) \\
& \leq {\widetilde{\alpha }}_{0}^{k - {k}_{2l} - {\varepsilon }_{2l}} \cdot {\widetilde{\beta }}_{0}^{{\varepsilon }_{2l}} \cdot {\mathcal{V}}_{0}\left( {\widetilde{x}\left( {k}_{2l}\right) }\right) \\
& < \cdots \\
& \leq \theta \exp \Big\{ {\max \Big( {\frac{{\varepsilon }_{M}\ln {\phi }_{0} + \ln {\mu }_{0}}{{\tau }_{F}} + {v}_{0}, - \frac{{\varepsilon }_{M}\ln {\phi }_{1} + \ln {\mu }_{1}}{{\tau }_{D}} + {v}_{1}}\Big) } \\
& \quad \cdot \left( {{\Xi }_{F}\left( {{k}_{0},k}\right) + {\Xi }_{D}\left( {{k}_{0},k}\right) }\right) \Big\} \,\mathcal{V}\left( {\widetilde{x}\left( {k}_{0}\right) }\right)
\end{aligned} \tag{14}
$$
where $\theta = \exp \left\lbrack {\left( {{\varepsilon }_{M}\ln {\phi }_{0} + \ln {\mu }_{0}}\right) {\xi }_{F} - \left( {{\varepsilon }_{M}\ln {\phi }_{1} + \ln {\mu }_{1}}\right) {\xi }_{D}}\right\rbrack$ , $\omega = \max \left\{ {-\frac{{\varepsilon }_{M}\ln {\phi }_{0} + \ln {\mu }_{0}}{{\tau }_{F}} - \ln {\widetilde{\alpha }}_{0},\frac{{\varepsilon }_{M}\ln {\phi }_{1} + \ln {\mu }_{1}}{{\tau }_{D}} - \ln {\widetilde{\alpha }}_{1}}\right\} ,$ ${\chi }_{0} = {\theta }_{0}^{{\varepsilon }_{M}}{\mu }_{0},{\chi }_{1} = {\theta }_{1}^{{\varepsilon }_{M}}{\mu }_{1},{v}_{i} = \ln {\widetilde{\alpha }}_{i}.$
From (11), we have $\omega > 0$. Then, it is clear that $\mathcal{V}\left( {\widetilde{x}\left( k\right) }\right)$ converges to zero as $k \rightarrow \infty$. Therefore, the closed-loop switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ are exponentially asymptotically stable when (8) and (11) hold.
Next, if $\varpi \left( k\right) \neq 0$ for $k \in \left\lbrack {{k}_{2l},{k}_{{2l} + 1}}\right)$ under zero initial conditions, the following inequality is derived
$$
\Delta {\mathcal{V}}_{i}\left( {\widetilde{x}\left( k\right) }\right) < \left\{ \begin{array}{l} - {\alpha }_{i}{\mathcal{V}}_{i}\left( {\widetilde{x}\left( k\right) }\right) - \Upsilon \left( k\right) ,k \in {\Gamma }^{ + } \\ {\beta }_{i}{\mathcal{V}}_{i}\left( {\widetilde{x}\left( k\right) }\right) - \Upsilon \left( k\right) ,k \in {\Gamma }^{ - } \end{array}\right. \tag{15}
$$
where $i = 0,1$ and $\Upsilon \left( k\right) = {e}^{T}\left( k\right) e\left( k\right) - {\gamma }^{2}{\varpi }^{T}\left( k\right) \varpi \left( k\right)$. When $k \in {\mathcal{T}}^{ + }\left( {{k}_{2l},{k}_{{2l} + 1}}\right)$, the following inequality can be obtained in a similar way from (10) and (15)
$$
\begin{aligned}
\mathcal{V}\left( {\widetilde{x}\left( k\right) }\right) \leq{} & {\widetilde{\alpha }}_{0}^{k - {k}_{2l}}{\widetilde{\alpha }}_{0}^{{k}_{{2l} - 1} - {k}_{{2l} - 2}}\cdots {\widetilde{\alpha }}_{0}^{{k}_{1} - {k}_{0}}{\phi }_{0}^{{\varepsilon }_{2l}}{\phi }_{0}^{{\varepsilon }_{{2l} - 2}}\cdots {\phi }_{0}^{{\varepsilon }_{0}} \\
& \cdot {\mu }_{0}^{{\mathrm{M}}_{F}\left( {{k}_{0},k}\right) }{\widetilde{\alpha }}_{1}^{{k}_{2l} - {k}_{{2l} - 1}}\cdots {\widetilde{\alpha }}_{1}^{{k}_{2} - {k}_{1}}{\phi }_{1}^{{\varepsilon }_{{2l} - 1}}\cdots {\phi }_{1}^{{\varepsilon }_{1}}{\mu }_{1}^{{\mathrm{M}}_{D}\left( {{k}_{0},k}\right) }\mathcal{V}\left( {\widetilde{x}\left( {k}_{0}\right) }\right) \\
& - {\widetilde{\alpha }}_{0}^{k - {k}_{2l}}{\widetilde{\alpha }}_{0}^{{k}_{{2l} - 1} - {k}_{{2l} - 2}}\cdots {\widetilde{\alpha }}_{0}^{{k}_{1} - {k}_{0}}{\phi }_{0}^{{\varepsilon }_{2l}}{\phi }_{0}^{{\varepsilon }_{{2l} - 2}}\cdots {\phi }_{0}^{{\varepsilon }_{0}}{\mu }_{0}^{{\mathrm{M}}_{F}\left( {{k}_{0},k}\right) } \\
& \cdot {\widetilde{\alpha }}_{1}^{{k}_{2l} - {k}_{{2l} - 1}}\cdots {\widetilde{\alpha }}_{1}^{{k}_{2} - {k}_{1}}{\phi }_{1}^{{\varepsilon }_{{2l} - 1}}\cdots {\phi }_{1}^{{\varepsilon }_{1}}{\mu }_{1}^{{\mathrm{M}}_{D}\left( {{k}_{0},k}\right) }\mathop{\sum }\limits_{{s = {k}_{0} + {\Delta }_{0}}}^{{{k}_{1} - 1}}{\widetilde{\alpha }}_{0}^{{k}_{1} - s - 1}\Upsilon \left( s\right) \\
& - {\widetilde{\alpha }}_{0}^{k - {k}_{2l}}{\widetilde{\alpha }}_{0}^{{k}_{{2l} - 1} - {k}_{{2l} - 2}}\cdots {\widetilde{\alpha }}_{0}^{{k}_{1} - {k}_{0}}{\phi }_{0}^{{\varepsilon }_{2l}}{\phi }_{0}^{{\varepsilon }_{{2l} - 2}}\cdots {\phi }_{0}^{{\varepsilon }_{0}}{\mu }_{0}^{{\mathrm{M}}_{F}\left( {{k}_{0},k}\right) } \\
& \cdot {\widetilde{\alpha }}_{1}^{{k}_{2l} - {k}_{{2l} - 1}}\cdots {\widetilde{\alpha }}_{1}^{{k}_{2} - {k}_{1}}{\phi }_{1}^{{\varepsilon }_{{2l} - 1}}\cdots {\phi }_{1}^{{\varepsilon }_{1}}{\mu }_{1}^{{\mathrm{M}}_{D}\left( {{k}_{0},k}\right) }\mathop{\sum }\limits_{{s = {k}_{0}}}^{{{\hslash }_{0} - 1}}\left( {{\widetilde{\alpha }}^{{k}_{1} - {\hslash }_{0}}{\phi }_{0}^{{\hslash }_{0} - s - 1}\Upsilon \left( s\right) }\right) \\
& - \mathop{\sum }\limits_{{s = {\hslash }_{2l}}}^{{k - 1}}{\widetilde{\alpha }}_{0}^{k - s - 1}\Upsilon \left( s\right) - \mathop{\sum }\limits_{{s = {k}_{2l}}}^{{{\hslash }_{2l} - 1}}{\widetilde{\alpha }}_{0}^{k - s - 1}{\phi }_{0}^{{\hslash }_{2l} - s - 1}\Upsilon \left( s\right)
\end{aligned} \tag{16}
$$
Since ${\varepsilon }_{M} = \max \left\{ {\varepsilon }_{i}\right\}$ and $1 < {\phi }_{0}^{{k}_{2l} + {\varepsilon }_{2l} - s - 1} < {\phi }_{0}^{{\varepsilon }_{M} - 1}$, under zero initial conditions $\mathcal{V}\left( {\widetilde{x}\left( {k}_{0}\right) }\right) = 0$ and $\mathcal{V}\left( {\widetilde{x}\left( k\right) }\right) \geq 0$, and according to Definition 1, we obtain
$$
\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\widetilde{\alpha }}_{0}^{k - s - 1}{\widetilde{\alpha }}_{0}^{{\Xi }_{F}\left( {{k}_{0},s}\right) }{\widetilde{\alpha }}_{1}^{{\Xi }_{D}\left( {{k}_{0},s}\right) }{e}^{T}\left( s\right) e\left( s\right) \leq {\chi }_{0}^{{\xi }_{F}}{\chi }_{1}^{{\xi }_{D}}{\gamma }^{2}\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\widetilde{\alpha }}_{0}^{k - s - 1}{\theta }_{0}^{{\varepsilon }_{M} - 1}{\varpi }^{T}\left( s\right) \varpi \left( s\right) . \tag{17}
$$
Summing (17) over $k \in \lbrack {k}_{0},\infty )$ gives
$$
\mathop{\sum }\limits_{{k = {k}_{0}}}^{\infty }\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\widetilde{\alpha }}_{0}^{k - s - 1}{\widetilde{\alpha }}^{s - {k}_{0}}{e}^{T}\left( s\right) e\left( s\right) \leq {\chi }_{0}^{{\xi }_{F}}{\chi }_{1}^{{\xi }_{D}}{\gamma }^{2}\mathop{\sum }\limits_{{k = {k}_{0}}}^{\infty }\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\widetilde{\alpha }}_{0}^{k - s - 1}{\theta }_{0}^{{\varepsilon }_{M} - 1}{\varpi }^{T}\left( s\right) \varpi \left( s\right) \tag{18}
$$
which is equivalent to
$$
\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\widetilde{\alpha }}^{s - {k}_{0}}{e}^{T}\left( s\right) e\left( s\right) \leq {\chi }_{0}^{{\xi }_{F}}{\chi }_{1}^{{\xi }_{D}}{\theta }_{0}^{{\varepsilon }_{M} - 1}{\gamma }^{2}\mathop{\sum }\limits_{{s = {k}_{0}}}^{{k - 1}}{\varpi }^{T}\left( s\right) \varpi \left( s\right) . \tag{19}
$$
Thus, the closed-loop switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ are exponentially asymptotically stable and satisfy the exponential ${H}_{\infty }$ performance index ${\gamma }_{s} = \max \left\{ {\sqrt{{\left( {\theta }_{0}^{{\varepsilon }_{M}}{\mu }_{0}\right) }^{{\xi }_{F}}{\left( {\theta }_{1}^{{\varepsilon }_{M}}{\mu }_{1}\right) }^{{\xi }_{D}}{\theta }_{0}^{{\varepsilon }_{M} - 1}} \cdot \gamma }\right\}$, which completes the proof.
Due to the numerous unknown matrix couplings, it is typically difficult to obtain the filter gains directly from Theorem 1. Therefore, linearly solvable conditions for the designed filters are proposed in Theorem 2.
Theorem 2: Consider the switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ under DoS attacks with ${\tau }_{F}$ and ${\tau }_{D}$, and scalars ${\alpha }_{i},{\beta }_{i},\gamma ,{\mu }_{0}$ and ${\mu }_{1}$ satisfying $0 < {\alpha }_{i} < 1$, ${\beta }_{i} > 0$, $\gamma > 0$, ${\mu }_{0} > 1$ and $0 < {\mu }_{1} < 1$. If there exist symmetric positive-definite matrices ${\mathcal{P}}_{i1},{\mathcal{P}}_{i3}$ and matrices ${\mathcal{P}}_{i2},{\mathcal{G}}_{i},{\mathcal{Q}}_{i},{\mathcal{R}}_{i},{\mathcal{A}}_{Fi},{\mathcal{B}}_{Fi},{\mathcal{C}}_{Fi},{\mathcal{D}}_{Fi}$ $\left( i,j \in \{ 0,1\} ,i \neq j\right)$ satisfying the following conditions
$$
\left\lbrack \begin{matrix} {\Pi }_{i}^{11} & {\Pi }_{i}^{12} & 0 & {\Pi }_{i}^{14} & {\mathcal{A}}_{Fi} & {\Pi }_{i}^{16} & {\Pi }_{i}^{17} \\ * & {\Pi }_{i}^{22} & 0 & {\Pi }_{i}^{24} & {\mathcal{A}}_{Fi} & {\Pi }_{i}^{26} & {\Pi }_{i}^{27} \\ * & * & - I & {\Pi }_{i}^{34} & {\mathcal{C}}_{Fi} & 0 & - I \\ * & * & * & - {\widetilde{\alpha }}_{i}{\mathcal{P}}_{i1} & - {\widetilde{\alpha }}_{i}{\mathcal{P}}_{i2} & 0 & 0 \\ * & * & * & * & - {\widetilde{\alpha }}_{i}{\mathcal{P}}_{i3} & 0 & 0 \\ * & * & * & * & * & - {\gamma }^{2}I & 0 \\ * & * & * & * & * & * & - {\gamma }^{2}I \end{matrix}\right\rbrack < 0, \tag{20}
$$
$$
\left\lbrack \begin{matrix} {\Pi }_{ij}^{11} & {\Pi }_{ij}^{12} & 0 & {\Pi }_{ij}^{14} & {\mathcal{A}}_{Fj} & {\Pi }_{ij}^{16} & {\Pi }_{ij}^{17} \\ * & {\Pi }_{i}^{22} & 0 & {\Pi }_{ij}^{24} & {\mathcal{A}}_{Fj} & {\Pi }_{ij}^{26} & {\Pi }_{ij}^{27} \\ * & * & - I & {\Pi }_{ij}^{34} & {\mathcal{C}}_{Fj} & 0 & - I \\ * & * & * & - {\widetilde{\beta }}_{i}{\mathcal{P}}_{i1} & - {\widetilde{\beta }}_{i}{\mathcal{P}}_{i2} & 0 & 0 \\ * & * & * & * & - {\widetilde{\beta }}_{i}{\mathcal{P}}_{i3} & 0 & 0 \\ * & * & * & * & * & - {\gamma }^{2}I & 0 \\ * & * & * & * & * & * & - {\gamma }^{2}I \end{matrix}\right\rbrack < 0 \tag{21}
$$
$$
\left\lbrack \begin{matrix} {\Omega }^{11} & {\Omega }^{12} & {\mathcal{G}}_{i}^{T} & {\mathcal{R}}_{i} \\ * & {\Omega }^{22} & {\mathcal{Q}}_{i}^{T} & {\mathcal{R}}_{i} \\ * & * & - {\mu }_{i}{\mathcal{P}}_{j1} & - {\mu }_{i}{\mathcal{P}}_{j2} \\ * & * & * & - {\mu }_{i}{\mathcal{P}}_{j3} \end{matrix}\right\rbrack \leq 0 \tag{22}
$$
$$
{\tau }_{D} < \frac{{\varepsilon }_{M}\ln {\phi }_{1} + \ln {\mu }_{1}}{\ln {\widetilde{\alpha }}_{1}},{\tau }_{F} > - \frac{{\varepsilon }_{M}\ln {\phi }_{0} + \ln {\mu }_{0}}{\ln {\widetilde{\alpha }}_{0}}, \tag{23}
$$
the closed-loop switched subsystems ${\Phi }_{0}$ and ${\Phi }_{1}$ are exponentially asymptotically stable and satisfy the exponential ${H}_{\infty }$ performance index ${\gamma }_{s} = \max \left\{ {\sqrt{{\left( {\theta }_{0}^{{\varepsilon }_{M}}{\mu }_{0}\right) }^{{\xi }_{F}}{\left( {\theta }_{1}^{{\varepsilon }_{M}}{\mu }_{1}\right) }^{{\xi }_{D}}{\theta }_{0}^{{\varepsilon }_{M} - 1}} \cdot \gamma }\right\}$, where $\widetilde{\alpha } = 1 - \alpha$, $\widetilde{\beta } = 1 + \beta$, ${\Pi }_{i}^{11} = {\mathcal{P}}_{i1} - {\mathcal{G}}_{i} - {\mathcal{G}}_{i}^{T}$, ${\Pi }_{i}^{12} = {\mathcal{P}}_{i2} - {\mathcal{Q}}_{i} - {\mathcal{R}}_{i}$, ${\Pi }_{i}^{14} = {\mathcal{G}}_{i}^{T}{A}_{id} + {\mathcal{B}}_{Fi}{C}_{d}$, ${\Pi }_{i}^{16} = {\mathcal{G}}_{i}^{T}{B}_{1i}$, ${\Pi }_{i}^{17} = {\mathcal{G}}_{i}^{T}{B}_{2i}$, ${\Pi }_{i}^{22} = {\mathcal{P}}_{i3} - {\mathcal{R}}_{i} - {\mathcal{R}}_{i}^{T}$, ${\Pi }_{i}^{24} = {\mathcal{Q}}_{i}^{T}{A}_{id} + {\mathcal{B}}_{Fi}{C}_{d}$, ${\Pi }_{i}^{26} = {\mathcal{Q}}_{i}^{T}{B}_{1i}$, ${\Pi }_{i}^{27} = {\mathcal{Q}}_{i}^{T}{B}_{2i}$, ${\Pi }_{i}^{34} = {\mathcal{D}}_{Fi}{C}_{d}$, ${\Pi }_{ij}^{11} = {\mathcal{P}}_{i1} - {\mathcal{G}}_{j} - {\mathcal{G}}_{j}^{T}$, ${\Pi }_{ij}^{12} = {\mathcal{P}}_{i2} - {\mathcal{Q}}_{j} - {\mathcal{R}}_{j}$, ${\Pi }_{ij}^{14} = {\mathcal{G}}_{j}^{T}{A}_{id} + {\mathcal{B}}_{Fj}{C}_{d}$, ${\Pi }_{ij}^{16} = {\mathcal{G}}_{j}^{T}{B}_{1i}$, ${\Pi }_{ij}^{17} = {\mathcal{G}}_{j}^{T}{B}_{2i}$, ${\Pi }_{ij}^{22} = {\mathcal{P}}_{i3} - {\mathcal{R}}_{j} - {\mathcal{R}}_{j}^{T}$, ${\Pi }_{ij}^{24} = {\mathcal{Q}}_{j}^{T}{A}_{id} + {\mathcal{B}}_{Fj}{C}_{d}$, ${\Pi }_{ij}^{26} = {\mathcal{Q}}_{j}^{T}{B}_{1i}$, ${\Pi }_{ij}^{27} = {\mathcal{Q}}_{j}^{T}{B}_{2i}$, ${\Pi }_{ij}^{34} = {\mathcal{D}}_{Fj}{C}_{d}$, ${\Omega }^{11} = {\mathcal{P}}_{i1} - {\mu }_{i}\left( {{\mathcal{G}}_{i} + {\mathcal{G}}_{i}^{T}}\right)$, ${\Omega }^{12} = {\mathcal{P}}_{i2} - {\mu }_{i}{\mathcal{Q}}_{i} - {\mu }_{i}{\mathcal{R}}_{i}^{T}$, ${\Omega }^{22} = {\mathcal{P}}_{i3} - {\mu }_{i}\left( {{\mathcal{R}}_{i} + {\mathcal{R}}_{i}^{T}}\right)$.
In addition, if (20)-(23) have a solution, the filter gains can be obtained as
$$
\left\lbrack \begin{matrix} {\mathcal{A}}_{fi} & {\mathcal{B}}_{fi} \\ {\mathcal{C}}_{fi} & {\mathcal{D}}_{fi} \end{matrix}\right\rbrack = \left\lbrack \begin{matrix} {\mathcal{R}}_{i}{}^{-1} & 0 \\ 0 & I \end{matrix}\right\rbrack \left\lbrack \begin{matrix} {\mathcal{A}}_{Fi} & {\mathcal{B}}_{Fi} \\ {\mathcal{C}}_{Fi} & {\mathcal{D}}_{Fi} \end{matrix}\right\rbrack . \tag{24}
$$
Proof: Based on the Projection Lemma and the Schur complement, pre- and post-multiplying (8) by appropriate matrices shows that (8) and (20) are equivalent. Similarly, pre- and post-multiplying (9) shows that (9) and (21) are equivalent. This completes the proof of Theorem 2.
For the purpose of fault detection, the residual is obtained from the difference between the measured value and its estimate. The following residual evaluation function (REF) is designed
$$
{\mathcal{J}}_{r}\left( k\right) = \sqrt{\frac{1}{k}\mathop{\sum }\limits_{{s = 1}}^{k}{r}^{T}\left( s\right) r\left( s\right) }. \tag{25}
$$
The threshold for (25) is selected as
$$
{\mathcal{J}}_{th} = \mathop{\sup }\limits_{\substack{{d\left( k\right) \in {l}_{2}} \\ {f\left( k\right) = 0} }}{\mathcal{J}}_{r}\left( k\right) . \tag{26}
$$
Therefore, the fault detection logic is
$$
\left\{ \begin{matrix} \begin{Vmatrix}{{\mathcal{J}}_{r}\left( k\right) }\end{Vmatrix} > {\mathcal{J}}_{th} & \text{ Alarm } \\ \begin{Vmatrix}{{\mathcal{J}}_{r}\left( k\right) }\end{Vmatrix} \leq {\mathcal{J}}_{th} & \text{ No-alarm. } \end{matrix}\right. \tag{27}
$$
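The residual evaluation (25) and detection logic (27) amount to an RMS test against a fixed threshold. A minimal Python sketch follows; the function names and the single synthetic residual sample are illustrative, not from the paper's simulation:

```python
import math

def residual_evaluation(residuals):
    """REF (25): J_r(k) = sqrt((1/k) * sum_{s=1}^{k} r(s)^T r(s)),
    where residuals is a list of residual vectors r(1), ..., r(k)."""
    k = len(residuals)
    return math.sqrt(sum(sum(x * x for x in r) for r in residuals) / k)

def fault_alarm(residuals, j_th):
    """Detection logic (27): alarm iff J_r(k) exceeds the threshold J_th."""
    return residual_evaluation(residuals) > j_th

# One residual sample r = [3, 4] gives J_r = 5, so a threshold of 4
# raises an alarm while a threshold of 6 does not.
print(residual_evaluation([[3.0, 4.0]]))  # 5.0
```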
§ IV. SIMULATION
This section demonstrates the effectiveness of the asynchronous FD strategy for a networked UMV under DoS attacks. The matrices $M,N$ and $R$ in system (1) are chosen as in [20]. Let ${\alpha }_{0} = {0.09}$, ${\beta }_{0} = {0.05}$, ${\alpha }_{1} = {0.11}$, ${\beta }_{1} = {0.03}$, ${\mu }_{0} = {1.4}$, ${\mu }_{1} = {0.45}$, ${\varepsilon }_{M} = 2$, $\sigma = 1$ and $\gamma = {44}$. Then, from (11), the MDADT satisfies ${\tau }_{D} < {4.34}$ and ${\tau }_{F} > {6.60}$. The UMV fault detection filter gains under DoS attacks can be calculated by Theorem 2.
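The MDADT bounds quoted above follow directly from (11) with the chosen constants; a short Python check (variable names are ours) reproduces them:

```python
import math

# Simulation constants from the text
alpha0, beta0 = 0.09, 0.05
alpha1, beta1 = 0.11, 0.03
mu0, mu1 = 1.4, 0.45
eps_M = 2

a0, a1 = 1 - alpha0, 1 - alpha1   # alpha~_i = 1 - alpha_i
phi0 = (1 + beta0) / a0           # phi_i = beta~_i / alpha~_i
phi1 = (1 + beta1) / a1

# Bounds from (11): tau_D upper bound and tau_F lower bound
tau_D_max = (eps_M * math.log(phi1) + math.log(mu1)) / math.log(a1)
tau_F_min = -(eps_M * math.log(phi0) + math.log(mu0)) / math.log(a0)
print(round(tau_D_max, 2), round(tau_F_min, 2))  # 4.34 6.6
```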
To demonstrate the practicability of the FD filters designed for a networked UMV under DoS attacks, the following simulations are performed. First, the UMV is subjected to thruster faults, external disturbances and DoS attacks. One possible DoS attack sequence is depicted in Fig. 1, where 1 denotes that an attack has occurred and 0 denotes the sleep state with no attack. The DoS attacks lead to asynchronous switching between the filter and the primary system, and the resulting switching sequence between the filter and the subsystem is shown in Fig. 2.
Fig. 1. DoS attacks sequences.
Fig. 2. Switching sequences.
The external disturbance $d\left( k\right)$ is given as the following form
$$
d\left( k\right) = \left\{ {\begin{array}{l} {d}_{1}\left( k\right) = {12}\sin \left( k\right) \exp \left( {-{0.15k}}\right) \\ {d}_{2}\left( k\right) = {15}\sin \left( {0.73k}\right) ,k \in \left\lbrack {5,{37}}\right\rbrack \\ {d}_{3}\left( k\right) = 9\sin \left( {0.2k}\right) ,k \in \left\lbrack {{11},{45}}\right\rbrack \end{array}.}\right.
$$
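Reading $d(k)$ as a three-component disturbance vector whose last two entries are active only on their stated intervals (this reading is our assumption), the signal can be generated as:

```python
import math

def disturbance(k):
    """Disturbance d(k) from the simulation section; d2 and d3 are
    switched on only inside their stated time windows."""
    d1 = 12 * math.sin(k) * math.exp(-0.15 * k)
    d2 = 15 * math.sin(0.73 * k) if 5 <= k <= 37 else 0.0
    d3 = 9 * math.sin(0.2 * k) if 11 <= k <= 45 else 0.0
    return [d1, d2, d3]
```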
Case 1: DoS attack sequence 1 is used, and the fault signal ${f}^{1}\left( k\right)$ takes the following form
$$
{f}^{1}\left( k\right) = \left\{ {\begin{array}{l} {f}_{1}\left( k\right) = 2\sin \left( {0.2k}\right) \\ {f}_{2}\left( k\right) = \cos \left( {0.1k}\right) \\ {f}_{3}\left( k\right) = {0.8}\sin \left( {0.15k}\right) \end{array},k \in \left\lbrack {{25},{35}}\right\rbrack .}\right.
$$
Under the DoS attack sequence and the faults ${f}^{1}\left( k\right)$, the curves of the residual signal $\parallel r\left( k\right) {\parallel }_{2}$ and the REF signal are depicted in Fig. 3 and Fig. 4, respectively. In the fault-free case, the threshold is chosen as the maximum value of the REF signal: ${\mathcal{J}}_{th} = {0.215}$. At $t = {25.11}\,\mathrm{s}$, the fault signal is detected in time.
Fig. 3. The residual signal $\parallel r\left( k\right) {\parallel }_{2}$ in Case 1.
Fig. 4. The REF signal in Case 1.
Case 2: To further verify the sensitivity of the FD filter to faults, a fault with a smaller amplitude than in Case 1 but with the same frequency is selected for verification, and the same DoS attack sequence is used. The fault ${f}^{2}\left( k\right)$ takes the following form
$$
{f}^{2}\left( k\right) = \left\{ {\begin{array}{l} {f}_{1}\left( k\right) = {0.4}\sin \left( {0.2k}\right) \\ {f}_{2}\left( k\right) = {0.2}\cos \left( {0.1k}\right) \\ {f}_{3}\left( k\right) = {0.16}\sin \left( {0.15k}\right) \end{array},k \in \left\lbrack {{25},{35}}\right\rbrack .}\right.
$$
Under the DoS attack sequence and the faults ${f}^{2}\left( k\right)$, the curves of the residual signal $\parallel r\left( k\right) {\parallel }_{2}$ and the REF signal are depicted in Fig. 5 and Fig. 6, respectively. Fig. 6 indicates that the fault detection threshold becomes smaller than in Case 1: ${\mathcal{J}}_{th} = {0.067}$. At $t = {25.27}\,\mathrm{s}$, the fault signal is detected in time. In contrast to Case 1, the residual amplitude and the REF signal are significantly reduced, which shows that the fault amplitude has a non-negligible effect on the system.
Fig. 5. The residual signal $\parallel r\left( k\right) {\parallel }_{2}$ in Case 2.
Fig. 6. The REF signal in Case 2.
§ V. CONCLUSION
To solve the problem that DoS attacks cannot be detected in time, this paper designs exponentially convergent ${H}_{\infty }$ FD filters based on an asynchronous switched method for UMVs under DoS attacks, addressing the issue that the filters' switching frequently lags behind the subsystems in practical applications. On the basis of the MDADT and the PLF, a criterion on the tolerable MDADT is derived to maintain exponential ${H}_{\infty }$ performance. Sufficient conditions for the existence of the designed FD filters are described by LMIs, and the filter gains and the related MDADT parameters can be derived by solving these LMIs. Finally, the effectiveness of the designed filters is verified by numerical simulation.
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/DuY2U9TNuJ/Initial_manuscript_md/Initial_manuscript.md
# Research on battery SOC estimation method by combining optimization algorithm and multi-model Kalman filtering
Zhi Ming Chen
College of Science
Liaoning University of Technology
Jinzhou, China
chenzhiminglab@163.com
Chang Qi Zhu
Navigation College
Dalian Maritime University
Dalian, China
zhuchangqi_work@163.com
Lei Liu
College of Science
Liaoning University of Technology
Jinzhou, China
liuleill@live.cn
*Abstract*—With the rapid growth of electric vehicles and energy storage systems, accurate state of charge (SOC) estimation has become a critical component of battery management systems (BMS), essential for preventing overcharging and over-discharging, enhancing operational safety, and extending battery life. This paper proposes a novel SOC estimation method based on an enhanced self-correcting (ESC) model incorporating a second-order RC circuit, enabling a more accurate simulation of battery response time and dynamic behavior. To improve model reliability, a genetic algorithm-particle swarm optimization (GA-PSO) approach is employed for parameter identification. Additionally, a multi-model adaptive extended Kalman filter (AEKF) algorithm is introduced to achieve precise SOC estimation. MATLAB simulations using constant current discharge and automotive driving cycle data demonstrate that the proposed method outperforms traditional AEKF algorithms, with faster convergence and higher estimation accuracy, particularly in scenarios with varying initial estimation accuracies. The results highlight the potential of this approach to significantly enhance SOC estimation in BMS, contributing to safer operation and prolonged battery life in electric vehicles and energy storage systems.
Keywords—SOC estimation, Enhanced Self-Correcting model, parameter identification, GA-PSO, multi-model AEKF.
## I. INTRODUCTION
As electric vehicles and energy storage systems continue to develop rapidly, the application of batteries as key energy storage devices has become increasingly widespread, highlighting the growing importance of battery management and control [1]. Within battery management systems (BMS), accurately estimating the state of charge (SOC) is a critical task [2]. Precise SOC estimation not only enables more reliable predictions of vehicle range but also improves battery utilization and helps prevent significant reductions in battery lifespan caused by overcharging or deep discharging [3]. However, the nonlinear characteristics, time-varying behavior, and electrochemical reactions of batteries make it impossible to measure their SOC directly with sensors [4]. Instead, SOC must be estimated using indirect measurements such as voltage, current, and temperature. Common SOC estimation methods include approaches based on open-circuit voltage, coulomb counting, data-driven techniques, and model-based estimation methods [5]. Each of these approaches presents distinct advantages and disadvantages [6].
Among these methods, model-based estimation achieves a reasonable balance between accuracy, real-time performance, and computational cost by integrating the battery equivalent circuit model (ECM) with state estimation algorithms. The ECM is a key component in this approach. Previous studies have advanced SOC estimation using various models and algorithms. Li et al. [7] utilized a second-order RC model with a stochastic gradient algorithm for parameter identification and developed a multi-innovation extended Kalman filter, validated experimentally. Shi et al. [8] employed Bayesian belief networks and adaptive extended Kalman particle filtering, demonstrating enhanced convergence and accuracy.
However, these studies largely overlook the hysteresis effect in battery charging and discharging. Gregory L. Plett [9] addressed this by introducing an Enhanced Self-Correcting (ESC) model that incorporates hysteresis into the ECM. Sk Bittu et al. [10] simulated a first-order RC ESC model with an EKF algorithm for SOC estimation but found that the model struggles with complex polarization dynamics, and the EKF's performance deteriorates with significant measurement errors.
Accurate SOC estimation requires precise circuit modeling and effective algorithms. This study incorporates the hysteresis phenomenon using an ESC model with second-order RC characteristics. The GA-PSO algorithm is applied for precise identification of battery model parameters via an optimized fitness function. Additionally, a multi-model AEKF is developed, integrating an adaptive factor into the EKF to refine the gain matrix, thereby improving the capture of the model's dynamic properties. This multi-model approach reduces estimation errors and enhances the robustness, accuracy, and stability of SOC estimation. The main contributions of this paper are as follows:
1) Battery parameter estimation: An ESC model with second-order RC characteristics is used for accurate characterization, with GA-PSO employed for parameter identification, validated through model testing.
2) Multi-model AEKF algorithm for SOC estimation: A multi-model AEKF algorithm is designed, combining adaptive parameters for process noise with a multi-model approach to improve SOC estimation accuracy.
3) Simulation comparative analysis: SOC estimation is analyzed using constant current discharge and automotive driving cycle scenarios, comparing the multi-model AEKF with the traditional AEKF and demonstrating enhanced convergence and accuracy.
|
| 52 |
+
|
| 53 |
+
## II. LITHIUM-ION BATTERY SOC ESTIMATION METHOD

This paper presents an ESC model based on a second-order RC equivalent circuit, incorporating the hysteresis phenomenon observed during battery charging and discharging. The model captures the battery's dynamic behavior, static characteristics, and hysteresis effects, as shown in Fig. 1.

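As a rough illustration of such a model, the sketch below is a standard textbook discretization of a second-order RC circuit with a Plett-style hysteresis state; the paper does not give its discrete-time equations or parameter values, so every number here is a placeholder.

```python
import math

def esc_step(soc, v_rc, h, i, dt, p):
    """One discrete-time step of a second-order RC ESC model.
    i > 0 means discharge; p holds the (placeholder) parameters.
    Returns the next state and the terminal voltage at the current state."""
    v_term = p["ocv"](soc) + p["M"] * h - p["R0"] * i - sum(v_rc)
    soc_next = soc - p["eta"] * dt / p["Q"] * i            # coulomb counting
    v_rc_next = []
    for vj, Rj, Cj in zip(v_rc, p["R"], p["C"]):           # two RC branches
        a = math.exp(-dt / (Rj * Cj))
        v_rc_next.append(a * vj + Rj * (1.0 - a) * i)
    h_next = h
    if i != 0:  # hysteresis decays toward -sign(i) at a current-dependent rate
        ah = math.exp(-abs(p["eta"] * i * p["gamma"] * dt / p["Q"]))
        h_next = ah * h - (1.0 - ah) * math.copysign(1.0, i)
    return soc_next, v_rc_next, h_next, v_term

# placeholder parameters for a ~2.5 Ah cell with a toy linear OCV curve
params = dict(eta=1.0, Q=2.5 * 3600, R0=0.01, R=[0.015, 0.025],
              C=[2000.0, 60000.0], gamma=50.0, M=0.02,
              ocv=lambda s: 3.0 + 1.2 * s)
soc, v_rc, h = 1.0, [0.0, 0.0], 0.0
for _ in range(600):                    # 10 minutes at 1 Hz, 2.5 A discharge
    soc, v_rc, h, v = esc_step(soc, v_rc, h, 2.5, 1.0, params)
print(round(soc, 3), round(v, 3))
```

The state vector (SOC, two polarization voltages, hysteresis) is exactly what a Kalman filter built on this model would track.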

|
| 58 |
+
|
| 59 |
+
Fig. 1. ESC model of second-order RC
|
| 60 |
+
|
| 61 |
+
To identify the unknown parameters in the ESC model, the GA-PSO algorithm, an integration of Genetic Algorithm and Particle Swarm Optimization, is utilized. The process initiates with GA generating an initial population of parameter sets, which are subsequently evaluated by comparing the model's predictions with experimental battery data. GA operations, including selection, crossover, and mutation, are employed to refine these parameters, while PSO dynamically adjusts their search direction. After several iterations, the algorithm converges on the optimal parameter set, facilitating precise SOC estimation.
|
| 62 |
+
|
| 63 |
+
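The paper does not give pseudocode for the hybrid, so the following is only one plausible reading of the loop described above: a population refined by GA selection/crossover/mutation, with PSO velocity updates steering the search direction, and RMSE against measured data as the fitness. The linear toy model inside `fitness` is a stand-in for the ESC model's simulated terminal voltage.

```python
import random

def fitness(theta, data):
    """RMSE between a toy model prediction and 'measured' values."""
    return (sum((theta[0] * u + theta[1] - y) ** 2
                for u, y in data) / len(data)) ** 0.5

def ga_pso_identify(data, bounds, pop=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    dim, rnd = len(bounds), random.Random(0)
    X = [[rnd.uniform(*bounds[d]) for d in range(dim)] for _ in range(pop)]
    V = [[0.0] * dim for _ in range(pop)]
    pbest = [x[:] for x in X]
    gbest = min(X, key=lambda x: fitness(x, data))[:]
    for _ in range(iters):
        # PSO step: velocities steer particles toward personal/global bests
        for i in range(pop):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rnd.random() * (pbest[i][d] - X[i][d])
                           + c2 * rnd.random() * (gbest[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]), bounds[d][1])
        # GA step: keep the best half, refill with crossover + mutation
        ranked = sorted(X, key=lambda x: fitness(x, data))
        X = ranked[: pop // 2]
        while len(X) < pop:
            a, b = rnd.sample(ranked[: pop // 2], 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]
            d = rnd.randrange(dim)                      # mutate one gene
            child[d] = min(max(child[d] + rnd.gauss(0, 0.1),
                               bounds[d][0]), bounds[d][1])
            X.append(child)
        V = [[0.0] * dim for _ in range(pop)]           # reset after GA reshuffle
        for i in range(pop):
            if fitness(X[i], data) < fitness(pbest[i], data):
                pbest[i] = X[i][:]
        gbest = min(pbest, key=lambda x: fitness(x, data))[:]
    return gbest

# synthetic "measurements" generated by theta = (0.05, 3.6)
data = [(u, 0.05 * u + 3.6) for u in range(-10, 11)]
theta = ga_pso_identify(data, bounds=[(0.0, 0.2), (3.0, 4.2)])
print([round(t, 3) for t in theta])
```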
Building on the ESC model and parameter identification, a multi-model AEKF framework is developed to enhance SOC estimation. This framework employs multi-model fusion, integrating the estimates from several models to improve filter performance and robustness. The battery SOC is quantized into discrete sets, with $n$ AEKF models constructed. The conditional probability of each SOC is calculated using Bayesian rules, and the SOC with the highest probability is selected for each time step. By using conditional probability as the switching rule, the multi-model AEKF adapts to varying operating conditions and improves SOC estimation accuracy and stability. The Bayesian rule used to compute these conditional probabilities is given by the following formula:

$$
p\left( {s}_{i} \mid {Y}_{k}\right) = \frac{p\left( {y}_{k} \mid {Y}_{k-1}, {s}_{i}\right) p\left( {Y}_{k-1} \mid {s}_{i}\right) p\left( {s}_{i}\right)}{\sum\limits_{i=1}^{N} p\left( {y}_{k} \mid {Y}_{k-1}, {s}_{i}\right) p\left( {Y}_{k-1} \mid {s}_{i}\right) p\left( {s}_{i}\right)} \tag{1}
$$

where $p\left( {s}_{i}\right)$ denotes the prior probability, reflecting the initial estimate of the state ${s}_{i}$ in the absence of any measurement information. The entire expression gives the posterior probability of each candidate state given all previous measurements ${Y}_{k-1}$ and the current measurement ${y}_{k}$.

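Eq. (1) can be applied recursively. A minimal sketch, assuming each model's likelihood $p(y_k \mid Y_{k-1}, s_i)$ is a Gaussian in that model's filter innovation (a common choice for Kalman-filter banks; the paper does not state its likelihood form):

```python
import math

def update_model_probs(probs, innovations, innov_vars):
    """Recursive form of Eq. (1): scale each model's running probability
    by its Gaussian innovation likelihood, then renormalize."""
    liks = [math.exp(-e * e / (2 * s)) / math.sqrt(2 * math.pi * s)
            for e, s in zip(innovations, innov_vars)]
    post = [p * l for p, l in zip(probs, liks)]
    z = sum(post)
    return [p / z for p in post]

# three SOC-band models; model 1 explains the measurement best
probs = [1 / 3, 1 / 3, 1 / 3]                 # equal priors p(s_i)
probs = update_model_probs(probs, innovations=[0.08, 0.01, 0.05],
                           innov_vars=[0.01, 0.01, 0.01])
best = max(range(3), key=lambda i: probs[i])  # switching rule: most probable
print(best, [round(p, 3) for p in probs])
```

The `best` index implements the switching rule: the SOC estimate of the most probable model is selected at each time step.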
MATLAB simulations were conducted to model constant current discharge and automotive driving cycle discharge scenarios. A comparative experiment was set up between the traditional AEKF and the multi-model AEKF, focusing on evaluating their convergence performance and accuracy under conditions of unstable initial parameters and complex variations in discharge current.

## III. CONCLUSION

This paper focuses on the estimation performance of SOC in lithium-ion batteries. A second-order RC ESC model is considered, and the battery parameters are identified using the GA-PSO algorithm. Additionally, accurate estimation of battery SOC is achieved through the implementation of a multi-model adaptive extended Kalman filter. To validate the effectiveness of the proposed method, a series of simulation comparisons are conducted. The simulation results demonstrate that the proposed multi-model AEKF algorithm exhibits fast convergence and high estimation accuracy in predicting battery SOC.

## REFERENCES

[1] J. S. Goud, K. R, and B. Singh, "An online method of estimating state of health of a Li-ion battery," IEEE Transactions on Energy Conversion, vol. 36, no. 1, pp. 111-119, Mar. 2021.

[2] X. Fan, W. Zhang, C. Zhang, A. Chen, and F. An, "SOC estimation of Li-ion battery using convolutional neural network with U-Net architecture," Energy, vol. 256, p. 124612, Oct. 2022.

[3] A. Tang, Y. Huang, S. Liu, Q. Yu, W. Shen, and R. Xiong, "A novel lithium-ion battery state of charge estimation method based on the fusion of neural network and equivalent circuit models," Applied Energy, vol. 348, p. 121578, Oct. 2023.

[4] F. Li, W. Zuo, K. Zhou, Q. Li, Y. Huang, and G. Zhang, "State-of-charge estimation of lithium-ion battery based on second order resistor-capacitance circuit-PSO-TCN model," Energy, vol. 289, p. 130025, Feb. 2024.

[5] H. Yu, L. Zhang, W. Wang, S. Li, S. Chen, S. Yang, J. Li, and X. Liu, "State of charge estimation method by using a simplified electrochemical model in deep learning framework for lithium-ion batteries," Energy, vol. 278, p. 127846, Sep. 2023.

[6] H. Zhang, J. Xiong, S. Li, L. Sun, and Y. Zhang, "A review on estimation strategies of lithium-ion battery state of charge and health for electric vehicle applications," Journal of Power Sources, vol. 356, pp. 11-26, 2017.

[7] W. Li, Y. Yang, D. Wang, and S. Yin, "The multi-innovation extended Kalman filter algorithm for battery SOC estimation," Ionics, vol. 26, pp. 6145-6156, Dec. 2020.

[8] Q. Shi, Z. Jiang, Z. Wang, and L. He, "State of charge estimation by joint approach with model-based and data-driven algorithm for lithium-ion battery," IEEE Transactions on Instrumentation and Measurement, vol. 71, pp. 1-10, Aug. 2022.

[9] G. L. Plett, Battery Management Systems, Volume I: Battery Modeling, Artech House, ch. 2, sec. 8, p. 44, 2015.

[10] S. Bittu, S. Halder, S. Kumar, N. Das, S. Bhattacharjee, and M. Ghosh, "Battery SOC estimation using enhanced self-correcting model-based extended Kalman filter," in 2023 7th International Conference on Computer Applications in Electrical Engineering-Recent Advances (CERA), IEEE, pp. 1-6, 2023.

papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/DuY2U9TNuJ/Initial_manuscript_tex/Initial_manuscript.tex
§ RESEARCH ON BATTERY SOC ESTIMATION METHOD BY COMBINING OPTIMIZATION ALGORITHM AND MULTI-MODEL KALMAN FILTERING

Zhi Ming Chen
College of Science
Liaoning University of Technology
Jinzhou, China
chenzhiminglab@163.com

Chang Qi Zhu
Navigation College
Dalian Maritime University
Dalian, China
zhuchangqi_work@163.com

Lei Liu
College of Science
Liaoning University of Technology
Jinzhou, China
liuleill@live.cn

Abstract—With the rapid growth of electric vehicles and energy storage systems, accurate state of charge (SOC) estimation has become a critical component of battery management systems (BMS), essential for preventing overcharging and over-discharging, enhancing operational safety, and extending battery life. This paper proposes a novel SOC estimation method based on an enhanced self-correcting (ESC) model incorporating a second-order RC circuit, enabling a more accurate simulation of battery response time and dynamic behavior. To improve model reliability, a genetic algorithm-particle swarm optimization (GA-PSO) approach is employed for parameter identification. Additionally, a multi-model adaptive extended Kalman filter (AEKF) algorithm is introduced to achieve precise SOC estimation. MATLAB simulations using constant current discharge and automotive driving cycle data demonstrate that the proposed method outperforms traditional AEKF algorithms, with faster convergence and higher estimation accuracy, particularly in scenarios with varying initial estimation accuracies. The results highlight the potential of this approach to significantly enhance SOC estimation in BMS, contributing to safer operation and prolonged battery life in electric vehicles and energy storage systems.

Keywords—SOC estimation, Enhanced Self-Correcting model, parameter identification, GA-PSO, multi-model AEKF.

§ I. INTRODUCTION

As electric vehicles and energy storage systems continue to develop rapidly, the application of batteries as key energy storage devices has become increasingly widespread, highlighting the growing importance of battery management and control [1]. Within battery management systems (BMS), accurately estimating the state of charge (SOC) is a critical task [2]. Precise SOC estimation not only enables more reliable predictions of vehicle range but also improves battery utilization and helps prevent significant reductions in battery lifespan caused by overcharging or deep discharging [3]. However, the nonlinear characteristics, time-varying behavior, and electrochemical reactions of batteries make it impossible to measure their SOC directly with sensors [4]. Instead, SOC must be estimated using indirect measurements such as voltage, current, and temperature. Common SOC estimation methods include approaches based on open-circuit voltage, coulomb counting, data-driven techniques, and model-based estimation methods [5]. Each of these approaches presents distinct advantages and disadvantages [6].

Among these methods, model-based estimation achieves a reasonable balance between accuracy, real-time performance, and computational cost by integrating the battery equivalent circuit model (ECM) with state estimation algorithms. The ECM is a key component in this approach. Previous studies have advanced SOC estimation using various models and algorithms. Li et al. [7] utilized a second-order RC model with a stochastic gradient algorithm for parameter identification and developed a multi-innovation extended Kalman filter, validated experimentally. Shi et al. [8] employed Bayesian belief networks and adaptive extended Kalman particle filtering, demonstrating enhanced convergence and accuracy.

However, these studies largely overlook the hysteresis effect in battery charging and discharging. Plett [9] addressed this by introducing an Enhanced Self-Correcting (ESC) model that incorporates hysteresis into the ECM. Bittu et al. [10] simulated a first-order RC ESC model with an EKF algorithm for SOC estimation but found that the model struggles with complex polarization dynamics, and the EKF's performance deteriorates with significant measurement errors.

Accurate SOC estimation requires precise circuit modeling and effective algorithms. This study incorporates the hysteresis phenomenon using an ESC model with second-order RC characteristics. The GA-PSO algorithm is applied for precise identification of battery model parameters via an optimized fitness function. Additionally, a multi-model AEKF is developed, integrating an adaptive factor into the EKF to refine the gain matrix, thereby improving the capture of the model's dynamic properties. This multi-model approach reduces estimation errors and enhances the robustness, accuracy, and stability of SOC estimation. The main contributions of this paper are as follows:

1) Battery parameter estimation: An ESC model with second-order RC characteristics is used for accurate characterization, with GA-PSO employed for parameter identification, validated through model testing.

2) Multi-model AEKF algorithm for SOC estimation: A multi-model AEKF algorithm is designed, combining adaptive parameters for process noise with a multi-model approach to improve SOC estimation accuracy.

3) Simulation comparative analysis: SOC estimation is analyzed under constant current discharge and automotive driving cycle scenarios, comparing the multi-model AEKF with the traditional AEKF and demonstrating enhanced convergence and accuracy.

§ II. LITHIUM-ION BATTERY SOC ESTIMATION METHOD

This paper presents an ESC model based on a second-order RC equivalent circuit, incorporating the hysteresis phenomenon observed during battery charging and discharging. The model captures the battery's dynamic behavior, static characteristics, and hysteresis effects, as shown in Fig. 1.

Fig. 1. ESC model of second-order RC

To identify the unknown parameters in the ESC model, the GA-PSO algorithm, an integration of Genetic Algorithm and Particle Swarm Optimization, is utilized. The process initiates with GA generating an initial population of parameter sets, which are subsequently evaluated by comparing the model's predictions with experimental battery data. GA operations, including selection, crossover, and mutation, are employed to refine these parameters, while PSO dynamically adjusts their search direction. After several iterations, the algorithm converges on the optimal parameter set, facilitating precise SOC estimation.

Building on the ESC model and parameter identification, a multi-model AEKF framework is developed to enhance SOC estimation. This framework employs multi-model fusion, integrating the estimates from several models to improve filter performance and robustness. The battery SOC is quantized into discrete sets, with $n$ AEKF models constructed. The conditional probability of each SOC is calculated using Bayesian rules, and the SOC with the highest probability is selected for each time step. By using conditional probability as the switching rule, the multi-model AEKF adapts to varying operating conditions and improves SOC estimation accuracy and stability. The Bayesian rule used to compute these conditional probabilities is given by the following formula:

$$
p\left( {s}_{i} \mid {Y}_{k}\right) = \frac{p\left( {y}_{k} \mid {Y}_{k-1}, {s}_{i}\right) p\left( {Y}_{k-1} \mid {s}_{i}\right) p\left( {s}_{i}\right)}{\sum\limits_{i=1}^{N} p\left( {y}_{k} \mid {Y}_{k-1}, {s}_{i}\right) p\left( {Y}_{k-1} \mid {s}_{i}\right) p\left( {s}_{i}\right)} \tag{1}
$$

where $p\left( {s}_{i}\right)$ denotes the prior probability, reflecting the initial estimate of the state ${s}_{i}$ in the absence of any measurement information. The entire expression gives the posterior probability of each candidate state given all previous measurements ${Y}_{k-1}$ and the current measurement ${y}_{k}$.

MATLAB simulations were conducted to model constant current discharge and automotive driving cycle discharge scenarios. A comparative experiment was set up between the traditional AEKF and the multi-model AEKF, focusing on evaluating their convergence performance and accuracy under conditions of unstable initial parameters and complex variations in discharge current.

§ III. CONCLUSION

This paper focuses on the estimation performance of SOC in lithium-ion batteries. A second-order RC ESC model is considered, and the battery parameters are identified using the GA-PSO algorithm. Additionally, accurate estimation of battery SOC is achieved through the implementation of a multi-model adaptive extended Kalman filter. To validate the effectiveness of the proposed method, a series of simulation comparisons are conducted. The simulation results demonstrate that the proposed multi-model AEKF algorithm exhibits fast convergence and high estimation accuracy in predicting battery SOC.

papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/ED7EDryw3i/Initial_manuscript_md/Initial_manuscript.md
# Dynamic Target Pursuit by Multi-UAV Under Communication Coverage: ACO-MATD3 Approach

${1}^{\text{st}}$ Zhuang Cao
School of Information and Communication Engineering
Hainan University
Haikou, Hainan
hnucz@hainanu.edu.cn

${2}^{\text{nd}}$ Di Wu*
School of Information and Communication Engineering
Hainan University
Haikou, Hainan
hainuicaplab@hainanu.edu.cn

Abstract—This study proposes a new approach for cooperative pursuit of dynamic targets under communication coverage involving multiple unmanned aerial vehicles (UAVs). The approach combines the ant colony optimization algorithm with the multi-agent twin delayed deep deterministic policy gradient, called ACO-MATD3. The ACO-MATD3 algorithm dynamically adjusts hyperparameters based on varying stages and requirements, greatly enhancing the stability and performance of cooperative multi-UAV pursuit tasks, especially under strong communication coverage. Experimental results demonstrate that the ACO-MATD3 algorithm significantly outperforms other algorithms in terms of mean reward and communication return.

Index Terms—Multi-UAV, pursuit, communication coverage, ant colony optimization algorithm, multi-agent reinforcement learning

## I. INTRODUCTION

In recent years, multi-unmanned aerial vehicle (UAV) systems have found extensive applications in fields like agriculture [1], environmental monitoring [2], and communication [3], [4], due to their flexibility and ease of deployment. As technology progresses, UAVs are tasked with more complex challenges such as pursuing dynamic targets, where UAVs must consistently pursue and approach a moving target in complex environments through strategic adjustments. This pursuit involves a strategic interaction between the UAVs and the targets, where effective decision-making is vital for success and showcases the UAVs' intelligence. Therefore, developing effective pursuit strategies is crucial.

Significant research has been conducted on UAV pursuit using traditional methods. For instance, the study in [5] developed a cooperative pursuit-evasion strategy for UAVs in a complex 3D environment, utilizing a heterogeneous system to enhance spatial perception and decision-making. However, this approach encounters challenges related to scalability, computational demands, and robustness in dynamic environments. In [6], the problem of minimizing the time for a UAV to pursue a moving ground target was addressed by optimizing the pursuit strategy using sensor data. Additionally, a hierarchical game structure was proposed in [7] to enhance the cooperative pursuit-evasion capabilities of UAVs in dynamic environments. Despite these advancements, the high computational complexity of these methods and the necessity to predefine the UAVs' flight paths limit their applicability in unknown environments.

Fortunately, advancements in deep reinforcement learning (DRL) have introduced new methods for addressing UAV pursuit problems. Techniques such as the deep deterministic policy gradient (DDPG) [8] and twin delayed deep deterministic policy gradient (TD3) [9] enable simultaneous learning of value and policy functions, thereby enhancing algorithm efficiency and stability. However, in multi-agent environments, interactions between agents can lead to policy non-convergence when DRL algorithms are applied directly. To address this issue, multi-agent reinforcement learning (MARL) algorithms, including the multi-agent deep deterministic policy gradient (MADDPG) [10] and multi-agent twin delayed deep deterministic policy gradient (MATD3) [11], have been developed; MATD3 is an improvement on MADDPG. These algorithms improve stability and collaboration among agents by employing a centralized training and decentralized execution (CTDE) mechanism [10].

Building on these DRL methods, several studies have attempted to use DRL to solve UAV pursuit tasks. An approach proposed for UAV pursuit-evasion games utilizes hierarchical maneuvering decision-making with the soft actor-critic algorithm [12] to enhance autonomy and strategic flexibility in complex environments. However, this method must contend with high-dimensional state spaces. Another study [13] proposed a UAV pursuit policy combining DDPG with imitation learning to improve sample exploration efficiency, resulting in better performance and faster convergence than the traditional DDPG method. A multi-UAV pursuit-evasion game was also explored in [14], utilizing online motion planning and DRL to enhance UAV interactions in complex environments. However, these studies still do not address the challenge of maintaining communication among UAVs while performing their tasks.

Building on the related research above, we propose an algorithm that combines MATD3 and the ant colony optimization (ACO) algorithm to address the multi-UAV cooperative pursuit problem under communication coverage, called ACO-MATD3. The algorithm adaptively selects the optimal hyperparameters at different stages of the training process. As a result, the multi-UAV system learns a policy that allows it to pursue dynamic targets in the airspace without prior knowledge, while maintaining strong communication coverage from base stations (BSs). The main contributions of this paper are as follows:

---

This work is partly supported by the "South China Sea Rising Star" Education Platform Foundation of Hainan Province (JYNHXX2023-17G), the Natural Science Foundation of Hainan Province (624MS036), and the Postgraduate Innovation Projects in Hainan Province (Qhys2023-290).

Corresponding author: Di Wu.

---

(1) In contrast to non-learning-based approaches [5], [6], [7], the multi-UAV cooperative pursuit problem under communication coverage is formulated as a Markov game. Each UAV operates as an independent agent while cooperating with others to maximize cumulative rewards and optimize their policies.

(2) Unlike other DRL-based approaches [12], [13], [14], this study investigates the communication connectivity between the multi-UAV system and the BSs during pursuit tasks, and considers the effect of environmental noise on communication.

(3) Compared with the MATD3 [11] algorithm, the ACO-MATD3 algorithm proposed in this study can dynamically optimize the hyperparameters according to the training stage, reducing the impact of hyperparameter choices on performance and improving training efficiency and effectiveness.

The paper is organized as follows: Section 2 provides the problem description and system modeling. Section 3 presents the proposed ACO-MATD3 algorithm. Section 4 analyses the experimental results. Section 5 concludes the paper.

## II. Problem Description and System Modeling

In this section, we describe the multi-UAV pursuit problem under communication coverage. Then the BS antenna model and the path loss model are introduced. Finally, we illustrate the communication coverage model used in this experiment.

## A. Problem Description

![bd32a06885d62f8eca02dbef05c98c62](fig1.jpg)

Fig. 1: Communication coverage strength map.

This experiment investigates the multi-UAV pursuit problem under communication coverage, involving multiple UAVs, obstacles, and dynamic targets, as shown in Fig. 1. Their initial positions are randomly generated. The BSs support UAV communication, with the blue shading in Fig. 1 indicating the strength of the communication coverage. While pursuing the dynamic targets, each UAV must avoid collisions with obstacles and maintain strong communication coverage.

## B. Antenna Model and Path Loss Model

This experiment formulates the antenna model of the BSs according to the 3GPP [15] specification. Each BS has the same height ${h}_{BS}$ and is divided into three sectors, each equipped with a vertically placed uniform linear array of 8 elements.

The radiation pattern of each element is determined by combining its horizontal and vertical radiation patterns, defined as follows:

$$
{AH} = - \min \left\lbrack {12}{\left( \frac{\phi}{{\phi}_{3dB}}\right)}^{2}, {A}_{m}\right\rbrack \tag{1}
$$

$$
{AV} = - \min \left\lbrack {12}{\left( \frac{\theta - {90}}{{\theta}_{3dB}}\right)}^{2}, {A}_{m}\right\rbrack \tag{2}
$$

where $\phi$ is the azimuth angle, indicating the angle of the antenna in the horizontal plane, and $\theta$ is the elevation angle, indicating the angle of the antenna in the vertical plane. Both are in degrees; ${\phi}_{3dB}$ and ${\theta}_{3dB}$ are the half-power beamwidths, and ${A}_{m}$ is the element gain threshold.

The total gain of the antenna elements is expressed in dB as:

$$
{G}_{ele_{dB}} = {G}_{\max} + {A}_{ele} = {G}_{\max} + \left\{ -\min \left\lbrack -\left( {AH} + {AV}\right), {A}_{m}\right\rbrack \right\} \tag{3}
$$

where ${A}_{ele}$ represents the power gain of the antenna element and ${G}_{\max}$ is the maximum directional gain of the antenna element. For ease of computation, we convert ${G}_{ele_{dB}}$ to the linear-scale ${G}_{ele}$.

The combined gain of the antenna array is expressed in dB as:

$$
G = {10} \times {\log}_{10}{\left| {F}_{ele} \times {AF}\right|}^{2} \tag{4}
$$

where ${F}_{ele}$ is the arithmetic square root of ${G}_{ele}$, and ${AF}$ is the antenna array factor.

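Eqs. (1)-(3) can be evaluated directly. In the sketch below the default beamwidths, gain threshold, and maximum element gain are typical 3GPP values, not taken from the paper:

```python
def element_gain_db(phi, theta, phi_3db=65.0, theta_3db=65.0,
                    a_m=30.0, g_max=8.0):
    """Antenna element gain in dB following Eqs. (1)-(3).
    phi: azimuth in degrees; theta: elevation in degrees (90 = horizon)."""
    ah = -min(12.0 * (phi / phi_3db) ** 2, a_m)               # Eq. (1)
    av = -min(12.0 * ((theta - 90.0) / theta_3db) ** 2, a_m)  # Eq. (2)
    a_ele = -min(-(ah + av), a_m)          # combined element pattern A_ele
    return g_max + a_ele                   # Eq. (3), in dB

print(element_gain_db(0.0, 90.0))    # boresight: full gain, 8.0 dB
print(element_gain_db(32.5, 90.0))   # half the 3 dB beamwidth off: 5.0 dB
```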
This experiment determines whether the communication link between the UAV and a BS sector is a line-of-sight (LoS) link or a non-line-of-sight (NLoS) link by assessing whether buildings in the environment obscure the link. The path loss of the LoS link from the UAV to sector $m$ is expressed in dB as:

$$
{h}_{m}^{\mathrm{LoS}}\left( t\right) = {28} + {22}{\log}_{10}{d}_{m}\left( t\right) + {20}{\log}_{10}{f}_{c} \tag{5}
$$

where ${d}_{m}\left( t\right)$ represents the distance between the UAV and sector $m$, and ${f}_{c}$ denotes the carrier frequency.

The path loss of the NLoS link between the UAV and sector $m$ is given in dB as:

$$
{h}_{m}^{\mathrm{NLoS}}\left( t\right) = -{17.5} + \left( {46} - 7{\log}_{10}h\left( t\right)\right) {\log}_{10}{d}_{m}\left( t\right) + {20}{\log}_{10}\left( {40\pi}{f}_{c}/3\right) \tag{6}
$$

where $h\left( t\right)$ is the height of the UAV at time $t$.

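A direct transcription of Eqs. (5) and (6), assuming distance in meters and carrier frequency in GHz (the units used in the corresponding 3GPP formulas; the paper does not state its units explicitly):

```python
import math

def path_loss_db(d_m, f_c_ghz, h_uav, los):
    """Path loss per Eqs. (5)-(6); d_m in meters, f_c in GHz (assumed units)."""
    if los:                                                   # Eq. (5)
        return 28.0 + 22.0 * math.log10(d_m) + 20.0 * math.log10(f_c_ghz)
    return (-17.5 + (46.0 - 7.0 * math.log10(h_uav)) * math.log10(d_m)
            + 20.0 * math.log10(40.0 * math.pi * f_c_ghz / 3.0))  # Eq. (6)

# 100 m link at 2 GHz, UAV at 50 m altitude
print(round(path_loss_db(100.0, 2.0, 50.0, los=True), 1))   # → 78.0
print(round(path_loss_db(100.0, 2.0, 50.0, los=False), 1))  # → 89.2
```

As expected, losing line of sight costs roughly 11 dB at this geometry.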
In addition, the small-scale channel fading is Rician in the LoS case and Rayleigh in the NLoS case.

## C. Communication Model

The baseband equivalent channel between the UAV and communication BS sector $m$ at time $t$ is denoted by ${H}_{m}\left( t\right)$, where $1 \leq m \leq M$, and $M$ represents the total number of communication BS sectors linked with the UAV throughout its flight. The baseband equivalent channel ${H}_{m}\left( t\right)$ is influenced by the BS antenna array gain $G$, the path loss $\beta$, and the small-scale fading $h$. The magnitudes of ${H}_{m}\left( t\right)$ and $\beta$ depend on the position $q\left( t\right)$ of the UAV at time $t$, while $h$ is a random variable. The signal power received by the UAV from communication BS sector $m$ at time $t$ can be expressed as:

$$
{P}_{m}\left( t\right) = \bar{P}{\left| {H}_{m}\left( t\right)\right|}^{2} = \bar{P}{G}_{m}\left( q\left( t\right)\right) \beta\left( q\left( t\right)\right) h\left( t\right) \tag{7}
$$

where $\bar{P}$ represents the transmit power of BS sector $m$, which is assumed to remain constant. The path loss is calculated using the following equation:

$$
\beta\left( q\left( t\right)\right) = \left\{ \begin{array}{l} P{L}_{LoS}, \text{ if LoS link } \\ P{L}_{NLoS}, \text{ if NLoS link } \end{array}\right. \tag{8}
$$

where $P{L}_{LoS}$ and $P{L}_{NLoS}$ are the linear scales of ${h}_{m}^{\mathrm{LoS}}\left( t\right)$ and ${h}_{m}^{\mathrm{NLoS}}\left( t\right)$, respectively.

In this experiment, the signal to interference plus noise ratio (SINR) is used as a crucial criterion for evaluating the communication coverage performance of UAVs. This criterion can be expressed as:
|
| 142 |
+
|
| 143 |
+
$$
|
| 144 |
+
{SIN}{R}_{t} = \frac{{P}_{m}\left( t\right) }{\mathop{\sum }\limits_{{n \neq m}}{P}_{n}\left( t\right) + {\sigma }^{2}} \tag{9}
|
| 145 |
+
$$
|
| 146 |
+
|
| 147 |
+
where $n$ indexes the BS sectors not associated with the UAV at time $t$ . The UAV's communication quality is thus degraded both by interference from all non-associated BS sectors and by environmental noise.
|
| 148 |
+
|
| 149 |
+
To ensure communication coverage while the UAV is airborne, the SINR of the UAV should not drop below a minimum threshold $\alpha$ . That is, the UAV is not under the communication coverage of the BS when $\operatorname{SINR}\left( t\right) < \alpha$ . Each UAV has an independent SINR at time $t$ .
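Under these definitions, the coverage test of Eq. (9) reduces to a few lines. A minimal sketch, assuming linear-scale powers; the function names are ours, not the paper's:

```python
import numpy as np

def sinr(received_powers, m, noise_power):
    """SINR of a UAV associated with sector m (Eq. 9).

    received_powers: linear-scale powers P_n(t) from every sector.
    noise_power: linear-scale noise sigma^2.
    """
    p = np.asarray(received_powers, dtype=float)
    signal = p[m]
    interference = p.sum() - signal  # sum over all sectors n != m
    return signal / (interference + noise_power)

def under_coverage(received_powers, m, noise_power, alpha):
    """True if the UAV's SINR stays above the minimum threshold alpha."""
    return sinr(received_powers, m, noise_power) >= alpha
```

Each UAV evaluates this independently at every time step, since each has its own associated sector and received powers.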
|
| 150 |
+
|
| 151 |
+
## III. Multi-UAV COOPERATIVE PURSUIT USING ACO-MATD3
|
| 152 |
+
|
| 153 |
+
In this section, we characterize the UAV's state space, action space, and reward function within a Markov game framework and detail our proposed ACO-MATD3 algorithm.
|
| 154 |
+
|
| 155 |
+
## A. Markov Game with Multi-UAV
|
| 156 |
+
|
| 157 |
+
This subsection explores the framework of the Markov game as applied to multi-UAV systems. It details the state and action spaces for UAVs and defines the reward function guiding their interactions in a complex environment.
|
| 158 |
+
|
| 159 |
+
The state space for each UAV $i$ at time $t$ is defined as ${s}_{it} = \left( {{s}_{ut},{s}_{ot},{SIN}{R}_{t}}\right)$ , where ${s}_{ut} = \left( {{x}_{t},{y}_{t},{v}_{xt},{v}_{yt}}\right)$ is a combination of the position and the speed. Additionally, ${s}_{ot} =$ $\left( {{l}_{uu},{l}_{uo},{l}_{ut}}\right)$ represents the distance from the UAV to other UAVs, obstacles and dynamic targets, respectively. ${SIN}{R}_{t}$ denotes the SINR of the UAV at that moment.
|
| 160 |
+
|
| 161 |
+
The action space for each UAV is discrete. The action of UAV $i$ is defined as ${V}_{u} = \left( {{V}_{x},{V}_{y}}\right)$ , denoting the velocity components along the $\mathrm{x}$ -axis and $\mathrm{y}$ -axis, respectively. The UAV also adjusts its own speed when a collision occurs.
|
| 162 |
+
|
| 163 |
+
The reward function for the UAVs in this experiment has four components. It encourages the UAV to quickly pursue the dynamic target through a distance-based term ${R}_{\text{dist }}$ and provides a reward ${R}_{\text{goal }}$ upon successful pursuit. It also penalizes collisions to ensure safe flight and rewards higher SINR to promote flying in areas with better communication coverage. The reward function can be expressed as:
|
| 164 |
+
|
| 165 |
+
$$
|
| 166 |
+
r\left( {{s}_{t},{a}_{t}}\right) = {R}_{\text{dist }} + {R}_{\text{goal }} + {R}_{\text{coll }} + {R}_{{\text{SINR }}_{t}} \tag{10}
|
| 167 |
+
$$
|
| 168 |
+
|
| 169 |
+
## B. Fundamental of the ACO-MATD3 Approach
|
| 170 |
+
|
| 171 |
+

|
| 172 |
+
|
| 173 |
+
Fig. 2: Framework of ACO-MATD3 algorithm.
|
| 174 |
+
|
| 175 |
+
The ACO algorithm is an optimization algorithm that simulates the foraging behavior of ants. It directs the ant colony towards the optimal path in complex search spaces through pheromone accumulation and evaporation, combined with a probabilistic selection mechanism. This experiment combines the ACO algorithm with the MATD3 algorithm, aiming to dynamically choose the most appropriate learning rate $\alpha$ , discount factor $\gamma$ , and batch size $\mathcal{B}$ based on the current situation at different stages. This integration enhances the adaptability and robustness of the ACO-MATD3 algorithm. The framework of the algorithm is illustrated in Fig. 2.
|
| 176 |
+
|
| 177 |
+
We define a search space containing three hyperparameters: $\alpha ,\gamma$ , and $\mathcal{B}$ . Each hyperparameter has multiple candidate values, and the range of values for these hyperparameters is given in detail in the next section. Additionally, we initialize a pheromone matrix.
|
| 178 |
+
|
| 179 |
+
In the initialization phase, we establish an initial colony of 100 ants. Each ant's hyperparameter configuration is derived by calculating selection probabilities based on the current values in the pheromone matrix. These probabilities then guide the random selection of hyperparameters from the corresponding spaces. The selection probability for each hyperparameter value is calculated as follows:
|
| 180 |
+
|
| 181 |
+
$$
|
| 182 |
+
p\left( {v}_{i}\right) = \frac{\tau \left( {v}_{i}\right) }{\mathop{\sum }\limits_{{k = 1}}^{n}\tau \left( {v}_{k}\right) } \tag{11}
|
| 183 |
+
$$
|
| 184 |
+
|
| 185 |
+
where $p\left( {v}_{i}\right)$ represents the probability of selecting the $i$ -th value, $\tau \left( {v}_{i}\right)$ denotes the pheromone level associated with the $i$ -th value, and $n$ is the total number of possible values for the hyperparameter. This approach ensures that the search space is thoroughly explored, enabling the algorithm to evaluate a wide array of potential solutions right from the start.
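This pheromone-proportional choice is a standard roulette-wheel selection. A minimal sketch, with an injectable random source (our addition, for reproducibility):

```python
import random

def select_value(values, pheromones, rng=random.random):
    """Pick values[i] with probability tau(v_i) / sum_k tau(v_k) (Eq. 11)."""
    total = sum(pheromones)
    r = rng() * total  # uniform point on the cumulative pheromone mass
    acc = 0.0
    for v, tau in zip(values, pheromones):
        acc += tau
        if r <= acc:
            return v
    return values[-1]  # guard against floating-point rounding
```

An ant's configuration is assembled by calling this once per hyperparameter, e.g. over the learning-rate, discount-factor, and batch-size candidate lists.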
|
| 186 |
+
|
| 187 |
+
In the multi-UAV system, each UAV uses hyperparameters derived from the current ant's configuration to execute a pursuit task, and the resulting reward values are recorded. If the reward from a particular set of hyperparameters exceeds the highest reward recorded in previous iterations, that configuration is designated as the optimal set for the current phase.
|
| 188 |
+
|
| 189 |
+
After each iteration, the pheromone level is adjusted according to the optimal hyperparameter configuration determined during the evaluation process. During this update, the pheromone level for the chosen optimal configuration is increased to reinforce its selection in future iterations. Simultaneously, the pheromone levels for the other hyperparameter values are reduced according to the evaporation rate, preserving diversity in the search process and preventing premature convergence. This pheromone updating method can be succinctly described as follows:
|
| 190 |
+
|
| 191 |
+
$$
|
| 192 |
+
\tau \left( {v}_{i}\right) \leftarrow \tau \left( {v}_{i}\right) + {\Delta \tau } \tag{12}
|
| 193 |
+
$$
|
| 194 |
+
|
| 195 |
+
$$
|
| 196 |
+
\tau \left( {v}_{i}\right) \leftarrow \tau \left( {v}_{i}\right) \times \left( {1 - \rho }\right) \tag{13}
|
| 197 |
+
$$
|
| 198 |
+
|
| 199 |
+
where ${\Delta \tau }$ represents the increment added to the pheromone level upon a successful iteration, and $\rho$ is the evaporation rate that moderates the decrease in pheromone levels to balance sustained exploration and exploitation. This dynamic adjustment ensures that the search not only intensifies around proven successful parameters but also explores new potential areas effectively.
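Equations (12) and (13) amount to a deposit-and-evaporate update. A sketch under the assumption (ours; the paper does not fix the order) that evaporation is applied to every candidate before the deposit:

```python
def update_pheromones(pheromones, best_index, delta_tau, rho):
    """Evaporate all entries (Eq. 13), then reinforce the best one (Eq. 12)."""
    # evaporation applied to every candidate value
    out = [tau * (1.0 - rho) for tau in pheromones]
    # deposit on the value belonging to the best-reward configuration
    out[best_index] += delta_tau
    return out
```

Applied once per hyperparameter list after each iteration, this keeps proven values attractive while slowly forgetting stale ones.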
|
| 200 |
+
|
| 201 |
+
In the ACO-MATD3 algorithm, the target Q-value for UAV $i$ is calculated as:
|
| 202 |
+
|
| 203 |
+
$$
|
| 204 |
+
{y}_{i} = {r}_{i} + \gamma \mathop{\min }\limits_{{j = 1,2}}{Q}_{{w}_{i, j}^{\prime }}\left( {{x}^{\prime },{a}_{1}^{\prime },\ldots ,{a}_{N}^{\prime }}\right) \tag{14}
|
| 205 |
+
$$
|
| 206 |
+
|
| 207 |
+
where ${r}_{i}$ is the reward received by UAV $i$ , $\gamma$ is the discount factor, ${Q}_{{w}_{i, j}^{\prime }}$ is the $j$ -th target critic network of UAV $i$ , ${x}^{\prime }$ is the joint next state of all UAVs, and ${a}_{1}^{\prime },\ldots ,{a}_{N}^{\prime }$ are the actions of all UAVs at the next time step.
|
| 208 |
+
|
| 209 |
+
The loss function for updating the critic networks is:
|
| 210 |
+
|
| 211 |
+
$$
|
| 212 |
+
L\left( {w}_{i}\right) = {\mathbb{E}}_{\left( {x,{a}_{i}, r,{x}^{\prime }}\right) \sim D}\left\lbrack {\left( {y}_{i} - {Q}_{{w}_{i}}\left( x,{a}_{1},\ldots ,{a}_{N}\right) \right) }^{2}\right\rbrack \tag{15}
|
| 213 |
+
$$
|
| 214 |
+
|
| 215 |
+
where ${w}_{i}$ represents the parameters of the critic network for UAV $i$ , and $D$ is the experience replay buffer.
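The clipped double-Q target of Eq. (14) and the mean-squared loss of Eq. (15) can be sketched numerically (NumPy stands in for the actual critic networks; names are ours):

```python
import numpy as np

def td3_target(reward, gamma, q1_next, q2_next):
    """Clipped double-Q target y_i = r_i + gamma * min_j Q'_j (Eq. 14)."""
    return reward + gamma * np.minimum(q1_next, q2_next)

def critic_loss(y, q):
    """Mean squared TD error over a sampled mini-batch (Eq. 15)."""
    y, q = np.asarray(y, float), np.asarray(q, float)
    return float(np.mean((y - q) ** 2))
```

Taking the minimum of the two target critics counters the overestimation bias that a single critic would accumulate, which is the core TD3 idea carried over to the multi-agent setting.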
|
| 216 |
+
|
| 217 |
+
The policy update rule for the actor networks is given by:
|
| 218 |
+
|
| 219 |
+
$$
|
| 220 |
+
{\nabla }_{{\theta }_{i}}J\left( {\theta }_{i}\right) = {\mathbb{E}}_{x,{a}_{i} \sim D}\left\lbrack {\left. {\nabla }_{{\theta }_{i}}{\pi }_{{\theta }_{i}}\left( {s}_{i}\right) {\nabla }_{{a}_{i}}{Q}_{{w}_{i}}\left( x,{a}_{1},\ldots ,{a}_{N}\right) \right| }_{{a}_{i} = {\pi }_{{\theta }_{i}}\left( {s}_{i}\right) }\right\rbrack \tag{16}
|
| 221 |
+
$$
|
| 222 |
+
|
| 223 |
+
where ${\theta }_{i}$ represents the parameters of the actor network for UAV $i$ , ${s}_{i}$ is the state of the $i$ -th UAV, and ${\pi }_{{\theta }_{i}}\left( {s}_{i}\right)$ is the policy of UAV $i$ .
|
| 228 |
+
|
| 229 |
+
## IV. SIMULATION RESULTS AND DISCUSSION
|
| 230 |
+
|
| 231 |
+
## A. Parameter Setting
|
| 232 |
+
|
| 233 |
+
In this experiment, we build a $2\mathrm{\;{km}} \times 2\mathrm{\;{km}}$ urban area scenario with numerous buildings, each with a maximum height ${h}_{bd}$ of 90 meters. The presence of a LoS link is determined by examining the linear connection between the BSs and the UAVs, considering the distribution of buildings. There are seven BSs in this area, totaling $M = {21}$ sectors. The transmit power of each sector is $\bar{P} = {20}\mathrm{\;{dBm}}$ . The half-power beamwidths ${\phi }_{3dB}$ and ${\theta }_{3dB}$ are both ${65}^{ \circ }$ . The SINR interruption threshold is ${\gamma }_{th} = 1\mathrm{\;{dB}}$ , and the noise power is ${\sigma }^{2} = 5\mathrm{\;{dBm}}$ .
|
| 234 |
+
|
| 235 |
+
The hyperparameter search spaces for the ACO-MATD3 algorithm are: learning rate $= \{ {0.005},{0.01},{0.015}\}$ , discount factor $= \{ {0.93},{0.95},{0.97}\}$ , batch size $= \{ {512},{1024}\}$ . The remaining algorithm parameters and the parameters for the DRL algorithms are provided in Table 1.
|
| 236 |
+
|
| 237 |
+
TABLE I: DRL algorithm parameters setting
|
| 238 |
+
|
| 239 |
+
<table><tr><td>Definition</td><td>Value</td><td>Definition</td><td>Value</td></tr><tr><td>Max episodes</td><td>100000</td><td>Max step per episode</td><td>25</td></tr><tr><td>Replay buffer capacity</td><td>1000000</td><td>Batch size</td><td>1024</td></tr><tr><td>Learning rate</td><td>0.01</td><td>Gamma</td><td>0.95</td></tr><tr><td>R_coll</td><td>-2</td><td>R_goal</td><td>8</td></tr></table>
|
| 240 |
+
|
| 241 |
+
## B. Result Analysis
|
| 242 |
+
|
| 243 |
+
The experiment involves 3 UAVs, 3 dynamic targets, and 2 obstacles. To ensure fairness, all parameters were kept constant except for the ACO-MATD3 hyperparameter search space.
|
| 244 |
+
|
| 245 |
+

|
| 246 |
+
|
| 247 |
+
Fig. 3: Mean reward for different algorithms.
|
| 248 |
+
|
| 249 |
+
In Fig. 3, we compare the mean reward of the ACO-MATD3 algorithm with other algorithms. At the start of training, reward values drop significantly as the algorithms explore the environment to build awareness. It is clear from the figure that after reaching the converged state, the ACO-MATD3 algorithm achieves a higher mean reward than other algorithms. This highlights the effectiveness of the ACO-MATD3 algorithm, which can dynamically select optimal hyperparameters at different stages, enhancing its performance in complex environments with communication coverage challenges.
|
| 250 |
+
|
| 251 |
+

|
| 252 |
+
|
| 253 |
+
Fig. 4: Communication return for different algorithms.
|
| 254 |
+
|
| 255 |
+
The communication return for the different algorithms is shown in Fig. 4. The final convergence values of the ACO-MATD3 algorithm are higher than those of the other algorithms, indicating that the flight paths selected by the ACO-MATD3 algorithm for multi-UAV operations enjoy stronger communication coverage. This further verifies the effectiveness of the ACO-MATD3 algorithm. In contrast, DDPG shows poor convergence performance in communication return because the UAVs operate independently and cannot learn a common policy. This highlights the improvement brought by the CTDE framework for multi-UAV cooperation.
|
| 256 |
+
|
| 257 |
+

|
| 258 |
+
|
| 259 |
+
Fig. 5: Mean reward of each UAV in ACO-MATD3 algorithm.
|
| 260 |
+
|
| 261 |
+
Fig. 5 shows the mean rewards of the three UAVs using the ACO-MATD3 algorithm in this environment. Their convergence aligns with the overall mean reward convergence of the ACO-MATD3 algorithm, demonstrating the strength of the CTDE mechanism in coordinating the decisions of each UAV. This indicates that the ACO-MATD3 algorithm effectively optimizes both overall performance and individual UAV policies.
|
| 262 |
+
|
| 263 |
+
## V. CONCLUSION
|
| 264 |
+
|
| 265 |
+
In this study, we have presented the ACO-MATD3 algorithm to address multi-UAV pursuit of dynamic targets under communication coverage. The algorithm dynamically adjusts its hyperparameters at different training stages to enhance performance and stability. Experimental results show that ACO-MATD3 outperforms other algorithms in mean reward and communication return, demonstrating the significant gain in task efficiency achieved through dynamic hyperparameter adjustment. Future research will explore how to safely conduct multi-UAV pursuit missions in more complex environments, especially those with dynamic obstacles.
|
| 266 |
+
|
| 267 |
+
## REFERENCES
|
| 268 |
+
|
| 269 |
+
[1] M. F. F. Rahman, S. Fan, Y. Zhang, and L. Chen, "A comparative study on application of unmanned aerial vehicle systems in agriculture," Agriculture, vol. 11, no. 1, p. 22, 2021.
|
| 270 |
+
|
| 271 |
+
[2] R. Sharma and R. Arya, "UAV based long range environment monitoring system with industry 5.0 perspectives for smart city infrastructure," Computers & Industrial Engineering, vol. 168, p. 108066, 2022.
|
| 272 |
+
|
| 273 |
+
[3] Y. Zeng, X. Xu, S. Jin, and R. Zhang, "Simultaneous navigation and radio mapping for cellular-connected UAV with deep reinforcement learning," IEEE Transactions on Wireless Communications, vol. 20, no. 7, pp. 4205-4220, 2021.
|
| 274 |
+
|
| 275 |
+
[4] X. Zhou, S. Yan, J. Hu, J. Sun, J. Li, and F. Shu, "Joint optimization of a UAV's trajectory and transmit power for covert communications," IEEE Transactions on Signal Processing, vol. 67, no. 16, pp. 4276-4290, 2019.
|
| 276 |
+
|
| 277 |
+
[5] X. Liang, H. Wang, and H. Luo, "Collaborative pursuit-evasion strategy of UAV/UGV heterogeneous system in complex three-dimensional polygonal environment," Complexity, vol. 2020, no. 1, p. 7498740, 2020.
|
| 278 |
+
|
| 279 |
+
[6] K. Krishnamoorthy, D. Casbeer, and M. Pachter, "Minimum time UAV pursuit of a moving ground target using partial information," in 2015 International Conference on Unmanned Aircraft Systems (ICUAS). IEEE, 2015, pp. 204-208.
|
| 280 |
+
|
| 281 |
+
[7] A. Alexopoulos, T. Schmidt, and E. Badreddin, "Cooperative pursue in pursuit-evasion games with unmanned aerial vehicles," in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2015, pp. 4538-4543.
|
| 282 |
+
|
| 283 |
+
[8] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, "Continuous control with deep reinforcement learning," in 4th International Conference on Learning Representations (ICLR), 2016.
|
| 284 |
+
|
| 285 |
+
[9] S. Fujimoto, H. Hoof, and D. Meger, "Addressing function approximation error in actor-critic methods," in International conference on machine learning (ICML), 2018, pp. 1587-1596.
|
| 286 |
+
|
| 287 |
+
[10] R. Lowe, Y. I. Wu, A. Tamar, J. Harb, O. Pieter Abbeel, and I. Mordatch, "Multi-agent actor-critic for mixed cooperative-competitive environments," Advances in neural information processing systems, vol. 30, 2017.
|
| 288 |
+
|
| 289 |
+
[11] F. Zhang, J. Li, and Z. Li, "A TD3-based multi-agent deep reinforcement learning method in mixed cooperation-competition environment," Neurocomputing, vol. 411, pp. 206-215, 2020.
|
| 290 |
+
|
| 291 |
+
[12] B. Li, H. Zhang, P. He, G. Wang, K. Yue, and E. Neretin, "Hierarchical maneuver decision method based on PG-Option for UAV pursuit-evasion game," Drones, vol. 7, no. 7, p. 449, 2023.
|
| 292 |
+
|
| 293 |
+
[13] X. Fu, J. Zhu, Z. Wei, H. Wang, and S. Li, "A UAV pursuit-evasion strategy based on UAV and imitation learning," International Journal of Aerospace Engineering, vol. 2022, no. 1, p. 3139610, 2022.
|
| 294 |
+
|
| 295 |
+
[14] R. Zhang, Q. Zong, X. Zhang, L. Dou, and B. Tian, "Game of drones: Multi-UAV pursuit-evasion game with online motion planning by deep reinforcement learning," IEEE Transactions on Neural Networks and Learning Systems, vol. 34, no. 10, pp. 7900-7909, 2022.
|
| 296 |
+
|
| 297 |
+
[15] J. Cao, M. Ma, H. Li, R. Ma, Y. Sun, P. Yu, and L. Xiong, "A survey on security aspects for 3GPP 5G networks," IEEE communications surveys & tutorials, vol. 22, no. 1, pp. 170-195, 2019.
|
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/ED7EDryw3i/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,277 @@
|
| 1 |
+
§ DYNAMIC TARGET PURSUIT BY MULTI-UAV UNDER COMMUNICATION COVERAGE: ACO-MATD3 APPROACH
|
| 2 |
+
|
| 3 |
+
${1}^{\text{ st }}$ Zhuang Cao
|
| 4 |
+
|
| 5 |
+
School of Information and Communication Engineering Hainan University
|
| 6 |
+
|
| 7 |
+
Haikou, Hainan
|
| 8 |
+
|
| 9 |
+
hnucz@hainanu.edu.cn
|
| 10 |
+
|
| 11 |
+
${2}^{\text{ nd }}$ Di Wu*
|
| 12 |
+
|
| 13 |
+
School of Information and Communication Engineering
|
| 14 |
+
|
| 15 |
+
Hainan University
|
| 16 |
+
|
| 17 |
+
Haikou, Hainan
|
| 18 |
+
|
| 19 |
+
hainuicaplab@hainanu.edu.cn
|
| 20 |
+
|
| 21 |
+
Abstract-This study proposes a new approach for cooperative pursuit of dynamic targets under communication coverage involving multiple unmanned aerial vehicles (UAVs). This approach combines the ant colony optimization algorithm with the multi-agent twin delay deep deterministic policy gradient, called ACO-MATD3. The ACO-MATD3 algorithm dynamically adjusts hyperparameters based on varying stages and requirements, greatly enhancing the stability and performance of cooperative multi-UAV pursuit tasks, especially under strong communication coverage. Experimental results demonstrate that the ACO-MATD3 algorithm significantly outperforms other algorithms in terms of mean reward and communication return.
|
| 22 |
+
|
| 23 |
+
Index Terms-Multi-UAV, Pursuit, Communication coverage, Ant colony optimization algorithm, Multi-agent reinforcement learning
|
| 24 |
+
|
| 25 |
+
§ I. INTRODUCTION
|
| 26 |
+
|
| 27 |
+
In recent years, multi-unmanned aerial vehicles (UAVs) have found extensive applications in fields like agriculture [1], environmental monitoring [2], and communication [3], [4], due to their flexibility and ease of deployment. As technology progresses, UAVs are tasked with more complex challenges such as pursuing dynamic targets, where UAVs need to consistently pursue and approach a moving target in complex environments through strategic adjustments. This pursuit involves a strategic interaction between the UAVs and the targets, where effective decision-making is vital for success and showcases the UAV's intelligence. Therefore, developing effective pursuit strategies is crucial.
|
| 28 |
+
|
| 29 |
+
Significant research has been conducted on UAV pursuit using traditional methods. For instance, the study in [5] developed a cooperative pursuit-evasion strategy for UAVs in a complex 3D environment, utilizing a heterogeneous system to enhance spatial perception and decision-making. However, this approach encounters challenges related to scalability, computational demands, and robustness in dynamic environments. In [6], the problem of minimizing the time for a UAV to pursue a moving ground target was addressed by optimizing the pursuit strategy using sensor data. Additionally, a hierarchical game structure was proposed in [7] to enhance the cooperative pursuit-evasion capabilities of UAVs in dynamic environments. Despite these advancements, the high computational complexity of these methods and the necessity to predefine the UAVs' flight paths limit their applicability in unknown environments.
|
| 30 |
+
|
| 31 |
+
Fortunately, advancements in deep reinforcement learning (DRL) have introduced new methods for addressing UAV pursuit problems. Techniques such as the deep deterministic policy gradient (DDPG) [8] and twin delay deep deterministic policy gradient (TD3) [9] enable simultaneous learning of value and policy functions, thereby enhancing algorithm efficiency and stability. However, in multi-agent environments, interactions between agents can lead to policy non-convergence when DRL algorithms are applied directly. To address this issue, multi-agent reinforcement learning (MARL) algorithms, including the multi-agent deep deterministic policy gradient (MADDPG) [10] and multi-agent twin delay deep deterministic policy gradient (MATD3) [11], have been developed. The MATD3 is an improvement based on MADDPG. These algorithms improve stability and collaboration among agents by employing a centralized training and decentralized execution (CTDE) mechanism [10].
|
| 32 |
+
|
| 33 |
+
Based on these DRL methods mentioned above, several studies have attempted to utilize DRL to solve UAV pursuit tasks. An approach proposed for UAV pursuit-evasion games utilizes hierarchical maneuvering decision-making with the soft actor-critic algorithm [12] to enhance autonomy and strategic flexibility in complex environments. However, this method struggles with high-dimensional state spaces. Another study [13] proposed a UAV pursuit policy combining DDPG with imitation learning to improve sample exploration efficiency, resulting in better performance and faster convergence than the traditional DDPG method. A multi-UAV pursuit-evasion game was also explored in [14], utilizing online motion planning and DRL to enhance UAV interactions in complex environments. However, these studies still do not address the challenge of maintaining communication among UAVs while performing their tasks.
|
| 34 |
+
|
| 35 |
+
Based on the above related research, we propose an algorithm that combines MATD3 and the ant colony optimization (ACO) algorithm to address the multi-UAV cooperative pursuit problem under communication coverage, called ACO-MATD3. The algorithm can adaptively select the optimal hyperparameters at different stages of the training process. As a result, the multi-UAV system learns a policy that allows it to pursue dynamic targets in the airspace without prior knowledge, while maintaining strong communication coverage from base stations (BSs). The main contributions of this paper are as follows:
|
| 36 |
+
|
| 37 |
+
This work is partly supported by the "South China Sea Rising Star" Education Platform Foundation of Hainan Province (JYNHXX2023-17G), the Natural Science Foundation of Hainan Province (624MS036), and the Postgraduate Innovation Projects in Hainan Province (Qhys2023-290).
|
| 38 |
+
|
| 39 |
+
Corresponding author: Di Wu.
|
| 40 |
+
|
| 41 |
+
(1) In contrast to non-learning based approaches [5], [6], [7], the multi-UAV cooperative pursuit problem under communication coverage is formulated as a Markov game. Each UAV operates as an independent agent while cooperating with others to maximize cumulative rewards and optimize their policies.
|
| 42 |
+
|
| 43 |
+
(2) Differently from other DRL-based approaches [12], [13], [14], this study investigates the communication connectivity between multi-UAV and BSs during pursuit tasks, and considers the effect of noise in the environment on communication.
|
| 44 |
+
|
| 45 |
+
(3) Compared with the MATD3 [11] algorithm, the ACO-MATD3 algorithm proposed in this study can dynamically optimize the hyperparameters according to the training stage, reduce the impact of hyperparameters on performance, and improve training efficiency and effectiveness.
|
| 46 |
+
|
| 47 |
+
The paper is organized as follows: Section 2 provides the problem description and system modeling. Section 3 presents the ACO-MATD3 algorithm proposed in this paper. Section 4 analyses the results of the experiment. Section 5 concludes the paper.
|
| 48 |
+
|
| 49 |
+
§ II. PROBLEM DESCRIPTION AND SYSTEM MODELING
|
| 50 |
+
|
| 51 |
+
In this section, we describe the multi-UAV pursuit problem under communication coverage. Then the BS antenna model and the path loss model are introduced. Finally, we illustrate the communication coverage model used in this experiment.
|
| 52 |
+
|
| 53 |
+
§ A. PROBLEM DESCRIPTION
|
| 54 |
+
|
| 55 |
+
|
| 56 |
+
|
| 57 |
+
Fig. 1: Communication coverage strength map.
|
| 58 |
+
|
| 59 |
+
This experiment investigates the multi-UAV pursuit problem under communication coverage, consisting of multi-UAV, obstacles and dynamic targets, as shown in Fig. 1. Their initial positions are randomly generated. The BSs support UAV communication, with the blue shading in Fig. 1 indicating the strength of the communication coverage. During the pursuit of dynamic targets, each UAV must avoid collisions with obstacles and maintain strong communication coverage.
|
| 60 |
+
|
| 61 |
+
§ B. ANTENNA MODEL AND PATH LOSS MODEL
|
| 62 |
+
|
| 63 |
+
This experiment formulates the antenna model of the BSs according to the 3GPP [15] specification. Each BS has the same height ${h}_{BS}$ and is divided into three sectors, each equipped with a vertically placed uniform linear array of 8 elements.
|
| 64 |
+
|
| 65 |
+
The radiation pattern of each element is determined by combining its horizontal and vertical radiation patterns, defined as follows:
|
| 66 |
+
|
| 67 |
+
|
| 68 |
+
|
| 69 |
+
$$
|
| 70 |
+
{AH} = - \min \left\lbrack {{12}{\left( \frac{\phi }{{\phi }_{3dB}}\right) }^{2},{A}_{m}}\right\rbrack \tag{1}
|
| 71 |
+
$$
|
| 72 |
+
|
| 73 |
+
$$
|
| 74 |
+
{AV} = - \min \left\lbrack {{12}{\left( \frac{\theta - {90}}{{\theta }_{3dB}}\right) }^{2},{A}_{m}}\right\rbrack \tag{2}
|
| 75 |
+
$$
|
| 76 |
+
|
| 77 |
+
where $\phi$ is the azimuth angle indicating the angle of the antenna in the horizontal plane, and $\theta$ is the elevation angle indicating the angle of the antenna in the vertical plane. Both are in degrees, ${\phi }_{3dB}$ and ${\theta }_{3dB}$ are the half-power beamwidths, ${A}_{m}$ is the element gain threshold.
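The element pattern of Eqs. (1)-(3) can be sketched directly. The values ${A}_{m} = 30$ dB and ${G}_{\max } = 8$ dBi used as defaults below are the 3GPP-typical figures and are our assumption, since the paper does not state them:

```python
def element_gain_db(phi, theta, phi_3db=65.0, theta_3db=65.0,
                    a_m=30.0, g_max=8.0):
    """Antenna element gain in dB; phi and theta are in degrees."""
    ah = -min(12.0 * (phi / phi_3db) ** 2, a_m)                # Eq. (1)
    av = -min(12.0 * ((theta - 90.0) / theta_3db) ** 2, a_m)   # Eq. (2)
    a_ele = -min(-(ah + av), a_m)  # combined pattern, floored at -A_m
    return g_max + a_ele                                       # Eq. (3)
```

At boresight ($\phi = 0$ , $\theta = {90}^{ \circ }$ ) the pattern terms vanish and the element gain equals ${G}_{\max }$ ; at one half-power beamwidth off boresight the gain drops by 12 dB per Eq. (1).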
|
| 78 |
+
|
| 79 |
+
The total gain of the antenna elements is expressed in dB as:
|
| 80 |
+
|
| 81 |
+
$$
|
| 82 |
+
{G}_{{el}{e}_{dB}} = {G}_{max} + {A}_{ele} \tag{3}
|
| 83 |
+
$$
|
| 84 |
+
|
| 85 |
+
$$
|
| 86 |
+
= {G}_{\max } + \left\{ {-\min \left\lbrack {-\left( {{AH} + {AV}}\right) ,{A}_{m}}\right\rbrack }\right\}
|
| 87 |
+
$$
|
| 88 |
+
|
| 89 |
+
where ${A}_{ele}$ represents the power gain of the antenna element and ${G}_{\max }$ is the maximum directional gain of the antenna element. For ease of computation, we convert ${G}_{{el}{e}_{dB}}$ to the linear scale of ${G}_{ele}$ .
|
| 90 |
+
|
| 91 |
+
The combined gain of the antenna array is expressed in dB as:
|
| 92 |
+
|
| 93 |
+
$$
|
| 94 |
+
G = {10} \times {\log }_{10}{\left| {F}_{\text{ ele }} \times AF\right| }^{2} \tag{4}
|
| 95 |
+
$$
|
| 96 |
+
|
| 97 |
+
where ${F}_{ele}$ is the arithmetic square root of ${G}_{ele}$ , and ${AF}$ is the antenna array factor.
|
| 98 |
+
|
| 99 |
+
This experiment determines whether the communication link between the UAV and the BS sector is a line-of-sight (LoS) link or a non-line-of-sight (NLoS) link by assessing whether buildings in the environment obscure the communication link. The path loss of the LoS link from the UAV to sector $m$ is expressed in $\mathrm{{dB}}$ as:
|
| 100 |
+
|
| 101 |
+
$$
|
| 102 |
+
{h}_{m}^{\mathrm{{LoS}}}\left( t\right) = {28} + {22}{\log }_{10}{d}_{m}\left( t\right) + {20}{\log }_{10}{f}_{c} \tag{5}
|
| 103 |
+
$$
|
| 104 |
+
|
| 105 |
+
where ${d}_{m}\left( t\right)$ represents the distance between the UAV and sector $m$ , and ${f}_{c}$ denotes the carrier frequency.
|
| 106 |
+
|
| 107 |
+
The path loss of the NLoS link between the UAV and sector $m$ is given in $\mathrm{{dB}}$ as:
|
| 108 |
+
|
| 109 |
+
$$
|
| 110 |
+
{h}_{m}^{\mathrm{{NLoS}}}\left( t\right) = - {17.5} + \left( {{46} - 7{\log }_{10}h\left( t\right) }\right) {\log }_{10}{d}_{m}\left( t\right) \tag{6}
|
| 111 |
+
$$
|
| 112 |
+
|
| 113 |
+
$$
|
| 114 |
+
+ {20}{\log }_{10}\left( {{40\pi }{f}_{c}/3}\right)
|
| 115 |
+
$$
|
| 116 |
+
|
| 117 |
+
where $h\left( t\right)$ is the height of UAV at time $t$ .
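The two path-loss branches of Eqs. (5) and (6) can be sketched as follows, assuming the usual 3GPP units (distance and height in meters, carrier frequency in GHz), which the paper does not state explicitly:

```python
import math

def path_loss_los_db(d_m, f_c_ghz):
    """LoS path loss in dB (Eq. 5)."""
    return 28.0 + 22.0 * math.log10(d_m) + 20.0 * math.log10(f_c_ghz)

def path_loss_nlos_db(d_m, h_uav, f_c_ghz):
    """NLoS path loss in dB (Eq. 6); h_uav is the UAV height in meters."""
    return (-17.5 + (46.0 - 7.0 * math.log10(h_uav)) * math.log10(d_m)
            + 20.0 * math.log10(40.0 * math.pi * f_c_ghz / 3.0))
```

For any given distance the NLoS branch yields a noticeably larger loss, which is what makes building blockage matter for the UAV's coverage.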
|
| 118 |
+
|
| 119 |
+
In addition, the channel small-scale fading is Rician fading in the case of LoS and Rayleigh fading in the case of NLoS.
|
| 120 |
+
|
| 121 |
+
§ C. COMMUNICATION MODEL
|
| 122 |
+
|
| 123 |
+
The baseband equivalent channel between the UAV and the communication BS sector $m$ at time $t$ is denoted by ${H}_{m}\left( t\right)$ , where $1 \leq m \leq M$ , and $M$ represents the total number of communication BS sectors linked with the UAV throughout its flight. The baseband equivalent channel ${H}_{m}\left( t\right)$ is influenced by the BS antenna array gain $G$ , the path loss $\beta$ , and the small-scale fading $h$ . The magnitudes of ${H}_{m}\left( t\right)$ and $\beta$ are related to the position $q\left( t\right)$ of the UAV at time $t$ , while $h$ is a random variable. The signal power received by the UAV from the communication BS sector $m$ at time $t$ can be expressed as:
|
| 124 |
+
|
| 125 |
+
$$
|
| 126 |
+
{P}_{m}\left( t\right) = \bar{P}{\left| {H}_{m}\left( t\right) \right| }^{2} = \bar{P}{G}_{m}\left( {q\left( t\right) }\right) \beta \left( {q\left( t\right) }\right) h\left( t\right) \tag{7}
|
| 127 |
+
$$
|
| 128 |
+
|
| 129 |
+
where $\bar{P}$ represents the transmit power of the BS sector $m$ , which is assumed to remain constant. The path loss is calculated using the following equation:
|
| 130 |
+
|
| 131 |
+
$$
|
| 132 |
+
\beta \left( {q\left( t\right) }\right) = \left\{ \begin{array}{l} P{L}_{LoS},\text{ if LoS link } \\ P{L}_{NLoS},\text{ if NLoS link } \end{array}\right. \tag{8}
|
| 133 |
+
$$
|
| 134 |
+
|
| 135 |
+
where $P{L}_{LoS}$ and $P{L}_{NLoS}$ are the linear scales of ${h}_{m}^{\mathrm{{LoS}}}\left( t\right)$ and ${h}_{m}^{\mathrm{{NLoS}}}\left( t\right)$ , respectively.
|
| 136 |
+
|
| 137 |
+
In this experiment, the signal-to-interference-plus-noise ratio (SINR) is used as a crucial criterion for evaluating the communication coverage performance of UAVs. This criterion can be expressed as:
|
| 138 |
+
|
| 139 |
+
$$
|
| 140 |
+
{SIN}{R}_{t} = \frac{{P}_{m}\left( t\right) }{\mathop{\sum }\limits_{{n \neq m}}{P}_{n}\left( t\right) + {\sigma }^{2}} \tag{9}
|
| 141 |
+
$$
|
| 142 |
+
|
| 143 |
+
where $n$ indexes the BS sectors not associated with the UAV at time $t$ . The communication quality of the UAV is thus degraded not only by interference from all non-associated BS sectors but also by environmental noise.
|
| 144 |
+
|
| 145 |
+
To ensure communication coverage while the UAV is airborne, its SINR must not drop below a minimum threshold $\alpha$ ; that is, the UAV is considered outside the communication coverage of the BS when $\operatorname{SINR}\left( t\right) < \alpha$ . Each UAV has an independent SINR at time $t$ .
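A minimal numerical sketch of the SINR in Eq. (9) and the coverage test against the threshold $\alpha$; all power values below are hypothetical.

```python
import numpy as np

def sinr_db(p_serving, p_interferers, noise):
    """SINR per Eq. (9): serving-sector power over the sum of
    non-associated sector powers plus noise (inputs in linear watts)."""
    return 10 * np.log10(p_serving / (np.sum(p_interferers) + noise))

# Hypothetical linear powers: one serving sector, two interfering sectors.
p_m, p_others, sigma2 = 1e-9, [1e-10, 1e-10], 1e-10
val = sinr_db(p_m, p_others, sigma2)
alpha = 1.0                       # coverage threshold in dB (illustrative)
covered = val >= alpha            # UAV is covered only above the threshold
print(round(val, 2), covered)
```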
|
| 146 |
+
|
| 147 |
+
§ III. MULTI-UAV COOPERATIVE PURSUIT USING ACO-MATD3
|
| 148 |
+
|
| 149 |
+
In this section, we characterize the UAV's state space, action space, and reward function within a Markov game framework and detail our proposed ACO-MATD3 algorithm.
|
| 150 |
+
|
| 151 |
+
§ A. MARKOV GAME WITH MULTI-UAV
|
| 152 |
+
|
| 153 |
+
This subsection explores the framework of the Markov game as applied to multi-UAV systems. It details the state and action spaces for UAVs and defines the reward function guiding their interactions in a complex environment.
|
| 154 |
+
|
| 155 |
+
The state space for each UAV $i$ at time $t$ is defined as ${s}_{it} = \left( {{s}_{ut},{s}_{ot},{SIN}{R}_{t}}\right)$ , where ${s}_{ut} = \left( {{x}_{t},{y}_{t},{v}_{xt},{v}_{yt}}\right)$ is a combination of the position and the speed. Additionally, ${s}_{ot} =$ $\left( {{l}_{uu},{l}_{uo},{l}_{ut}}\right)$ represents the distance from the UAV to other UAVs, obstacles and dynamic targets, respectively. ${SIN}{R}_{t}$ denotes the SINR of the UAV at that moment.
|
| 156 |
+
|
| 157 |
+
The action space for each UAV is discrete. The action of UAV $i$ is defined as ${V}_{u} = \left( {{V}_{x},{V}_{y}}\right)$ , which denotes the velocity components along the $\mathrm{x}$ -axis and $\mathrm{y}$ -axis, respectively. The UAV also adjusts its velocity when a collision occurs.
|
| 158 |
+
|
| 159 |
+
The reward function for the UAVs in this experiment has three components. It encourages the UAV to quickly pursue the dynamic target by considering the distance between them, providing a reward ${R}_{\text{ goal }}$ upon successful pursuit. It also penalizes collisions to ensure safe flight and rewards higher SINR to promote flying in areas with better communication coverage. The reward function can be expressed as:
|
| 160 |
+
|
| 161 |
+
$$
|
| 162 |
+
r\left( {{s}_{t},{a}_{t}}\right) = {R}_{\text{ dist }} + {R}_{\text{ goal }} + {R}_{\text{ coll }} + {R}_{{\text{ SINR }}_{t}} \tag{10}
|
| 163 |
+
$$
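The paper does not give closed forms for the individual reward terms, so the following sketch assumes simple linear shaping for $R_{\text{dist}}$ and $R_{\text{SINR}}$; only $R_{\text{goal}} = 8$ and $R_{\text{coll}} = -2$ come from Table I, and the shaping coefficients are placeholders.

```python
def step_reward(dist_to_target, caught, collided, sinr_db,
                r_goal=8.0, r_coll=-2.0, k_dist=0.1, k_sinr=0.05):
    """Sketch of Eq. (10): distance shaping + goal bonus + collision
    penalty + SINR bonus. k_dist and k_sinr are assumed coefficients."""
    r = -k_dist * dist_to_target      # closer to the target -> larger reward
    r += k_sinr * sinr_db             # better coverage -> larger reward
    if caught:
        r += r_goal                   # R_goal on successful pursuit
    if collided:
        r += r_coll                   # R_coll on collision
    return r

print(step_reward(10.0, caught=True, collided=False, sinr_db=4.0))
```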
|
| 164 |
+
|
| 165 |
+
§ B. FUNDAMENTAL OF THE ACO-MATD3 APPROACH
|
| 166 |
+
|
| 167 |
+
< g r a p h i c s >
|
| 168 |
+
|
| 169 |
+
Fig. 2: Framework of ACO-MATD3 algorithm.
|
| 170 |
+
|
| 171 |
+
The ACO algorithm is an optimization algorithm that simulates the foraging behavior of ants. It directs the ant colony towards the optimal path in complex search spaces through pheromone accumulation and evaporation, combined with a probabilistic selection mechanism. This experiment combines the ACO algorithm with the MATD3 algorithm, aiming to dynamically choose the most appropriate learning rate $\alpha$ , discount factor $\gamma$ , and batch size $\mathcal{B}$ based on the current situation at different stages. This integration enhances the adaptability and robustness of the ACO-MATD3 algorithm. The framework of the algorithm is illustrated in Fig. 2.
|
| 172 |
+
|
| 173 |
+
We define a search space containing three hyperparameters: $\alpha ,\gamma$ , and $\mathcal{B}$ . Each hyperparameter has multiple candidate values, and the value ranges are given in detail in the next section. Additionally, we initialize a pheromone matrix.
|
| 174 |
+
|
| 175 |
+
In the initialization phase, we establish an initial colony of 100 ants. Each ant's hyperparameter configuration is derived by calculating selection probabilities based on the current values in the pheromone matrix. These probabilities then guide the random selection of hyperparameters from the corresponding spaces. The selection probability for each hyperparameter value is calculated as follows:
|
| 176 |
+
|
| 177 |
+
$$
|
| 178 |
+
p\left( {v}_{i}\right) = \frac{\tau \left( {v}_{i}\right) }{\mathop{\sum }\limits_{{k = 1}}^{n}\tau \left( {v}_{k}\right) } \tag{11}
|
| 179 |
+
$$
|
| 180 |
+
|
| 181 |
+
where $p\left( {v}_{i}\right)$ represents the probability of selecting the $i$ -th value, $\tau \left( {v}_{i}\right)$ denotes the pheromone level associated with the $i$ -th value, and $n$ is the total number of possible values for the hyperparameter. This approach ensures that the search space is thoroughly explored, enabling the algorithm to evaluate a wide array of potential solutions right from the start.
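The probabilistic selection in Eq. (11) is a standard roulette wheel over pheromone levels. A sketch using the paper's learning-rate candidates, with an assumed uniform initial pheromone:

```python
import numpy as np

def select_value(values, pheromone, rng):
    """Roulette-wheel selection per Eq. (11): p(v_i) = tau(v_i) / sum_k tau(v_k)."""
    p = np.asarray(pheromone, dtype=float)
    p = p / p.sum()                       # normalize pheromone into probabilities
    return rng.choice(len(values), p=p), p

rng = np.random.default_rng(42)
lrs = [0.005, 0.01, 0.015]                # learning-rate search space from Sec. IV-A
idx, probs = select_value(lrs, [1.0, 1.0, 1.0], rng)
print(lrs[idx], probs.tolist())           # uniform probabilities at initialization
```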
|
| 182 |
+
|
| 183 |
+
In the multi-UAV system, each UAV uses hyperparameters derived from the current ant's configuration to execute a pursuit task, and the resulting reward values are recorded. If the reward from a particular set of hyperparameters exceeds the highest reward recorded in previous iterations, that configuration is designated as the optimal set for the current phase.
|
| 184 |
+
|
| 185 |
+
After each iteration, the pheromone level is adjusted according to the optimal hyperparameter configuration determined during the evaluation process. During this update, the pheromone level of the chosen optimal configuration is increased to reinforce its selection in future iterations. Simultaneously, the pheromone levels of the other hyperparameter values are reduced according to the evaporation rate, preserving diversity in the search process and preventing premature convergence. This pheromone updating method can be succinctly described as follows:
|
| 186 |
+
|
| 187 |
+
$$
|
| 188 |
+
\tau \left( {v}_{i}\right) \leftarrow \tau \left( {v}_{i}\right) + {\Delta \tau } \tag{12}
|
| 189 |
+
$$
|
| 190 |
+
|
| 191 |
+
$$
|
| 192 |
+
\tau \left( {v}_{i}\right) \leftarrow \tau \left( {v}_{i}\right) \times \left( {1 - \rho }\right) \tag{13}
|
| 193 |
+
$$
|
| 194 |
+
|
| 195 |
+
where ${\Delta \tau }$ represents the increment added to the pheromone level upon a successful iteration, and $\rho$ is the evaporation rate that moderates the decrease in pheromone levels to maintain the balance between exploration and exploitation. This dynamic adjustment ensures that the search algorithm not only intensifies exploration around proven successful parameters but also explores new potential areas effectively.
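A sketch of the update in Eqs. (12)-(13): deposit pheromone on the chosen optimal value and evaporate the others. The increment $\Delta\tau$ and evaporation rate $\rho$ below are illustrative constants, not values reported in the paper.

```python
import numpy as np

def update_pheromone(tau, best_idx, delta=1.0, rho=0.1):
    """Eq. (12): reinforce the best value; Eq. (13): evaporate the rest."""
    tau = np.asarray(tau, dtype=float).copy()
    tau[best_idx] += delta                        # deposit on the best choice
    mask = np.arange(tau.size) != best_idx
    tau[mask] *= (1.0 - rho)                      # evaporation keeps exploration alive
    return tau

tau = update_pheromone([1.0, 1.0, 1.0], best_idx=1)
print(tau.tolist())    # the reinforced entry dominates after the update
```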
|
| 196 |
+
|
| 197 |
+
In the ACO-MATD3 algorithm, the target Q-value for UAV $i$ is calculated as:
|
| 198 |
+
|
| 199 |
+
$$
|
| 200 |
+
{y}_{i} = {r}_{i} + \gamma \mathop{\min }\limits_{{j = 1,2}}{Q}_{{w}_{i,j}^{\prime }}\left( {{x}^{\prime },{a}_{1}^{\prime },\ldots ,{a}_{N}^{\prime }}\right) \tag{14}
|
| 201 |
+
$$
|
| 202 |
+
|
| 203 |
+
where ${r}_{i}$ is the reward received by UAV $i$ , $\gamma$ is the discount factor, ${Q}_{{w}_{i,j}^{\prime }}$ is the $j$ -th target critic network of UAV $i$ , ${x}^{\prime }$ is the joint next state of all UAVs, and ${a}_{1}^{\prime },\ldots ,{a}_{N}^{\prime }$ are the actions of all UAVs at the next time step.
|
| 204 |
+
|
| 205 |
+
The loss function for updating the critic networks is:
|
| 206 |
+
|
| 207 |
+
$$
|
| 208 |
+
L\left( {w}_{i}\right) = {\mathbb{E}}_{\left( {x,{a}_{i},r,{x}^{\prime }}\right) \sim D}\left\lbrack {\left( {y}_{i} - {Q}_{{w}_{i}}\left( x,{a}_{1},\ldots ,{a}_{N}\right) \right) }^{2}\right\rbrack \tag{15}
|
| 209 |
+
$$
|
| 210 |
+
|
| 211 |
+
where ${w}_{i}$ represents the parameters of the critic network for UAV $i$ , and $D$ is the experience replay buffer.
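Eqs. (14)-(15) together form the clipped double-Q critic update of TD3; a framework-free sketch on a toy minibatch (all values arbitrary):

```python
import numpy as np

def td3_target(r, q1_next, q2_next, gamma=0.95):
    """Eq. (14): clipped double-Q target y_i = r_i + gamma * min(Q'_1, Q'_2)."""
    return r + gamma * np.minimum(q1_next, q2_next)

def critic_loss(y, q_pred):
    """Eq. (15): mean squared TD error over the sampled minibatch."""
    return np.mean((y - q_pred) ** 2)

# Toy minibatch: rewards and the two target critics' next-state values.
r = np.array([1.0, 0.0])
q1, q2 = np.array([2.0, 1.0]), np.array([1.5, 1.2])
y = td3_target(r, q1, q2)           # min picks 1.5 and 1.0 before discounting
print(y.tolist(), critic_loss(y, np.array([2.0, 1.0])))
```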
|
| 212 |
+
|
| 213 |
+
The policy update rule for the actor networks is given by:
|
| 214 |
+
|
| 215 |
+
$$
{\nabla }_{{\theta }_{i}}J\left( {\theta }_{i}\right) = {\mathbb{E}}_{x,{a}_{i} \sim D}\left\lbrack {\left. {\nabla }_{{\theta }_{i}}{\pi }_{{\theta }_{i}}\left( {s}_{i}\right) {\nabla }_{{a}_{i}}{Q}_{{w}_{i}}\left( x,{a}_{1},\ldots ,{a}_{N}\right) \right| }_{{a}_{i} = {\pi }_{{\theta }_{i}}\left( {s}_{i}\right) }\right\rbrack \tag{16}
$$

where ${\theta }_{i}$ represents the parameters of the actor network for UAV $i$ , ${s}_{i}$ is the state of the $i$ -th UAV, and ${\pi }_{{\theta }_{i}}\left( {s}_{i}\right)$ is the policy of UAV $i$ .
|
| 224 |
+
|
| 225 |
+
§ IV. SIMULATION RESULTS AND DISCUSSION
|
| 226 |
+
|
| 227 |
+
§ A. PARAMETER SETTING
|
| 228 |
+
|
| 229 |
+
In this experiment, we build a $2\mathrm{\;{km}} \times 2\mathrm{\;{km}}$ urban area scenario with numerous buildings, each with a maximum height ${h}_{bd}$ of 90 meters. The presence of a LoS link is determined by examining the linear connection between the BSs and the UAVs, considering the distribution of buildings. There are seven BSs in this area, totaling $M = {21}$ sectors. The transmit power of each sector is $\bar{P} = {20}\mathrm{\;{dBm}}$ . The half-power beamwidths ${\phi }_{3dB}$ and ${\theta }_{3dB}$ are both ${65}^{ \circ }$ . The SINR interruption threshold is ${\gamma }_{th} = 1\mathrm{\;{dB}}$ . The noise power is ${\sigma }^{2} = 5\mathrm{\;{dBm}}$ .
|
| 230 |
+
|
| 231 |
+
The hyperparameter search spaces for the ACO-MATD3 algorithm are: learning rate $= \{ {0.005},{0.01},{0.015}\}$ , discount factor $= \{ {0.93},{0.95},{0.97}\}$ , batch size $= \{ {512},{1024}\}$ . The remaining algorithm parameters and the parameters for the DRL algorithms are provided in Table 1.
|
| 232 |
+
|
| 233 |
+
TABLE I: DRL algorithm parameter settings
|
| 234 |
+
|
| 235 |
+
| Definition | Value | Definition | Value |
| --- | --- | --- | --- |
| Max episodes | 100000 | Max step per episode | 25 |
| Replay buffer capacity | 1000000 | Batch size | 1024 |
| Learning rate | 0.01 | Gamma | 0.95 |
| R_coll | -2 | R_goal | 8 |
|
| 252 |
+
|
| 253 |
+
§ B. RESULT ANALYSIS
|
| 254 |
+
|
| 255 |
+
The experiment involves 3 UAVs, 3 dynamic targets, and 2 obstacles. To ensure fairness, all parameters were kept constant except for the ACO-MATD3 hyperparameter search space.
|
| 256 |
+
|
| 257 |
+
< g r a p h i c s >
|
| 258 |
+
|
| 259 |
+
Fig. 3: Mean reward for different algorithms.
|
| 260 |
+
|
| 261 |
+
In Fig. 3, we compare the mean reward of the ACO-MATD3 algorithm with other algorithms. At the start of training, reward values drop significantly as the algorithms explore the environment to build awareness. It is clear from the figure that after reaching the converged state, the ACO-MATD3 algorithm achieves a higher mean reward than other algorithms. This highlights the effectiveness of the ACO-MATD3 algorithm, which can dynamically select optimal hyperparameters at different stages, enhancing its performance in complex environments with communication coverage challenges.
|
| 262 |
+
|
| 263 |
+
< g r a p h i c s >
|
| 264 |
+
|
| 265 |
+
Fig. 4: Communication return for different algorithms.
|
| 266 |
+
|
| 267 |
+
The communication returns for several algorithms are shown in Fig. 4. The final convergence value of the ACO-MATD3 algorithm is higher than those of the other algorithms, indicating that the flight paths selected by the ACO-MATD3 algorithm for multi-UAV operations have stronger communication coverage. This further verifies the effectiveness of the ACO-MATD3 algorithm. In contrast, DDPG shows poor convergence performance in communication return because its UAVs operate independently and cannot learn a common policy. This highlights the improvement brought by the CTDE framework for multi-UAV cooperation.
|
| 268 |
+
|
| 269 |
+
< g r a p h i c s >
|
| 270 |
+
|
| 271 |
+
Fig. 5: Mean reward of each UAV in ACO-MATD3 algorithm.
|
| 272 |
+
|
| 273 |
+
Fig. 5 demonstrates the mean rewards of the three UAVs using the ACO-MATD3 algorithm in this environment. The convergence state aligns with the overall mean reward convergence of the ACO-MATD3 algorithm, demonstrating the superiority of this algorithm with the CTDE mechanism in coordinating the decisions of each UAV. This indicates that the ACO-MATD3 algorithm effectively optimizes both overall performance and individual UAV policies.
|
| 274 |
+
|
| 275 |
+
§ V. CONCLUSION
|
| 276 |
+
|
| 277 |
+
In this study, we have presented the ACO-MATD3 algorithm to address multi-UAV pursuit of dynamic targets under communication coverage. The algorithm dynamically adjusts hyperparameters at different stages to enhance performance and stability. Experimental results show that ACO-MATD3 outperforms other algorithms in mean reward and communication return, demonstrating the significant gain in task efficiency achieved by dynamically adjusting hyperparameters. Future research will explore how to safely conduct multi-UAV pursuit missions in more complex environments, especially those with dynamic obstacles.
|
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/FE4XKb4tcU/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,303 @@
| 1 |
+
# Research on the classification of ship encounter scenarios based on CAE-LSTM
|
| 2 |
+
|
| 3 |
+
Taiyu Chai
|
| 4 |
+
|
| 5 |
+
School of Navigation
|
| 6 |
+
|
| 7 |
+
Wuhan University of Technology
|
| 8 |
+
|
| 9 |
+
Wuhan, China
|
| 10 |
+
|
| 11 |
+
282614@whut.edu.cn
|
| 12 |
+
|
| 13 |
+
Zhitao Yuan*
|
| 14 |
+
|
| 15 |
+
School of Navigation
|
| 16 |
+
|
| 17 |
+
Wuhan University of Technology Wuhan, China
|
| 18 |
+
|
| 19 |
+
ztyuan@whut.edu.cn
|
| 20 |
+
|
| 21 |
+
Weiqiang Wang
|
| 22 |
+
|
| 23 |
+
School of Navigation
|
| 24 |
+
|
| 25 |
+
Wuhan University of Technology Wuhan, China
|
| 26 |
+
|
| 27 |
+
weiqiangwang@whut.edu.cn
|
| 28 |
+
|
| 29 |
+
Shengjie Yang
|
| 30 |
+
|
| 31 |
+
School of Navigation
|
| 32 |
+
|
| 33 |
+
Wuhan University of Technology Wuhan, China
|
| 34 |
+
|
| 35 |
+
yangshengjie@whut.edu.cn
|
| 36 |
+
|
| 37 |
+
*Abstract* - To tackle the challenge of recognizing similar ship encounter scenarios under multi-ship interference coupling and dynamic evolution, this paper proposes a classification method that combines a Convolutional Auto-Encoder (CAE) and a Long Short-Term Memory (LSTM) recurrent neural network model. First, to extract many genuine ship encounter scenarios from historical AIS data for further categorization, a method for extracting ship encounter scenarios that takes spatiotemporal proximity restrictions into account is devised. Then, by setting a time window and rasterizing the scenarios, a CAE-based model is constructed to characterize the spatial interference of ships in the scenarios. Further, an LSTM network is used to learn temporal evolution features, achieving a low-dimensional spatiotemporal vector representation of ship encounter scenarios. Finally, hierarchical clustering is applied to classify different ship encounter scenarios based on these low-dimensional spatiotemporal vectors. The proposed method is validated through extensive experiments using data from Ningbo-Zhoushan Port, and the results show that it can effectively extract real ship encounter scenarios and accurately identify similar scenarios. This research provides robust support for a deep understanding of ship encounter scenarios and the mining of similar ship behavior patterns.
|
| 38 |
+
|
| 39 |
+
Keywords-ship encounter scenarios, scenarios classification, CAE, LSTM
|
| 40 |
+
|
| 41 |
+
## I. INTRODUCTION
|
| 42 |
+
|
| 43 |
+
In recent years, the continuous growth in shipping volume has significantly increased maritime traffic density, leading to a rise in ship collision accidents [1]. Research shows that these mishaps are mostly caused by human factors [2]. To mitigate collision incidents caused by human error, researchers have developed numerous navigation collision avoidance algorithms to enhance maritime safety [3]. Historical ship encounter scenarios contain rich avoidance processes and strategies. Extracting these scenarios and analyzing collision avoidance behavior patterns in similar situations allows this implicit knowledge to be integrated into the design of collision avoidance algorithms. This approach enhances the practicality of these algorithms and improves avoidance safety in similar scenarios. Therefore, extracting real ship encounter scenarios and effectively classifying similar scenarios hold significant potential for advancing collision avoidance algorithm design.
|
| 44 |
+
|
| 45 |
+
Ship encounter scenarios essentially involve interactions between multiple vessels, which can be explained through their trajectories. Because the Automatic Identification System (AIS) is widely used on ships, scholars can collect large quantities of high-quality vessel trajectory data at low cost, providing a rich and reliable data source for extracting ship encounter scenarios. Several academics have carried out related research on encounter scenario extraction using AIS data. Using AIS data, Ma Jie et al. [4,5] successfully extracted ship encounter scenarios by analyzing the spatiotemporal correlations during ship interactions. Similarly, based on the spatiotemporal proximity relationships between ships, Wang et al. [6] identified ship encounter events from AIS data, evaluated the significance of each event, and sampled the data to create test scenarios for collision avoidance algorithms.
|
| 46 |
+
|
| 47 |
+
Ship encounter scenarios are typical spatiotemporal sequence data, often exhibiting significant temporal evolution characteristics and complex multi-vessel interaction couplings. This complexity makes classifying ship encounter scenarios challenging. Current research mainly focuses on clustering analysis of individual ship trajectories. For instance, to identify frequent paths and discover abnormal trajectories, Li et al. [7] suggested a multi-step clustering methodology that combines principal component analysis, dynamic time warping, and an enhanced trajectory clustering center method. Ship itineraries were inferred from AIS data by Zhang et al. [8] using data-driven techniques such as ant colony optimization and density-based spatial clustering of applications with noise (DBSCAN). Zhang et al. [9] classified ship trajectories using the K-Means and DBSCAN clustering algorithms, then identified potential collision scenarios by detecting illegal evasive maneuvers through relative bearing angles and quantified the collision risk index when evasive actions were taken. However, these methods primarily rely on similarity calculations over individual ship trajectories. Although they perform well in trajectory similarity analysis and classification, encounter scenarios involve the interactions of multiple ships, featuring significant temporal evolution characteristics and complex multi-ship interference effects. As a result, these methods have limitations in representing and measuring the spatio-temporal interference features of encounter scenarios and face challenges when directly applied to encounter scenario classification.
|
| 48 |
+
|
| 49 |
+
---
|
| 50 |
+
|
| 51 |
+
This paper is supported by the National Natural Science Foundation of China(NSFC) under Grant NO.52031009. (Corresponding author: Zhitao Yuan).
|
| 52 |
+
|
| 53 |
+
---
|
| 54 |
+
|
| 55 |
+
In recent years, deep learning has shown great potential in handling complex spatio-temporal data, and some studies have begun exploring its potential in trajectory similarity computation. These works demonstrate how deep learning techniques can more effectively capture the features of ship trajectories. Compared to traditional methods, deep learning models can automatically learn useful features from large amounts of data without relying on manual feature extraction, offering certain advantages [10]. Liang et al. [11] proposed an unsupervised learning method based on a convolutional autoencoder (CAE), which maps trajectories into two-dimensional matrices to generate trajectory images and automatically extracts low-dimensional features via the CAE to compute similarity. Chen et al. [12] introduced a method based on convolutional neural networks (CNN) to identify movement patterns in emerging trajectories. In this approach, a mobility-based trajectory structure is introduced as input to the identification model, and evaluations using real maritime trajectory datasets show the superiority of this method. Kontopoulos et al. [13] proposed a novel method that integrates research in computer vision and trajectory classification, automatically extracting meaningful information from trajectory data and identifying movement patterns without the need for expert input.
|
| 56 |
+
|
| 57 |
+
Overall, unsupervised and semi-supervised methods based on deep learning are gradually gaining attention in the field of maritime situational awareness. These methods share a common feature: they reduce reliance on manual intervention through automatic feature extraction, demonstrating strong adaptability, especially when handling large amounts of unlabeled data. This motivates developing an unsupervised learning method that represents the complex temporal evolution characteristics of ship encounter scenarios and thereby enables effective classification. Based on the above analysis, this study proposes a ship encounter scenario classification method that combines a Convolutional Autoencoder (CAE) with a Long Short-Term Memory (LSTM) network. This approach comprehensively considers both the spatial interference coupling features among multiple ships and the temporal evolution patterns within the encounter scenario, enabling effective classification of ship encounter scenarios.
|
| 58 |
+
|
| 59 |
+
## II. METHODOLOGY
|
| 60 |
+
|
| 61 |
+
This paper focuses on two main tasks: the extraction of real ship encounter scenarios based on AIS data, and the classification of these scenarios using a combination of CAE and LSTM models. As seen in Fig. 1, the research framework consists of three stages: AIS data preprocessing, ship encounter scenario extraction, and ship encounter scenario clustering, carried out in the following steps.
|
| 62 |
+
|
| 63 |
+
Step 1: Data Preprocessing. Original AIS data is preprocessed to retain key attributes such as timestamp, Maritime Mobile Service Identity (MMSI), ship length, longitude, latitude, speed over ground (SOG), and course over ground (COG). These attributes are essential for calculating the subsequent spatiotemporal relationships of the vessels.
|
| 64 |
+
|
| 65 |
+
Step 2: Encounter Scenario Extraction. Based on the spatiotemporal proximity analysis of ships, ship encounter scenarios are extracted from historical AIS data. This extraction provides numerous encounter scenarios that reflect the real navigational behaviors of ships for subsequent classification.
|
| 66 |
+
|
| 67 |
+
Step 3: Time Slicing and Gridding. Time slicing and gridding are applied to the scenarios to characterize their spatiotemporal attributes.
|
| 68 |
+
|
| 69 |
+
Step 4: Feature Representation. CAE and LSTM represent the spatial and temporal features of the encounter scenarios with feature vectors.
|
| 70 |
+
|
| 71 |
+
Step 5: Clustering of Encounter Scenarios. Hierarchical clustering is applied to the feature vectors of all scenarios. To achieve the classification of encounter scenarios, the ideal number of clusters is found using the Silhouette Coefficient (SC) index.
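The SC index used in Step 5 to pick the number of clusters can be computed directly from the scenario feature vectors. A self-contained sketch on toy 2-D vectors (real scenario vectors would be the CAE-LSTM embeddings):

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette coefficient: s = (b - a) / max(a, b), where a is the
    mean intra-cluster distance of a point and b its mean distance to the
    nearest other cluster. Higher values indicate a better clustering."""
    X, labels = np.asarray(X, float), np.asarray(labels)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # pairwise distances
    scores = []
    for i, li in enumerate(labels):
        same = (labels == li) & (np.arange(len(X)) != i)
        a = D[i, same].mean() if same.any() else 0.0
        b = min(D[i, labels == lj].mean() for lj in set(labels) if lj != li)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Two well-separated toy "scenario vectors" per cluster -> score near 1.
X = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
print(round(silhouette(X, [0, 0, 1, 1]), 3))
```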
|
| 72 |
+
|
| 73 |
+
In summary, building on the most advanced research findings, our CAE-based ship encounter scenario classification method offers the following innovations. We propose generating informative trajectory images by remapping the ship trajectories involved in encounter scenarios into two-dimensional matrices:
|
| 74 |
+
|
| 75 |
+
1. The similarity between different encounter scenarios is measured by assessing the structural similarity between the corresponding information trajectory images.
|
| 76 |
+
|
| 77 |
+
2. A convolutional autoencoder neural network is proposed to learn the low-dimensional representation of these images in an unsupervised manner. The learned representation can effectively capture the characteristics of ship encounter scenarios.
|
| 78 |
+
|
| 79 |
+
< g r a p h i c s >
|
| 80 |
+
|
| 81 |
+
Fig. 1. Overview of the proposed approach.
|
| 82 |
+
|
| 83 |
+
## A. Data Preprocessing
|
| 84 |
+
|
| 85 |
+
The quality of AIS data significantly impacts the accuracy of the extracted encounter scenarios. Due to various factors, AIS data may contain inconsistencies with the actual navigational state of the ships. Therefore, preprocessing is necessary before extracting encounter scenarios [14]. The main preprocessing operations include noise filtering, anomaly removal, data interpolation, and matching of static data information [15].
|
| 86 |
+
|
| 87 |
+
## B. AIS Data-Based Encounter Scenario Extraction
|
| 88 |
+
|
| 89 |
+
Spatio-temporal relationships between ships are fundamental for extracting encounter scenarios. In this work, ship encounter scenarios are described as a series of ship pairs that, within a specific time sequence, satisfy specific spatiotemporal proximity conditions. Fig. 2 shows a graphical description of ship encounter scenarios. The timeline is shown on the x-axis, and the identification numbers of the ships involved in the encounter scenarios are shown on the y-axis. The lines with arrows represent the navigation period of the Own Ship (OS) in the study area, while the lines with arrows in front of each Target Ship (TS) indicate the periods when the TS meets the preset spatiotemporal proximity conditions with the OS.
|
| 90 |
+
|
| 91 |
+
< g r a p h i c s >
|
| 92 |
+
|
| 93 |
+
Fig. 2. Graphical description of ship encounter scenarios.
|
| 94 |
+
|
| 95 |
+
To precisely define the spatiotemporal proximity relationships between ships at each time step, an additional analysis of the evolution of the Distance at the Closest Point of Approach (DCPA) and the Time to the Closest Point of Approach (TCPA) is necessary [16]. By analyzing the preprocessed AIS data, the spatiotemporal relationships between ships can be extracted, allowing ship encounters to be identified. Specifically, when two ships remain in the study area together for a period exceeding the set time threshold, the minimum distance between them is calculated. If this closest passing distance is less than the distance criterion, the evolution of their relative distance, DCPA, and TCPA is analyzed further. A ship pair is deemed to meet the spatiotemporal proximity constraints that may result in a collision if their relative distance is decreasing and stays within the early-warning distance, and both the DCPA and TCPA values stay below specific thresholds before the ships reach the closest passing distance. Under such circumstances, the trajectory segments of the two ships that satisfy these constraints are extracted and the relevant data is saved. This data includes the beginning and ending times of the extracted segments, as well as the static and dynamic information of each ship (such as MMSI, length, width, type, and so on) at each timestamp over this period. Fig. 3 provides a graphical illustration of DCPA and TCPA, with the calculation formulas provided below.
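The screening rule above can be sketched as a predicate over a sampled ship pair. Every threshold below is an illustrative assumption, since the paper does not report its concrete settings here:

```python
def proximity_constraint(rel_dist, dcpa, tcpa,
                         d_warn=1852.0, dcpa_max=926.0, tcpa_max=600.0):
    """Sketch of the Sec. II-B screening rule: a ship pair qualifies when
    its relative distance is decreasing and within the warning distance,
    and DCPA/TCPA stay below their thresholds before the closest approach.
    d_warn, dcpa_max (meters) and tcpa_max (seconds) are assumed values."""
    decreasing = all(b <= a for a, b in zip(rel_dist, rel_dist[1:]))
    within = max(rel_dist) <= d_warn
    safe_cpa = all(d <= dcpa_max and 0 <= t <= tcpa_max
                   for d, t in zip(dcpa, tcpa))
    return decreasing and within and safe_cpa

# A hypothetical approaching pair sampled at three timestamps.
print(proximity_constraint([1500.0, 1200.0, 900.0],
                           [500.0, 450.0, 400.0],
                           [300.0, 250.0, 200.0]))
```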
|
| 96 |
+
|
| 97 |
+
$$
|
| 98 |
+
{DCP}{A}_{t} = {D}_{ijt} \cdot \sqrt{1 - {\cos }^{2}\left( {\theta }_{ijt}\right) } \tag{1}
|
| 99 |
+
$$
|
| 100 |
+
|
| 101 |
+
$$
|
| 102 |
+
{TCP}{A}_{t} = \frac{-{D}_{ijt} \cdot \cos \left( {\theta }_{ijt}\right) }{{v}_{ijt}} \tag{2}
|
| 103 |
+
$$
|
| 104 |
+
|
| 105 |
+
where ${D}_{ijt}$ represents the distance between ship $i$ and ship $j$ at time $t$ , ${v}_{ijt}$ represents the relative speed between ship $i$ and ship $j$ at time $t$ , and $\cos \left( {\theta }_{ijt}\right)$ is the cosine of the angle between the relative velocity and the line joining the two ships.
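Eqs. (1)-(2) can be checked on a simple head-on geometry, where the closest point of approach lies on the ships' connecting line so DCPA vanishes:

```python
import math

def cpa(d_ij, theta_ij, v_rel):
    """Eqs. (1)-(2): DCPA = D * sqrt(1 - cos^2(theta)), TCPA = -D * cos(theta) / v,
    with theta the angle between the relative velocity and the connecting line."""
    dcpa = d_ij * math.sqrt(1.0 - math.cos(theta_ij) ** 2)
    tcpa = -d_ij * math.cos(theta_ij) / v_rel
    return dcpa, tcpa

# Head-on geometry: theta = pi, so DCPA = 0 and TCPA = D / v.
dcpa, tcpa = cpa(d_ij=1000.0, theta_ij=math.pi, v_rel=5.0)
print(round(dcpa, 6), round(tcpa, 2))
```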
|
| 106 |
+
|
| 107 |
+
< g r a p h i c s >
|
| 108 |
+
|
| 109 |
+
Fig. 3. DCPA and TCPA interpretation in graphics.
|
| 110 |
+
|
| 111 |
+
## C. Encounter Scenario Time Slicing
|
| 112 |
+
|
| 113 |
+
Ship encounter scenarios, as spatiotemporal sequence data, involve mutual interference between ships that varies over time. Therefore, classifying encounter scenarios requires attention to both the spatial interference characteristics and the temporal evolution patterns of the ships. Time-slicing the scenarios and gridding each slice is the first step in efficiently extracting these spatial and temporal features: it maps the temporal evolution of the spatial interference characteristics into multi-time-window grids. Compared with raw trajectory image pixels, raster images contain richer information and are more conducive for the CAE to characterize the interaction of ships in the encounter scenario.
Fig. 4. Raster map generation and scene time slicing.
Thus, this paper projects the original ship trajectory into a two-dimensional matrix to generate a trajectory raster image following the time sequence of the encounter scenario, preserving its original spatiotemporal characteristics. To balance the information richness of the encounter scene slices against the total number of slices, the time window duration is set to 3 minutes and the time window step to 1 minute. The procedure is depicted in Fig. 4.
## D. Feature Representation of Encounter Scenarios
To fully represent the spatial interaction features between ships from multi-time-window raster images, learn the contextual relationships between feature sequences, and uncover the temporal evolution patterns of the scenarios, we employ a multi-layer CAE neural network combined with an LSTM for unsupervised learning and feature representation. The CAE, with its convolutional and pooling layers, learns to identify local spatial interactions and patterns within each raster image [17]. Once the spatial features are obtained, they are fed into the LSTM model, which captures the temporal evolution of these features over multiple time windows. Combining the CAE and LSTM yields a comprehensive representation of both the spatial interactions between ships and their dynamic changes over time.
This study employs a CAE-based autoencoder architecture. Compared with traditional autoencoders, the CAE incorporates convolutional and pooling layers, allowing better extraction of the local features related to ship spatial interference in the scene grid maps. As shown in Fig. 5, the CAE model consists of three convolutional layers, three max-pooling layers, and fully connected layers. The encoder transforms input scene grid maps into low-dimensional feature vectors, thereby representing the spatial features of the encounter scenarios. The decoder uses ReLU as the activation function to reconstruct the scene grid maps from the low-dimensional feature vectors. Additionally, to enhance the feature representation capability of the CAE, this study adopts a loss function sensitive to image structure, the structural similarity (SSIM) index, to ensure the accuracy of the extracted features. To further elucidate the working mechanism of the CAE model, the operations of the convolutional and fully connected layers are described as follows:

$$
x_{k}^{l} = A_{E}\left( f_{k}^{l} \odot x_{k}^{\left( l-1\right)} + b_{k}^{l}\right) \tag{3}
$$

$$
Y = \mathcal{H}\left( x\right) = wx + \beta \tag{4}
$$

where $l$ is the layer number, $\odot$ denotes the convolution operation, $f_{k}^{l}$ is the convolution kernel, $x_{k}^{l-1}$ is the feature map, $b_{k}^{l}$ is the bias term, and $Y$ is the feature vector with final output dimension $L$. Training minimizes the loss function so that the reconstruction $\widetilde{x}$ produced by the decoder has minimal error relative to the original input $x$. The SSIM-based loss function is defined as:

$$
\mathcal{F}\left( x,\widetilde{x}\right) = 1 - \frac{1}{M}\mathop{\sum }\limits_{m = 1}^{M}\operatorname{SSIM}\left( x_{m},\widetilde{x}_{m}\right) \tag{5}
$$

$$
\operatorname{SSIM}\left( x_{m},\widetilde{x}_{m}\right) = \frac{\left( 2\mu_{x_{m}}\mu_{\widetilde{x}_{m}} + c_{1}\right)\left( 2\sigma_{x_{m}\widetilde{x}_{m}} + c_{2}\right)}{\left( \mu_{x_{m}}^{2} + \mu_{\widetilde{x}_{m}}^{2} + c_{1}\right)\left( \sigma_{x_{m}}^{2} + \sigma_{\widetilde{x}_{m}}^{2} + c_{2}\right)} \tag{6}
$$

Fig. 5. The architecture of convolutional autoencoder.
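The SSIM loss of Eqs. (5) and (6) can be sketched with global (whole-image) statistics, a simplification of the usual windowed SSIM; the constants c1 and c2 below are the conventional values for images scaled to [0, 1] and are an assumption, since the paper does not specify them.

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-scale SSIM per Eq. (6), using image-wide statistics
    (a simplification of the usual windowed variant)."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()   # sigma_{x_m, xtilde_m}
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def ssim_loss(batch_x, batch_rec):
    """Eq. (5): one minus the batch-mean SSIM of inputs vs reconstructions."""
    M = len(batch_x)
    return 1.0 - sum(ssim_global(x, y) for x, y in zip(batch_x, batch_rec)) / M

rng = np.random.default_rng(0)
imgs = [rng.random((32, 32)) for _ in range(4)]
loss = ssim_loss(imgs, imgs)   # perfect reconstruction -> loss near 0
print(loss)
```

Because SSIM compares local structure rather than per-pixel differences, this loss rewards reconstructions that preserve the layout of ship trajectories in the grid images, which is the motivation the text gives for choosing it over plain MSE.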
LSTM is widely used to model persistent features in time series data and can effectively learn dependencies between time steps [18]. It is therefore chosen to represent the temporal feature evolution. The LSTM consists of three gating units: the forget gate, the input gate, and the output gate, as shown in Fig. 6. The forget gate controls which information is retained or forgotten, as described by Equation (7):

$$
f_{t} = \sigma\left( W_{f} \cdot \left\lbrack h_{t-1}, x_{t}\right\rbrack + b_{f}\right) \tag{7}
$$

where $W$ is a weight matrix, $b$ a bias vector, $\left\lbrack h_{t-1}, x_{t}\right\rbrack$ the vector formed by concatenating the hidden-state output $h_{t-1}$ of the previous LSTM module with the input $x_{t}$ of the current module, and $\sigma\left( \cdot \right)$ the sigmoid function.
Fig. 6. LSTM unit structure diagram.
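Equation (7) maps directly onto a few array operations. A minimal sketch follows; the shapes, random toy weights, and input dimension are illustrative (the hidden size of 3 mirrors the Num Hidden Unit entry of Table I).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forget_gate(h_prev, x_t, W_f, b_f):
    """Eq. (7): f_t = sigmoid(W_f . [h_{t-1}, x_t] + b_f).
    h_prev: (H,), x_t: (D,), W_f: (H, H + D), b_f: (H,)."""
    concat = np.concatenate([h_prev, x_t])   # [h_{t-1}, x_t]
    return sigmoid(W_f @ concat + b_f)

rng = np.random.default_rng(1)
H, D = 3, 8    # toy hidden and input sizes
f_t = forget_gate(rng.standard_normal(H), rng.standard_normal(D),
                  rng.standard_normal((H, H + D)), np.zeros(H))
print(f_t.shape)  # (3,)
```

Each component of `f_t` lies in (0, 1) and scales how much of the previous cell state is kept, which is exactly the retain-or-forget role the text assigns to the forget gate.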
## E. Clustering of Encounter Scenarios
With the method above, feature vectors can describe the intricate spatial relationships and temporal evolution of ship encounter scenarios. The similarity between encounter scenarios is determined by computing the distance between the corresponding feature vectors. Once the distances are obtained, clustering algorithms classify the scenarios, and the results are evaluated with suitable metrics to obtain the final classification. Hierarchical clustering is simple and widely used, and its hierarchical clustering tree reflects the step-by-step partitioning of each object [19,20]. It is therefore chosen as the clustering algorithm for the encounter scenarios in this study.
In hierarchical clustering, it is difficult to select the best clustering result directly, so an indicator is needed to choose the appropriate number of clusters. In this paper, the value of $k$ is determined adaptively using the silhouette coefficient (SC). The SC is defined from the mean distance between a point and the other points in its own cluster and the mean distance between that point and the points of the nearest neighboring cluster. The higher the SC value, the better the clustering. The SC is calculated as shown in Equation (8):

$$
SC\left( i\right) = \frac{CTb\left( i\right) - CTa\left( i\right)}{\max \left\{ CTa\left( i\right), CTb\left( i\right)\right\}} \tag{8}
$$

Here $CTa\left( i\right)$ is the average distance between scenario $i$ and the other scenarios in the same cluster, whereas $CTb\left( i\right)$ is the minimum average distance between scenario $i$ and the scenarios of any other cluster. The silhouette coefficient ranges from -1 to 1, with higher values indicating better clustering performance.
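Equation (8) can be checked on a toy clustering. The sketch below computes the mean silhouette over all points following the standard silhouette definition, with the same-cluster mean distance and the smallest mean distance to another cluster; the two blobs and their labels are purely illustrative.

```python
import numpy as np

def silhouette(points, labels):
    """Mean silhouette coefficient over all points, per Eq. (8)."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    idx = np.arange(len(labels))
    scores = []
    for i, li in enumerate(labels):
        same = (labels == li)
        cta = d[i, same & (idx != i)].mean()           # same-cluster mean distance
        ctb = min(d[i, labels == lj].mean()            # nearest other cluster
                  for lj in set(labels.tolist()) if lj != li)
        scores.append((ctb - cta) / max(cta, ctb))
    return float(np.mean(scores))

# Two well-separated blobs should score close to 1.
pts = np.vstack([np.random.default_rng(0).normal(0, 0.1, (20, 2)),
                 np.random.default_rng(1).normal(5, 0.1, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)
sc = silhouette(pts, labels)
print(sc)
```

Sweeping the cluster count and keeping the value with the best (or inflection-point) mean silhouette is the selection procedure the case study applies in Section III-D.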
## III. CASE STUDY
## A. Data collection and processing
This research uses data from November 1 to November 30, 2018, for the outer waters of Ningbo-Zhoushan Port. As shown in Fig. 7, the target area lies between latitudes 29°30′N and 29°49′N and longitudes 122°20′E and 122°60′E. To guarantee the precision of the ship encounter scenario analysis, data from special-mission vessels, including tugboats, fishing boats, and anchored ships, were removed. The remaining data then underwent the preprocessing procedures in preparation for the subsequent experiments. The trajectory distribution makes clear that there are many ship interactions in the research area.

Fig. 7. The location of the study area: the outside waters of Ningbo-Zhoushan Port.
## B. Analysis and validation of scenario extraction results
To validate the retrieved ship encounter scenarios, three samples are shown in Fig. 8. Each scenario is explained with four graphs: the first shows the encounter process from start to finish using the plotted trajectories, with the ship icon indicating the end state of the interaction. The remaining three graphs (a), (b), and (c) show the evolution of the relative distance, DCPA, and TCPA between the OS and the other TSs during the encounter. In these cases, the DCPA stays small for a while, the TCPA changes from positive to negative, and the relative distance first drops to a very low value before gradually increasing. These evolution patterns align with real-world encounter experience, validating the retrieved scenarios. The same evolution trends of relative distance, DCPA, and TCPA hold across all extracted scenarios.
Fig. 8. Encounter situations involving varying numbers of ships and the development of their features.
Due to computational cost constraints, experimenting with all ship encounter scenarios is impractical, so common encounter scenarios in maritime navigation were selected as the experimental data. As shown in Fig. 9, the extracted encounter scenarios were first categorized and statistically examined according to the number of ships involved. The classification results show that two-ship encounters account for around half of all extracted scenarios, making them the most frequent type. As the number of ships involved increases, the number of scenarios gradually decreases, with a substantial decline once the number of ships exceeds five.
Fig. 9. Scenario classification outcomes depending on the number of ships.
To keep the experimental data representative while saving computational cost, two-ship and three-ship encounter scenarios were chosen as the experimental dataset. This selection covers common two-ship encounters as well as the more complex multi-ship encounters that occur frequently in actual maritime navigation. The durations of the two types of encounter scenarios in the experimental dataset were then statistically analyzed, with the results displayed in Fig. 10. The analysis revealed that 84.6% of two-ship scenarios and 90.1% of three-ship scenarios lasted more than 10 minutes, so scenarios exceeding 10 minutes are representative of the dataset and provide an important reference for the experimental analysis. Based on maritime navigation experience, scenarios lasting 10-20 minutes were chosen as the experimental data; this ensures that the ship interactions are significant while preventing the dataset from becoming overly large. Therefore, two-ship and three-ship encounter scenarios lasting 10-20 minutes form the final experimental dataset.
Fig. 10. Duration statistics for encounter scenarios.
## C. Experimental Software Environment and Model Training
For the experimental software environment, Python was used with the PyTorch deep learning framework to train the model. The hyperparameter settings are shown in Table I, where Adam denotes the adaptive moment estimation optimizer, Batch size is the number of samples trained per batch, Epoch is the number of training epochs, and Num Hidden Unit is the hidden layer dimension of the LSTM.
TABLE I. HYPERPARAMETER SETTINGS
| Hyperparameter | Value |
| --- | --- |
| Optimizer | Adam |
| CAE hidden layer dimensions | 8 |
| Batch size | 128 |
| Learning rate | 0.001 |
| Epoch | 760 |
| Num hidden units | 3 |
A total of 500 scenarios were selected from the experimental dataset for model training. First, the encounter scenarios were time-sliced, resulting in 7,366 and 7,261 scenario grid images for the two-ship and three-ship datasets, respectively. These grid images were then input into the CAE to extract spatial features. The change in the loss function value over 760 training epochs is shown in Fig. 11; the training error converges to a very small value, indicating that the trained CAE can reconstruct the input data from the latent-layer features. To demonstrate this, original scenario images and their reconstructions are shown in Fig. 12: the first row displays the original ship encounter scenarios, and the second row the reconstructed images. The structural similarity between them shows that the CAE model excels at capturing low-dimensional representations and reconstructing high-quality images from these features. Finally, the feature matrix generated by the CAE is input into the LSTM model to learn the temporal evolution of the spatial features, outputting feature vectors that represent the scenarios.
Fig. 11. Loss during the training of CAE.
Fig. 12. Original and reconstructed encounter scenario images of the CAE.
## D. Clustering and Evaluation
The ship encounter scenarios were represented as feature vectors using the CAE-LSTM approach. Hierarchical clustering was then applied to these feature vectors to classify the scenarios and obtain clustering results, and the SC was used both to determine the optimal number of clusters and to evaluate clustering effectiveness. The cluster count was varied from two to fifteen, and the corresponding SC values are shown in Fig. 13.
Fig. 13. Variation of silhouette coefficient values with the number of clusters
Both datasets achieved their highest silhouette coefficient with two clusters. However, too few clusters must be avoided to ensure a detailed separation of the microscopic aspects of ship interactions across encounter scenarios. Therefore, 5 and 4 were chosen as the final numbers of clusters for the two datasets, respectively; these values mark the inflection points of the silhouette coefficient for both datasets. Beyond them, the silhouette coefficient generally declines as the number of clusters increases, indicating deteriorating clustering performance.
After clustering the encounter scenarios, the frequency and duration distributions of each cluster are shown in Fig. 14 and Fig. 15, respectively. For further analysis, the clusters with the highest and lowest frequencies in each dataset were selected for feature analysis.
Fig. 14. Frequency distribution of encounter scenarios.
Fig. 15. The duration distribution of each cluster of encounter scenarios.
The interaction process between ship trajectories and the evolution of two features, relative distance and TCPA, are shown in Fig. 16 and Fig. 17. The first row of three images shows the complete trajectories of three encounter scenarios, where "$\circ$" and "$\times$" mark the start and end positions of each scenario, respectively. The other two rows show the evolution of the relative distance and TCPA for the corresponding scenarios. The first two columns belong to the same cluster and illustrate the common characteristics of its scenarios; the third column comes from a different cluster to highlight the distinctions.
For the two-ship encounter scenarios, Cluster 4 features ships moving in opposite directions in a head-on encounter, with the relative distance first decreasing and then increasing and the TCPA decreasing linearly. Cluster 5, in contrast, consists of ships moving in the same direction, with the relative distance remaining roughly constant and the TCPA decreasing with significant fluctuations. For the three-ship encounter scenarios, Cluster 1 involves one target ship crossing paths with the OS while the other target ship meets it head-on. The relative distances of both target ships first decrease and then increase, with increases of different magnitudes; the TCPA shows a decreasing trend, decreasing linearly for one ship and fluctuating noticeably for the other. In contrast, Cluster 3 features both target ships crossing paths with the OS. Although the relative distance trend is similar to Cluster 1, the ships in Cluster 3 move in the same direction, so the relative distance changes consistently and the TCPA fluctuates consistently before reaching zero. In summary, the trajectory interactions, feature evolution, and durations within the same cluster exhibit consistent patterns, while different clusters show distinctly different ones.
Fig. 16. Trajectory interaction and feature evolution process of the two-ship encounter scenarios.
Fig. 17. Trajectory interaction and feature evolution process of the three-ship encounter scenarios.
Through the above analysis, the ship encounter scenario clustering method proposed in this paper effectively classifies different scenarios. The visual verification of trajectory interactions and feature evolution during the encounter process confirms the validity of this classification method. It demonstrates the various interaction patterns and contexts among multiple ships in complex navigable waters, aiding in distinguishing and understanding different types of ship encounter scenarios.
## IV. CONCLUSION
This paper proposes a method for classifying ship encounter scenarios. First, ship encounter scenarios are segmented using time windows, and a convolutional autoencoder generates spatial feature vectors for each time slice. Next, these spatial feature vectors are fed sequentially into a long short-term memory (LSTM) network to produce temporal feature vectors. Finally, hierarchical clustering groups the feature vectors according to their spatiotemporal attributes. Experimental results demonstrate that the method effectively classifies encounter scenarios involving various numbers of ships, and the visualization of the interaction process and the dynamic evolution of features between ships confirms the classification's effectiveness.
## V. FUTURE WORK
In the future, we plan to make improvements in the following two directions:
1. Increase the size of the experimental data sample and optimize the scenario construction method to develop a multi-ship encounter scenario library tailored for complex navigational waters. Additionally, establish a query index based on ship scenarios.
2. Improve the classification method for ship encounter scenarios and enrich the dynamic characterization of encounters; design related application algorithms based on the scenario library, such as scenario prediction, risk assessment, and ship collision avoidance algorithms; and further study in depth the characterization of multi-ship encounter scenarios and their evolution laws.
## REFERENCES
[1] Xin, X., Liu, K., Yang, Z., Zhang, J., & Wu, X. (2021). A probabilistic risk approach for the collision detection of multi-ships under spatiotemporal movement uncertainty. Reliability Engineering & System Safety, 215, 107772.

[2] Fan, S., Blanco-Davis, E., Yang, Z., Zhang, J., & Yan, X. (2020). Incorporation of human factors into maritime accident analysis using a data-driven Bayesian network. Reliability Engineering & System Safety, 203, 107070.

[3] Goerlandt, F., & Montewka, J. (2015). Maritime transportation risk analysis: Review and analysis in light of some foundational issues. Reliability Engineering & System Safety, 138, 115-134.

[4] Ma, J., Liu, Q., Zhang, C., Liu, K., & Zhang, Y. (2019). Spatiotemporal analysis of AIS-based data and extraction of ship encounter situations. Journal of China Safety Science, (5), 111-116.

[5] Ma, J., Li, W., Zhang, C., & Zhang, Y. (2021). Ship encounter situation identification in converging waters based on AIS data. China Navigation, (01), 68-74.

[6] Wang, W., Huang, L., Liu, K., Zhou, Y., Yuan, Z., Xin, X., & Wu, X. (2024). Ship encounter scenario generation for collision avoidance algorithm testing based on AIS data. Ocean Engineering, 291, 116436.

[7] Li, H., Liu, J., Liu, R. W., Xiong, N., Wu, K., & Kim, T. H. (2017). A dimensionality reduction-based multi-step clustering method for robust vessel trajectory analysis. Sensors, 17(8), 1792.

[8] Zhang, S. K., Shi, G. Y., Liu, Z. J., Zhao, Z. W., & Wu, Z. L. (2018). Data-driven based automatic maritime routing from massive AIS trajectories in the face of disparity. Ocean Engineering, 155, 240-250.

[9] Zhang, M., Montewka, J., Manderbacka, T., Kujala, P., & Hirdaris, S. (2021). A big data analytics method for the evaluation of ship-ship collision risk reflecting hydrometeorological conditions. Reliability Engineering & System Safety, 213, 107674.

[10] Zhou, F., Li, J., & Wang, Y. (2023). An improved CNN-LSTM network for modulation identification relying on periodic features of signal. IET Communications, 17(18), 2097-2106.

[11] Liang, M., Liu, R. W., Li, S., Gao, Z., Liu, X., & Lu, F. (2021). An unsupervised learning method with convolutional auto-encoder for vessel trajectory similarity computation. Ocean Engineering, 225, 108803.

[12] Chen, X., Liu, Y., Achuthan, K., Zhang, X., & Chen, J. (2021). A semi-supervised deep learning model for ship encounter situation classification. Ocean Engineering, 239, 109824.

[13] Kontopoulos, I., Makris, A., Zissis, D., & Tserpes, K. (2021, June). A computer vision approach for trajectory classification. In 2021 22nd IEEE International Conference on Mobile Data Management (MDM) (pp. 163-168). IEEE.

[14] Chun, D. H., Roh, M. I., Lee, H. W., Ha, J., & Yu, D. (2021). Deep reinforcement learning-based collision avoidance for an autonomous ship. Ocean Engineering, 234, 109216.

[15] Liu, K., Yuan, Z., Xin, X., Zhang, J., & Wang, W. (2021). Conflict detection method based on dynamic ship domain model for visualization of collision risk hot-spots. Ocean Engineering, 242, 110143.

[16] Li, S., Liu, J., & Negenborn, R. R. (2019). Distributed coordination for collision avoidance of multiple ships considering ship maneuverability. Ocean Engineering, 181, 212-226.

[17] Wang, W., Ramesh, A., Zhu, J., Li, J., & Zhao, D. (2020). Clustering of driving encounter scenarios using connected vehicle trajectories. IEEE Transactions on Intelligent Vehicles, 5(3), 485-496.

[18] Chen, H., Shao, Y., Ao, G., & Zhang, H. (2021). Speed prediction based on GCN-LSTM neural network for online maps. Journal of Transportation Engineering, (04), 183-196.

[19] Fahad, A., Alshatri, N., Tari, Z., Alamri, A., Khalil, I., Zomaya, A. Y., ... & Bouras, A. (2014). A survey of clustering algorithms for big data: Taxonomy and empirical analysis. IEEE Transactions on Emerging Topics in Computing, 2(3), 267-279.

[20] Fan, J. (2019). OPE-HCA: An optimal probabilistic estimation approach for hierarchical clustering algorithm. Neural Computing and Applications, 31, 2095-2105.
|
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/FE4XKb4tcU/Initial_manuscript_tex/Initial_manuscript.tex
§ RESEARCH ON THE CLASSIFICATION OF SHIP ENCOUNTER SCENARIOS BASED ON CAE-LSTM

Taiyu Chai
School of Navigation, Wuhan University of Technology, Wuhan, China
282614@whut.edu.cn

Zhitao Yuan*
School of Navigation, Wuhan University of Technology, Wuhan, China
ztyuan@whut.edu.cn

Weiqiang Wang
School of Navigation, Wuhan University of Technology, Wuhan, China
weiqiangwang@whut.edu.cn

Shengjie Yang
School of Navigation, Wuhan University of Technology, Wuhan, China
yangshengjie@whut.edu.cn

*Abstract*—To tackle the challenge of recognizing similar ship encounter scenarios under multi-ship interference coupling and dynamic evolution, this paper proposes a classification method that combines a Convolutional Auto-Encoder (CAE) and a Long Short-Term Memory (LSTM) recurrent neural network. First, to extract a large number of genuine ship encounter scenarios from historical AIS data for subsequent categorization, an extraction method based on spatiotemporal proximity constraints is devised. Then, by setting a time window and rasterizing the scenarios, a CAE-based model is constructed to characterize the spatial interference of ships in each scenario. An LSTM network is further used to learn temporal evolution features, achieving a low-dimensional spatiotemporal vector representation of ship encounter scenarios. Finally, hierarchical clustering is applied to classify the different encounter scenarios based on these low-dimensional spatiotemporal vectors. The proposed method is validated through extensive experiments using data from Ningbo-Zhoushan Port, and the results show that it can effectively extract real ship encounter scenarios and accurately identify similar ones. This research provides robust support for a deep understanding of ship encounter scenarios and for mining similar ship behavior patterns.
*Keywords*—ship encounter scenarios, scenario classification, CAE, LSTM
§ I. INTRODUCTION
In recent years, continuous growth in shipping volume has significantly increased maritime traffic density, leading to a rise in ship collision accidents [1]. Research shows that these accidents are mostly caused by human factors [2]. To mitigate collisions caused by human error, researchers have developed numerous navigation collision avoidance algorithms to enhance maritime safety [3]. Historical ship encounter scenarios contain rich avoidance processes and strategies; extracting these scenarios and analyzing collision avoidance behavior patterns in similar situations allows this implicit knowledge to be integrated into the design of collision avoidance algorithms, enhancing their practicality and improving avoidance safety in similar scenarios. Therefore, extracting real ship encounter scenarios and effectively classifying similar scenarios holds significant potential for advancing collision avoidance algorithm design.
Ship encounter scenarios essentially involve interactions between multiple vessels, which can be explained through their trajectories. Because the Automatic Identification System (AIS) is widely deployed on ships, scholars can collect large quantities of high-quality vessel trajectory data at low cost, providing a rich and reliable data source for extracting ship encounter scenarios. Several researchers have studied encounter scenario extraction using AIS data. Ma et al. [4,5] extracted ship encounter scenarios from AIS data by analyzing the spatiotemporal correlations during ship interactions. Similarly, based on the spatiotemporal proximity relationships between ships, Wang et al. [6] identified ship encounter events from AIS data, evaluated the significance of each event, and sampled the data to create test scenarios for collision avoidance algorithms.
Ship encounter scenarios are typical spatiotemporal sequence data, often exhibiting significant temporal evolution characteristics and complex multi-vessel interaction couplings, which makes classifying them challenging. Current research mainly focuses on clustering analysis of individual ship trajectories. For instance, to identify frequent paths and discover abnormal trajectories, Li et al. [7] proposed a multi-step clustering methodology that combines principal component analysis, dynamic time warping, and an enhanced trajectory clustering center method. Zhang et al. [8] inferred ship itineraries from AIS data using data-driven techniques such as ant colony optimization and density-based spatial clustering of applications with noise (DBSCAN). Zhang et al. [9] classified ship trajectories using the K-Means and DBSCAN clustering algorithms, then identified potential collision scenarios by detecting illegal evasive maneuvers through relative bearing angles and quantified the collision risk index when evasive actions were taken. However, these methods rely primarily on similarity calculations over individual ship trajectories. Although they perform well in trajectory similarity analysis and classification, encounter scenarios involve the interactions of multiple ships, with significant temporal evolution characteristics and complex multi-ship interference effects. As a result, these methods are limited in representing and measuring the spatiotemporal interference features of encounter scenarios and are difficult to apply directly to encounter scenario classification.
This paper is supported by the National Natural Science Foundation of China (NSFC) under Grant No. 52031009. (Corresponding author: Zhitao Yuan.)
In recent years, deep learning has shown great potential in handling complex spatiotemporal data, and some studies have begun exploring its use in trajectory similarity computation, demonstrating that deep learning techniques can capture the features of ship trajectories more effectively. Compared to traditional methods, deep learning models can automatically learn useful features from large amounts of data without relying on manual feature extraction [10]. Liang et al. [11] proposed an unsupervised learning method based on a convolutional autoencoder (CAE), which maps trajectories into two-dimensional matrices to generate trajectory images and automatically extracts low-dimensional features via the CAE to compute similarity. Chen et al. [12] introduced a convolutional neural network (CNN) method to identify movement patterns in emerging trajectories, using a mobility-based trajectory structure as input to the identification model; evaluations on real maritime trajectory datasets show the superiority of this method. Kontopoulos et al. [13] proposed a novel method integrating computer vision and trajectory classification that automatically extracts meaningful information from trajectory data and identifies movement patterns without expert input.
Overall, unsupervised and semi-supervised deep learning methods are gradually gaining attention in maritime situational awareness. They share a common feature: automatic feature extraction reduces reliance on manual intervention, giving them strong adaptability, especially when handling large amounts of unlabeled data. This motivates developing an unsupervised learning method that can represent the complex temporal evolution characteristics of ship encounter scenarios and classify them effectively. Based on the above analysis, this study proposes a ship encounter scenario classification method that combines a Convolutional Autoencoder (CAE) with a Long Short-Term Memory (LSTM) network. The approach considers both the spatial interference couplings among multiple ships and the temporal evolution patterns within an encounter, enabling effective classification of ship encounter scenarios.
§ II. METHODOLOGY
This paper focuses on two main tasks: the extraction of real ship encounter scenarios from AIS data, and the classification of these scenarios using a combination of CAE and LSTM models. As shown in Figure 1, the research framework consists of three stages, AIS data preprocessing, ship encounter scenario extraction, and ship encounter scenario clustering, which are detailed in the following five steps.
Step 1: Data Preprocessing. Original AIS data is preprocessed to retain key attributes such as timestamp, Maritime Mobile Service Identity (MMSI), ship length, longitude, latitude, speed over ground (SOG), and course over ground (COG). These attributes are essential for calculating the subsequent spatiotemporal relationships of the vessels.
Step 2: Encounter Scenario Extraction. Based on the spatiotemporal proximity analysis of ships, ship encounter scenarios are extracted from historical AIS data. This extraction provides numerous encounter scenarios that reflect the real navigational behaviors of ships for subsequent classification.
Step 3: Time Slicing and Gridding. Time slicing and gridding are applied to the scenarios to characterize their spatiotemporal attributes.
Step 4: Feature Representation. CAE and LSTM represent the spatial and temporal features of the encounter scenarios with feature vectors.
Step 5: Clustering of Encounter Scenarios. Hierarchical clustering is applied to the feature vectors of all scenarios, and the ideal number of clusters is selected using the Silhouette Coefficient (SC) index to obtain the final classification of encounter scenarios.
In summary, building on the state of the art, our CAE-based ship encounter scenario classification method offers the following innovations. We propose generating informative trajectory images by remapping the ship trajectories involved in encounter scenarios into two-dimensional matrices:
1. The similarity between different encounter scenarios is measured by assessing the structural similarity between the corresponding information trajectory images.
2. A convolutional autoencoder neural network is proposed to learn the low-dimensional representation of these images in an unsupervised manner. The learned representation can effectively capture the characteristics of ship encounter scenarios.
Fig. 1. Overview of the proposed approach.
§ A. DATA PREPROCESSING
The quality of AIS data significantly impacts the accuracy of the extracted encounter scenarios. Due to various factors, AIS data may contain records inconsistent with the actual navigational state of the ships, so preprocessing is necessary before extracting encounter scenarios [14]. The main preprocessing operations are noise filtering, anomaly removal, data interpolation, and matching of static data [15].
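As a minimal sketch of the interpolation and anomaly-removal steps (the parallel time/latitude/longitude lists, the 10 s resampling step, and the 25 m/s speed cap are illustrative assumptions, not values from the paper):

```python
import numpy as np

def resample_track(times, lats, lons, step=10.0):
    """Linearly interpolate an AIS track onto a uniform time grid.

    times: strictly increasing timestamps (s); lats/lons: degrees.
    The 10 s step is an illustrative choice, not a value from the paper.
    """
    t = np.asarray(times, dtype=float)
    grid = np.arange(t[0], t[-1] + 1e-9, step)
    return grid, np.interp(grid, t, lats), np.interp(grid, t, lons)

def drop_speed_outliers(times, lats, lons, max_sog_ms=25.0):
    """Remove points implying an impossible speed (a crude anomaly filter)."""
    keep = [0]
    for i in range(1, len(times)):
        j = keep[-1]
        # rough metric distances on a local tangent plane
        dy = (lats[i] - lats[j]) * 111_000.0
        dx = (lons[i] - lons[j]) * 111_000.0 * np.cos(np.radians(lats[j]))
        dt = times[i] - times[j]
        if dt > 0 and np.hypot(dx, dy) / dt <= max_sog_ms:
            keep.append(i)
    return ([times[i] for i in keep],
            [lats[i] for i in keep],
            [lons[i] for i in keep])
```

In practice the interpolation would be run per MMSI, after which the static fields (ship length, type) are joined onto every resampled point.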
§ B. ENCOUNTER SCENARIO EXTRACTION BASED ON AIS DATA
Spatiotemporal relationships between ships are fundamental for extracting encounter scenarios. In this work, a ship encounter scenario is described as a set of ship pairs that, within a specific time sequence, satisfy specific spatiotemporal proximity conditions. Figure 2 gives a graphical description of ship encounter scenarios: the x-axis is the timeline, and the y-axis lists the identification numbers of the ships involved. The arrowed line for the Own Ship (OS) represents its navigation period in the study area, while the arrowed line in front of each Target Ship (TS) indicates the period during which that TS meets the preset spatiotemporal proximity conditions with the OS.
Fig. 2. Graphical description of ship encounter scenarios.
To precisely define the spatiotemporal proximity relationships between ships at each time step, an additional analysis of the evolution of the Distance at the Closest Point of Approach (DCPA) and the Time to the Closest Point of Approach (TCPA) is necessary [16]. By analyzing the preprocessed AIS data, the spatiotemporal relationships between ships can be extracted, allowing ship encounters to be identified. Specifically, when two ships remain in the study area together for longer than the set time threshold, the minimum distance between them is calculated. If this closest passing distance is less than the distance criterion, the evolution of their relative distance, DCPA, and TCPA is analyzed further. A ship pair is deemed to satisfy the spatiotemporal proximity constraints that may lead to a collision if, before reaching the closest passing distance, their relative distance is decreasing and stays within the early-warning distance while both DCPA and TCPA remain below their thresholds. In that case, the track segments of the two ships satisfying these constraints are extracted and saved, including the start and end times of the segments and the static and dynamic information of each ship (MMSI, length, width, type, and so on) at each timestamp in this period. Figure 3 gives a graphical illustration of DCPA and TCPA, with the calculation formulas provided below.

$$
\mathrm{DCPA}_{t} = {D}_{ijt} \cdot \sqrt{1 - {\cos}^{2}\left( {\theta}_{ijt} \right)} \tag{1}
$$

$$
\mathrm{TCPA}_{t} = \frac{-{D}_{ijt} \cdot \cos\left( {\theta}_{ijt} \right)}{{v}_{ijt}} \tag{2}
$$

where ${D}_{ijt}$ represents the distance between ship $i$ and ship $j$ at time $t$, ${v}_{ijt}$ represents the relative speed between ship $i$ and ship $j$ at time $t$, and ${\theta}_{ijt}$ is the angle between the relative velocity and the line joining the two ships.
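The computation in Eqs. (1)-(2), together with the threshold screening described above, can be sketched on a local metric plane as follows (the threshold values in `is_encounter` are illustrative assumptions, not the paper's settings):

```python
import math

def dcpa_tcpa(pi, vi, pj, vj):
    """DCPA/TCPA per Eqs. (1)-(2) for ships i and j on a metric x-y plane.

    pi, pj: position tuples (m); vi, vj: velocity vectors (m/s).
    """
    rx, ry = pj[0] - pi[0], pj[1] - pi[1]            # line joining the ships
    ux, uy = vj[0] - vi[0], vj[1] - vi[1]            # relative velocity of j w.r.t. i
    d = math.hypot(rx, ry)
    v = math.hypot(ux, uy)
    if v == 0 or d == 0:
        return d, math.inf                            # no relative motion
    cos_t = (rx * ux + ry * uy) / (d * v)             # cos(theta_ijt)
    dcpa = d * math.sqrt(max(0.0, 1.0 - cos_t ** 2))  # Eq. (1)
    tcpa = -d * cos_t / v                             # Eq. (2)
    return dcpa, tcpa

def is_encounter(dcpa, tcpa, dist,
                 dcpa_max=1852.0, tcpa_max=1200.0, warn_dist=5556.0):
    """Illustrative proximity check: still approaching (TCPA > 0), within the
    early-warning distance, and both CPA measures below their thresholds."""
    return 0.0 < tcpa < tcpa_max and dcpa < dcpa_max and dist < warn_dist
```

For a head-on pair closing at 10 m/s from 1000 m apart, this yields a DCPA of 0 and a TCPA of 100 s, matching the geometric intuition behind Figure 3.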
Fig. 3. DCPA and TCPA interpretation in graphics.
§ C. ENCOUNTER SCENARIO TIME SLICING
Ship encounter scenarios, as spatiotemporal sequence data, involve mutual interference between ships that varies over time. Classifying them therefore requires attention to both the spatial interference characteristics and the temporal evolution patterns of the ships. The first step in efficiently extracting these spatial and temporal features is to time-slice the scenarios and grid each slice, which maps the temporal evolution of the spatial interference characteristics into multi-time-window grids. Compared with raw trajectory image pixels, the raster images contain richer information and are more conducive to the CAE's characterization of ship interactions in an encounter scenario.
Fig. 4. Raster map generation and scene time slicing.
Thus, this paper projects the original ship trajectories into two-dimensional matrices to generate trajectory raster images following the time sequence of the encounter scenarios, preserving the original spatiotemporal characteristics. To balance the information richness of each slice against the total number of slices, the time window duration is set to 3 minutes and the window step to 1 minute. The procedure is depicted in Figure 4.
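Under the 3-minute window and 1-minute step above, the slicing and gridding can be sketched as follows (the 64x64 grid resolution and the binary occupancy encoding are illustrative assumptions):

```python
import numpy as np

WINDOW, STEP = 180, 60  # 3-min windows, 1-min step (seconds), as in the paper

def time_slices(t0, t1, window=WINDOW, step=STEP):
    """Start/end pairs of the sliding windows covering [t0, t1]."""
    return [(s, s + window) for s in range(int(t0), int(t1) - window + 1, step)]

def rasterize(tracks, bounds, n=64):
    """Map all ship positions in one slice into an n-by-n occupancy matrix.

    tracks: list of (lat, lon) position lists, one per ship;
    bounds: (lat_min, lat_max, lon_min, lon_max) of the scenario.
    The 64x64 resolution is an illustrative assumption.
    """
    lat0, lat1, lon0, lon1 = bounds
    grid = np.zeros((n, n), dtype=np.float32)
    for track in tracks:
        for lat, lon in track:
            r = min(n - 1, int((lat - lat0) / (lat1 - lat0) * n))
            c = min(n - 1, int((lon - lon0) / (lon1 - lon0) * n))
            grid[r, c] = 1.0
    return grid
```

Each scenario thus becomes an ordered sequence of grids, one per time slice, which is exactly the input shape the CAE and LSTM stages below expect.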
§ D. FEATURE REPRESENTATION OF ENCOUNTER SCENARIOS
To fully represent the spatial interaction features between ships from the multi-time-window raster images, learn the contextual relationships between feature sequences, and uncover the temporal evolution patterns of the scenarios, we employ a multi-layer CAE neural network combined with an LSTM for unsupervised learning and feature representation. The CAE, with its convolutional and pooling layers, learns to identify local spatial interactions and patterns within each raster image [17]. The spatial features are then fed into the LSTM model, which captures their temporal evolution over multiple time windows. The combination of CAE and LSTM thus provides a comprehensive representation of both the spatial interactions between ships and their dynamic changes over time.
This study employs a CAE-based autoencoder architecture. Compared to traditional autoencoders, the CAE incorporates convolutional and pooling layers, allowing better extraction of local features related to ship spatial interference in the scene grid maps. As shown in Figure 5, the CAE model consists of three convolutional layers, three max-pooling layers, and fully connected layers. The encoder transforms input scene grid maps into low-dimensional feature vectors that represent the spatial features of the encounter scenarios. The decoder uses ReLU as the activation function to reconstruct the scene grid maps from these low-dimensional feature vectors. Additionally, to enhance the feature representation capability of the CAE, this study introduces a loss function sensitive to image structure, based on the structural similarity (SSIM) index, to ensure the accuracy of the extracted features. To further elucidate the working mechanism of the CAE model, the operations of the convolutional and fully connected layers are described as follows:

$$
{x}_{k}^{l} = {A}_{E}\left( {f}_{k}^{l} \odot {x}_{k}^{\left( l-1 \right)} + {b}_{k}^{l} \right) \tag{3}
$$

$$
Y = \mathcal{H}\left( x \right) = wx + \beta \tag{4}
$$

where $l$ is the layer number, $\odot$ denotes the convolution operation, ${f}_{k}^{l}$ is the convolution kernel, ${x}_{k}^{l-1}$ is the input feature map, ${b}_{k}^{l}$ is the bias term, and $Y$ is the feature vector with final output dimension $L$. Training the model with the loss function ensures that the reconstruction $\widetilde{x}$ output by the decoder has minimal error relative to the original input $x$. The SSIM-based loss function is defined as follows:

$$
\mathcal{F}\left( x, \widetilde{x} \right) = 1 - \frac{1}{M} \mathop{\sum}\limits_{m=1}^{M} \operatorname{SSIM}\left( {x}_{m}, {\widetilde{x}}_{m} \right) \tag{5}
$$

$$
\operatorname{SSIM}\left( {x}_{m}, {\widetilde{x}}_{m} \right) = \frac{\left( 2{\mu}_{{x}_{m}}{\mu}_{{\widetilde{x}}_{m}} + {c}_{1} \right)\left( 2{\sigma}_{{x}_{m}{\widetilde{x}}_{m}} + {c}_{2} \right)}{\left( {\mu}_{{x}_{m}}^{2} + {\mu}_{{\widetilde{x}}_{m}}^{2} + {c}_{1} \right)\left( {\sigma}_{{x}_{m}}^{2} + {\sigma}_{{\widetilde{x}}_{m}}^{2} + {c}_{2} \right)} \tag{6}
$$

Fig. 5. The architecture of convolutional autoencoder.
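A minimal PyTorch sketch of such a CAE with an SSIM-based loss in the spirit of Eqs. (5)-(6). The layer sizes, the 64x64 input, and the use of global per-image statistics in the SSIM are illustrative assumptions; only the 8-dimensional bottleneck follows Table I, and the paper's exact kernel configuration differs:

```python
import torch
import torch.nn as nn

class CAE(nn.Module):
    """Three conv + max-pool stages, a fully connected 8-dim bottleneck,
    and a mirrored decoder with ReLU activations (a sketch)."""
    def __init__(self, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(16, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 16 -> 8
            nn.Flatten(), nn.Linear(8 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 8 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (8, 8, 8)),
            nn.Upsample(scale_factor=2), nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)   # low-dimensional spatial feature of the slice
        return self.decoder(z), z

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Eq. (6) with per-image global statistics; x, y: (B, 1, H, W)."""
    dims = (1, 2, 3)
    mx, my = x.mean(dim=dims), y.mean(dim=dims)
    vx = x.var(dim=dims, unbiased=False)
    vy = y.var(dim=dims, unbiased=False)
    cov = ((x - mx[:, None, None, None]) *
           (y - my[:, None, None, None])).mean(dim=dims)
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def ssim_loss(x, recon):
    """Eq. (5): one minus the batch-mean SSIM."""
    return 1.0 - ssim_global(x, recon).mean()
```

Identical input and reconstruction give an SSIM of 1 and a loss of 0, which is the property the training curve in the experiments converges toward.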
LSTM is widely used for studying persistent features in time series data and can effectively learn dependencies within a sequence [18]. Therefore, LSTM is chosen to represent the temporal feature evolution. The LSTM consists primarily of three gating units: the forget gate, the input gate, and the output gate, as shown in Figure 6. The forget gate controls the transmission or forgetting of information, as described by Equation (7):

$$
{f}_{t} = \sigma\left( {W}_{f} \cdot \left\lbrack {h}_{t-1}, {x}_{t} \right\rbrack + {b}_{f} \right) \tag{7}
$$

where $W$ is a weight matrix, $b$ a bias, $\left\lbrack {h}_{t-1}, {x}_{t} \right\rbrack$ the vector formed by concatenating the hidden-layer output ${h}_{t-1}$ of the previous LSTM module with the input ${x}_{t}$ of the current module, and $\sigma\left(\cdot\right)$ the sigmoid function.
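Eq. (7) in isolation can be sketched directly (the weight shapes are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forget_gate(h_prev, x_t, W_f, b_f):
    """Eq. (7): f_t = sigmoid(W_f . [h_{t-1}, x_t] + b_f).

    h_prev: (H,) previous hidden state; x_t: (D,) current input;
    W_f: (H, H + D); b_f: (H,). Outputs gate values in (0, 1)."""
    concat = np.concatenate([h_prev, x_t])   # [h_{t-1}, x_t]
    return sigmoid(W_f @ concat + b_f)
```

The input and output gates follow the same pattern with their own weights; in practice the whole cell would be delegated to a library LSTM implementation rather than hand-rolled.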
Fig. 6. LSTM unit structure diagram.
§ E. CLUSTERING OF ENCOUNTER SCENARIOS
Through the method described above, feature vectors ultimately describe the intricate spatial relationships and temporal evolution of ship encounter scenarios. The similarity between scenarios is determined by computing the distance between the corresponding feature vectors. Once the distances are obtained, a clustering algorithm classifies the scenarios, and the results are evaluated with validity metrics to obtain the final classification. Hierarchical clustering is simple and widely used, and its hierarchical clustering tree reflects the step-by-step partitioning of each object [19,20]. Therefore, hierarchical clustering is chosen as the clustering algorithm for the encounter scenarios in this study.
In hierarchical clustering, it is difficult to select the best clustering result directly, so an indicator is needed to choose the appropriate number of clusters. In this paper, the value of $k$ is adaptively determined using the silhouette coefficient. The ${SC}$ of a point is defined from its mean distance to the other points in its own cluster and its mean distance to the points of the neighboring clusters after classification. The higher the SC value, the better the classification. The ${SC}$ is calculated as in Equation (8):

$$
SC\left( i \right) = \frac{CTb\left( i \right) - CTa\left( i \right)}{\max\left\{ CTa\left( i \right), CTb\left( i \right) \right\}} \tag{8}
$$

where ${CTa}\left( i\right)$ is the average distance between scenario $i$ and the other scenarios in the same cluster, and ${CTb}\left( i\right)$ is the minimum average distance from scenario $i$ to the scenarios of the other clusters. The silhouette coefficient ranges from -1 to 1, with higher values indicating better clustering performance.
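Eq. (8) can be sketched directly in numpy (a naive O(n^2) computation over the scenario feature vectors):

```python
import numpy as np

def silhouette(features, labels):
    """Mean silhouette coefficient, Eq. (8), over all scenarios.

    features: (n, d) feature vectors; labels: (n,) cluster ids."""
    feats = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    dist = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    scores = []
    for i in range(len(feats)):
        same = (labels == labels[i])
        same[i] = False
        if not same.any():
            continue  # singleton cluster: silhouette undefined, skip
        a = dist[i, same].mean()                        # CTa(i): intra-cluster
        b = min(dist[i, labels == c].mean()             # CTb(i): nearest other cluster
                for c in np.unique(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```

On two well-separated clusters this yields a value near 1, while a labeling that mixes the clusters scores much lower, which is exactly the behavior used to pick $k$.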
§ III. CASE STUDY
§ A. DATA COLLECTION AND PROCESSING
This research uses data from November 1, 2018, to November 30, 2018, for the outside waters of Ningbo-Zhoushan Port. As shown in Fig. 7, the targeted area lies between 29°30′N and 29°49′N latitude and between 122°20′E and 122°60′E longitude. To guarantee the precision of the ship encounter scenario analysis, data from special-mission vessels, such as tugboats and fishing boats, as well as anchored ships, were removed. The remaining data then underwent the preprocessing procedures in preparation for the subsequent experiments. The trajectory distribution shows that the research area contains a large number of ship interactions.
Fig. 7. The location of the study area.
§ B. ANALYSIS AND VALIDATION OF SCENARIO EXTRACTION RESULTS
Three sample ship encounter scenarios are shown in Figure 8 to validate the extracted scenarios. Each scenario is explained with four graphs: the first shows the encounter process from start to finish using the plotted trajectories, with a ship icon indicating the end state of the interaction. The remaining three graphs (a), (b), and (c) show the evolution of relative distance, DCPA, and TCPA between the OS and the TSs during the encounter. In these cases, the relative distance first drops to a very low value before gradually increasing, the DCPA stays small for a period, and the TCPA changes from positive to negative. These evolution patterns align with real-world encounter experience, and they are consistent across all extracted scenarios, validating the extraction.
Fig. 8. Encounter situations involving varying numbers of ships and the development of their features.
Due to computational cost constraints, experimenting with all ship encounter scenarios is impractical, so common encounter scenarios in maritime navigation were selected as experimental data. As shown in Figure 9, the extracted encounter scenarios were first categorized and statistically analyzed by the number of ships involved. Two-ship encounters make up around half of all extracted scenarios, making them the most frequent. As the number of ships involved increases, the number of scenarios gradually decreases, with a substantial decline once the number of ships exceeds five.
Fig. 9. Scenario classification outcomes depending on the number of ships.
To ensure the experimental data is representative while saving computational cost, two-ship and three-ship encounter scenarios were chosen as the experimental dataset; these cover the common two-ship encounters and the more complex multi-ship encounters that occur most frequently in actual navigation. The durations of the two scenario types were then statistically analyzed, with the results shown in Figure 10. The analysis revealed that 84.6% of two-ship and 90.1% of three-ship scenarios last more than 10 minutes, so this segment is representative and provides an important reference for the experimental analysis. Based on maritime navigation experience, scenarios lasting 10 to 20 minutes were selected: this ensures significant ship interaction while keeping the dataset from becoming overly large. Therefore, two-ship and three-ship encounter scenarios lasting 10 to 20 minutes form the final experimental dataset.
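The duration-based selection can be sketched as follows (the scenario records and their `t_start`/`t_end` field names are illustrative, not the paper's data format):

```python
def select_scenarios(scenarios, min_s=600, max_s=1200):
    """Keep scenarios lasting 10-20 minutes, as chosen for the experiments.

    scenarios: iterable of dicts with 't_start'/'t_end' timestamps in
    seconds (illustrative field names)."""
    return [s for s in scenarios
            if min_s <= s["t_end"] - s["t_start"] <= max_s]
```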
Fig. 10. Duration statistics for encounter scenarios.
§ C. EXPERIMENTAL SOFTWARE ENVIRONMENT AND MODEL TRAINING
The experiments were implemented in Python using the PyTorch deep learning framework to train the model. The hyperparameter settings are shown in Table I, where Adam is the adaptive moment estimation optimizer; Batch size is the number of samples in each training batch; Epoch is the number of training epochs; and Num Hidden Unit is the hidden layer dimension of the LSTM.
TABLE I. HYPERPARAMETER SETTINGS

| Parameter | Value |
| --- | --- |
| Optimizer | Adam |
| CAE hidden layer dimensions | 8 |
| Batch size | 128 |
| Learning rate | 0.001 |
| Epoch | 760 |
| Num Hidden Unit | 3 |

A total of 500 scenarios were selected from the experimental dataset for model training. First, the encounter scenarios were time-sliced, yielding 7,366 and 7,261 scenario grid images for the two-ship and three-ship datasets, respectively. These grids were then input into the CAE to extract spatial features. After 760 training epochs, the loss function values evolved as shown in Figure 11. The training error converges to a very small value, indicating that the trained CAE can reconstruct the input data from the latent-layer features. To demonstrate this, the original scenario images and their reconstructed versions are shown in Figure 12: the first row displays the original ship encounter scenarios and the second row the reconstructions. The structural similarity between them demonstrates that the CAE model excels at capturing low-dimensional representations and reconstructing high-quality images from these features. Finally, the feature matrix generated by the CAE is input into the LSTM model to learn the spatial feature evolution of the scenarios over time, outputting feature vectors to represent them.
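The training setup of Table I can be sketched as follows. This is a toy stand-in, not the paper's implementation: a small fully connected autoencoder replaces the CAE, random tensors replace the scenario grids, MSE replaces the SSIM loss, and the epoch count is reduced for illustration; only the optimizer, batch size, and learning rate follow Table I:

```python
import torch
import torch.nn as nn

# Hyperparameters from Table I (Epoch reduced here for illustration)
LR, BATCH, EPOCHS = 1e-3, 128, 5

model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 8),
                      nn.ReLU(), nn.Linear(8, 64 * 64), nn.Sigmoid())
opt = torch.optim.Adam(model.parameters(), lr=LR)
data = torch.rand(512, 1, 64, 64)   # placeholder scenario grids

for epoch in range(EPOCHS):
    perm = torch.randperm(len(data))
    for i in range(0, len(data), BATCH):
        batch = data[perm[i:i + BATCH]]
        recon = model(batch).view_as(batch)
        loss = nn.functional.mse_loss(recon, batch)  # paper uses an SSIM loss
        opt.zero_grad()
        loss.backward()
        opt.step()
```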
Fig. 11. Loss during the training of CAE.
Fig. 12. Original and reconstructed encounter scenario images of the CAE.
§ D. CLUSTERING AND EVALUATION
The ship encounter scenarios were represented by feature vectors using the CAE-LSTM approach. Hierarchical clustering was then applied to these feature vectors to classify the scenarios. SC was used to determine the ideal number of clusters and to evaluate the clustering effectiveness. The cluster count was varied from two to fifteen, and the corresponding ${SC}$ values are shown in Figure 13.
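A sketch of this sweep using SciPy's agglomerative clustering and scikit-learn's silhouette score (the Ward linkage is an assumption; the paper does not state its linkage criterion):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import silhouette_score

def cluster_and_select_k(features, k_range=range(2, 16)):
    """Hierarchical clustering of scenario feature vectors, selecting the
    cluster count by silhouette coefficient over k = 2..15."""
    Z = linkage(features, method="ward")  # linkage criterion is an assumption
    best_k, best_sc, best_labels = None, -1.0, None
    for k in k_range:
        labels = fcluster(Z, t=k, criterion="maxclust")
        sc = silhouette_score(features, labels)
        if sc > best_sc:
            best_k, best_sc, best_labels = k, sc, labels
    return best_k, best_sc, best_labels
```

Note that, as discussed below, the paper does not always take the arg-max $k$: too few clusters can blur the microscopic interaction patterns, so the inflection point of the silhouette curve is used instead.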
Fig. 13. Variation of silhouette coefficient values with the number of clusters.
Figure 13 shows that both datasets achieve the highest silhouette coefficient with two clusters. However, too few clusters must be avoided to ensure a detailed separation of the microscopic aspects of ship interactions in different encounter scenarios. Therefore, 5 and 4 were chosen as the final numbers of clusters for the two datasets, respectively; these values are the inflection points of the silhouette coefficient curves. Beyond these points, the silhouette coefficient generally declines as the number of clusters increases, indicating deteriorating clustering performance.
After clustering the encounter scenarios, the frequency and duration distributions of each cluster are shown in Figures 14 and 15, respectively. For further analysis, the clusters with the highest and lowest frequencies in each dataset were selected for feature analysis.
Fig. 14. Frequency distribution of encounter scenarios.
Fig. 15. The duration distribution of each cluster of encounter scenarios.
The interaction between ship trajectories and the evolution of two features, relative distance and TCPA, are shown in Figures 16 and 17. The first row of three images shows the complete trajectories of three encounter scenarios, where "$\circ$" and "$\times$" mark the start and end positions, respectively. The other two rows show the relative distance and TCPA evolution of the corresponding scenarios. The first two columns belong to the same cluster and illustrate the common characteristics of its scenarios; the third column belongs to a different cluster to highlight the distinctions.
For the two-ship encounter scenarios, Cluster 4 features ships moving in opposite directions in a head-on encounter: the relative distance first decreases and then increases, and the TCPA decreases linearly. Cluster 5, by contrast, consists of ships moving in the same direction, with the relative distance remaining nearly constant and the TCPA decreasing with significant fluctuations. For the three-ship encounter scenarios, Cluster 1 involves one target ship crossing paths with the OS while the other encounters it head-on. The relative distances of both target ships first decrease and then increase, with increases of different magnitudes; the TCPA of one ship decreases linearly while the other's fluctuates noticeably. In contrast, Cluster 3 features both target ships crossing paths with the OS. Although the relative distance trend resembles that of Cluster 1, the ships in Cluster 3 move in the same direction, so the relative distances change consistently and the TCPAs fluctuate consistently before reaching zero. In summary, trajectory interactions, feature evolution, and duration are consistent within each cluster, while different clusters show distinctly different patterns.
Fig. 16. Trajectory interaction and feature evolution process of the two-ship encounter scenarios.
Fig. 17. Trajectory interaction and feature evolution process of the three-ship encounter scenarios.
|
| 266 |
+
|
| 267 |
+
Through the above analysis, the ship encounter scenario clustering method proposed in this paper effectively classifies different scenarios. The visual verification of trajectory interactions and feature evolution during the encounter process confirms the validity of this classification method. It demonstrates the various interaction patterns and contexts among multiple ships in complex navigable waters, aiding in distinguishing and understanding different types of ship encounter scenarios.
|
| 268 |
+
|
| 269 |
+
§ IV. CONCLUSION
|
| 270 |
+
|
| 271 |
+
This paper proposes a method for classifying ship encounter scenarios. First, encounter scenarios are segmented using time windows, and a convolutional autoencoder generates a spatial feature vector for each time slice. Next, these spatial feature vectors are fed sequentially into a long short-term memory (LSTM) network to produce temporal feature vectors. Finally, hierarchical clustering groups the feature vectors by their spatiotemporal attributes. Experimental results demonstrate that this method effectively classifies encounter scenarios involving various numbers of ships, and the visualization of the interaction process and the dynamic evolution of features between ships confirms the classification's effectiveness.
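As an illustration of the final clustering stage, a minimal agglomerative clustering over the learned feature vectors might look like the following numpy-only sketch. The paper does not specify the linkage criterion; single linkage, the function name, and the toy feature vectors are our assumptions.

```python
import numpy as np

def single_linkage_clusters(X, n_clusters):
    """Naive agglomerative (single-linkage) clustering of feature vectors.

    X: (n_samples, n_features) array, e.g. the spatiotemporal feature
    vectors produced by the LSTM stage described in the text.
    """
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > n_clusters:
        best = (np.inf, 0, 1)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single linkage: distance between the two closest members
                d = min(D[i, j] for i in clusters[a] for j in clusters[b])
                if d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters[b]
        del clusters[b]
    return clusters

# toy demo: two well-separated groups of feature vectors
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
groups = [sorted(c) for c in single_linkage_clusters(feats, 2)]
```

A production pipeline would typically use an optimized implementation (the naive loop above is cubic in the number of scenarios), but the merge logic is the same.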
|
| 272 |
+
|
| 273 |
+
§ V. FUTURE WORK
|
| 274 |
+
|
| 275 |
+
In the future, we plan to make improvements in the following two directions:
|
| 276 |
+
|
| 277 |
+
1. Increase the size of the experimental data sample and optimize the scenario construction method to develop a multi-ship encounter scenario library tailored for complex navigational waters. Additionally, establish a query index based on ship scenarios.
|
| 278 |
+
|
| 279 |
+
2. Improve the classification method for ship encounter scenarios and enrich the dynamic characterization of encounter scenarios; design application algorithms based on the scenario library, such as scenario prediction, risk assessment, and ship collision avoidance; and further study the characterization of multi-ship encounter scenarios and their evolution laws in depth.
|
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/FjSPgP2m1X/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,349 @@
| 1 |
+
# Impacts of speed and spacing on resistance in ship formations
Linying Chen, State Key Laboratory of Maritime Technology and Safety, School of Navigation, Wuhan University of Technology, Wuhan, China, LinyingChen@whut.edu.cn

Linhao Xue, School of Navigation, Wuhan University of Technology, Wuhan, China, xue_lh@whut.edu.cn

Yangying He, School of Intelligent Sports Engineering, Wuhan Sports University, Wuhan, China, yangyinghe@whsu.edu.cn

Pengfei Chen, State Key Laboratory of Maritime Technology and Safety, School of Navigation, Wuhan University of Technology, Wuhan, China, Chenpf@whut.edu.cn

Junmin Mou, State Key Laboratory of Maritime Technology and Safety, School of Navigation, Wuhan University of Technology, Wuhan, China, Moujm@whut.edu.cn

Yamin Huang, State Key Laboratory of Maritime Technology and Safety, School of Navigation, Wuhan University of Technology, Wuhan, China, YaminHuang@whut.edu.cn
Abstract-Sailing in formation has the benefit of drag reduction. Current hydrodynamic analyses of ship formations seldom consider the impacts of speed and of the spacing between adjacent ships on total resistance. To estimate the weight of the different factors affecting total resistance variation in formation, the impacts of speed, longitudinal distance, and transverse location on the total resistance of formations are investigated by analyzing hydrodynamic data for tandem, parallel, and triangle formations. The relation between resistance variation and speed is revealed. The regression analysis results for the different formations indicate the differences between the impacts of longitudinal spacing and transverse location. The regression formulation can be adopted to predict total resistance in formations.
|
| 72 |
+
|
| 73 |
+
Keywords-drag reduction, formation, regression analysis
|
| 74 |
+
|
| 75 |
+
## I. INTRODUCTION
|
| 76 |
+
|
| 77 |
+
Nowadays, saving energy, reducing atmospheric pollutant emissions, and lowering carbon emissions are key concerns in the shipping industry. Increasingly, scholars are focusing on reducing ship resistance to save energy. Inspired by observing and analyzing duck flock swimming behavior [1], scholars have drawn insights from biomimicry and begun researching drag reduction through ship formations.
|
| 78 |
+
|
| 79 |
+
Chen et al. [2] studied the wave interference characteristics of two ships sailing in parallel and following each other and a three-ship "V" formation in shallow water using the bare hull of Series 60. The results indicate that when the two ships follow each other, the wave resistance for both ships decreases. In a three-ship "V" formation, the waves from the trailing ship provide additional thrust, significantly reducing the wave resistance of the leading ship; however, the additional reactive force from the wave crests of the leading ship increases the resistance of the trailing ship. Zheng et al. [3] used the second-order source method based on the Dawson method to calculate the wave resistance of Wigley hulls in three common configurations: single ship, two-ship formation, and three-ship formation. They identified optimal ship formations for drag reduction in different speed ranges and showed that adjusting the relative positions of the ships in the Wigley formation can achieve drag reduction. Qin Yan et al. [4] first performed a numerical analysis of the drag characteristics of a single Wigley hull at different speeds and compared the results with the hydrodynamic performance of a "train" formation at various longitudinal spacings. The analysis showed that, under all conditions, the total drag of the train formation was about 10% to 20% less than that of a single ship. At lower speeds, reducing the longitudinal distance can achieve drag reduction, but at higher speeds, increasing the longitudinal spacing helps maintain it. Liu et al. [5] used CFD to study the drag reduction of a KCS ship model in a twin-ship "train" formation at different speeds, showing that the drag reduction for the following ship could reach up to 24.3%. He et al. [6][7] focused on the hydrodynamic performance of three-ship formations at low speeds, analyzing linear, parallel, and triangular formations with equal and unequal spacing, and ultimately identified the optimal configuration for drag reduction under each formation. A regression model [8] was also developed to predict total resistance in different formation systems. Meanwhile, machine learning methods have been applied to vehicle platooning problems to predict the drag of each vehicle in platoons of varying size (from two to four vehicles). In summary, sailing in formation has the potential for drag reduction. Existing work [9][10][11] mainly focuses on observing drag reduction benefits at different speeds and formation configurations. However, the impact of individual factors on the resistance reduction of ship formations remains unclear, and further research is needed to understand how different factors affect the total drag in ship formations.
|
| 80 |
+
|
| 81 |
+
Therefore, this paper aims to clarify the direct relationship between speed, spacing, and total resistance in ship formations. The primary innovation of this paper lies in employing regression analysis to quantitatively assess the ship formation CFD database, aiming to determine the extent to which speed and distance influence the resistance encountered during ship formation navigation.
|
| 82 |
+
|
| 83 |
+
The main contributions of the paper are as follows:
|
| 84 |
+
|
| 85 |
+
---
|
| 86 |
+
|
| 87 |
+
National Natural Science Foundation of China
|
| 88 |
+
|
| 89 |
+
---
|
| 90 |
+
|
| 91 |
+
- Quantitative analysis and estimation of the effects of factors (speed, longitudinal distances, and transverse locations) on total resistance in formations are provided.
|
| 92 |
+
|
| 93 |
+
- A regression model is established to predict the total resistance of the multi-ship formation system.
|
| 94 |
+
|
| 95 |
+
Subsequently, the datasets investigated in this research are introduced in Section II. Section III explains the proposed research approach. Section IV presents the analysis results for the impacts of the different factors and builds the regression model. Finally, Section V summarizes the main findings and gives recommendations for further research.
|
| 96 |
+
|
| 97 |
+
## II. DATA DESCRIPTION
|
| 98 |
+
|
| 99 |
+
## A. Source of data
|
| 100 |
+
|
| 101 |
+
In this research, the dataset consists entirely of CFD simulation data. All simulations are performed with the commercial software STAR CCM+ V13.06. Verification and validation were carried out before the systematic simulations to ensure the accuracy of the CFD results.
|
| 102 |
+
|
| 103 |
+
## B. Studied ship in dataset
|
| 104 |
+
|
| 105 |
+
In our CFD simulation conditions, the three-ship isomorphic formation is composed of three identical bare hulls of the full-swing tugboat 'WillLead I'. The parameters of the ship are shown in Table 1, and the side view is presented in Figure 1.
|
| 106 |
+
|
| 107 |
+

|
| 108 |
+
|
| 109 |
+
Fig. 1. Side view of the bare hull of 'Willlead I'
|
| 110 |
+
|
| 111 |
+
TABLE I. PARAMETERS OF 'WILLLEAD I'
|
| 112 |
+
|
| 113 |
+
<table><tr><td/><td>$\lambda$</td><td>${\mathrm{L}}_{\mathrm{{OA}}}\left( \mathrm{m}\right)$</td><td>${\mathrm{L}}_{\mathrm{{PP}}}\left( \mathrm{m}\right)$</td><td>B(m)</td><td>T(m)</td><td>${\mathbf{A}}_{\mathbf{S}}\left( {\mathbf{m}}^{2}\right)$</td></tr><tr><td>Full scale</td><td>1.00</td><td>34.95</td><td>30.00</td><td>10.50</td><td>4.00</td><td>432.41</td></tr><tr><td>Model scale</td><td>17.475</td><td>2</td><td>1.72</td><td>0.674</td><td>0.211</td><td>0.672</td></tr></table>
|
| 114 |
+
|
| 115 |
+
## C. Data composition
|
| 116 |
+
|
| 117 |
+
The dataset comprises CFD simulation results for four formation configurations: tandem, parallel, right triangle, and general triangle. The longitudinal distances $\left( {{\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2}}\right)$ and transverse locations $\left( {{\mathrm{{SP}}}_{1},{\mathrm{{SP}}}_{2}}\right)$ also vary. The formation configurations are illustrated in Figure 2, and the ranges of ${\mathrm{{ST}}}_{1}$, ${\mathrm{{ST}}}_{2}$, ${\mathrm{{SP}}}_{1}$, and ${\mathrm{{SP}}}_{2}$ are shown in Table 2. In tandem formation, ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ are zero; in parallel formation, ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ are zero. In the right triangle formation, the bow of ship 2 aligns with ship 3, and the centerline of ship 1 aligns with ship 2. In the general triangle formation, the bow of ship 1 aligns with ship 3.
|
| 118 |
+
|
| 119 |
+
TABLE II. RANGE OF ${\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2},{\mathrm{{SP}}}_{1},{\mathrm{{SP}}}_{2}$
|
| 120 |
+
|
| 121 |
+
<table><tr><td>Configuration</td><td>${\mathbf{{ST}}}_{1}\left( \mathbf{m}\right)$</td><td>$\mathbf{S{T}_{2}\left( m\right) }$</td><td>${\mathbf{{SP}}}_{1}\left( \mathbf{m}\right)$</td><td>${\mathrm{{SP}}}_{2}\left( \mathrm{m}\right)$</td></tr><tr><td>Tandem</td><td>0.25-2.0</td><td>0.25-2.0</td><td>/</td><td>/</td></tr><tr><td>Parallel</td><td>/</td><td>/</td><td>0.1685-2.022</td><td>0.337-2.696</td></tr><tr><td>Right triangle</td><td>${0.25} - {1.0}$</td><td>${0.25} - {1.0}$</td><td>0.1685-0.674</td><td>0.1685-0.674</td></tr><tr><td>General triangle</td><td>${0.25} - {1.0}$</td><td>${0.25} - {1.0}$</td><td>0.1685</td><td>0.337-0.5055</td></tr></table>
|
| 122 |
+
|
| 123 |
+

|
| 124 |
+
|
| 125 |
+
Fig. 2. Illustration of formation configurations
|
| 126 |
+
|
| 127 |
+
## III. METHODOLOGY
|
| 128 |
+
|
| 129 |
+
This research uses CFD data to investigate the influence of speed and of the spacing between adjacent ships in formations. In this section, the dimensionless coefficients of the formation and the coordinate system are introduced, followed by the data analysis method, including data preparation.
|
| 130 |
+
|
| 131 |
+
## A. Dimensionless coefficients and coordinate system
|
| 132 |
+
|
| 133 |
+
The coordinate system used to describe the motion and resistance of the formation is presented in Figure 3. The space-fixed coordinate system ${\mathrm{O}}_{\mathrm{o}} - {\mathrm{X}}_{\mathrm{o}}{\mathrm{Y}}_{\mathrm{o}}$ and the ship-fixed coordinate system O-xy constitute the global coordinate system. The space-fixed system describes the motion of the formation, and the ship-fixed system describes the resistance of each ship in the formation. In the space-fixed system, the ${\mathrm{X}}_{\mathrm{o}}$ direction points to true north. In the ship-fixed system, the x direction points toward the bow of the ship, and the y direction points to the starboard side. The directions of the dimensionless resistance coefficients, covering drag and lateral force, are given in Figure 3. ${\mathrm{X}}^{\prime }$ is the dimensionless coefficient of longitudinal resistance; its direction, from bow to stern, is opposite to the x direction. ${\mathrm{Y}}^{\prime }$ is the dimensionless coefficient of lateral force; its direction, from the port side to the starboard side, agrees with the y direction. The total dimensionless longitudinal resistance coefficient ${\mathrm{X}}_{\text{total }}^{\prime }$ is obtained by summing ${\mathrm{X}}^{\prime }$ over the ships in the formation. Likewise, the total dimensionless lateral force coefficient ${Y}_{\text{total }}^{\prime }$ is obtained by summing ${Y}^{\prime }$ over the ships in the formation system. The equations for ${\mathrm{X}}_{\text{total }}^{\prime }$ and ${\mathrm{Y}}_{\text{total }}^{\prime }$ are as follows:
|
| 134 |
+
|
| 135 |
+
$$
|
| 136 |
+
{X}_{\text{total }}^{\prime } = \mathop{\sum }\limits_{{i = 1}}^{3}{X}_{i}^{\prime } \tag{1}
|
| 137 |
+
$$
|
| 138 |
+
|
| 139 |
+
$$
|
| 140 |
+
{Y}_{\text{total }}^{\prime } = \mathop{\sum }\limits_{{i = 1}}^{3}{Y}_{i}^{\prime } \tag{2}
|
| 141 |
+
$$
|
| 142 |
+
|
| 143 |
+
In the research, the fleet is assumed to sail in calm water. Therefore, the impact of wind and current is not considered.
|
| 144 |
+
|
| 145 |
+

|
| 146 |
+
|
| 147 |
+
Fig. 3. Illustration of the coordinate system
|
| 148 |
+
|
| 149 |
+
## B. Data preparation
|
| 150 |
+
|
| 151 |
+
Since the CFD simulation via STAR CCM+ V13.06 requires setting up the numerical and physical layouts, the longitudinal distances $\left( {{\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2}}\right)$ and transverse locations $\left( {{\mathrm{{SP}}}_{1},{\mathrm{{SP}}}_{2}}\right)$ mentioned in Section II only represent the geometric relationship between neighboring ships. To help the regression analysis learn the characteristics of the data, the longitudinal and transverse locations in the dataset are converted to signed values. ${\mathrm{{ST}}}_{\mathrm{i}}$ takes the positive geometric value when ship i is ahead of ship i+1 and the negative of the geometric value when it is behind ship i+1; ${\mathrm{{SP}}}_{\mathrm{i}}$ takes the positive geometric value when ship i is on ship i+1's port side and the negative of the geometric value when ship i is on ship i+1's starboard side.
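One reading of this sign convention can be sketched as follows; the helper names `signed_st`/`signed_sp` and the example distances are illustrative, not from the paper.

```python
def signed_st(distance, ship_i_ahead):
    """Signed longitudinal distance: positive when ship i is ahead of
    ship i+1, negative when it is behind."""
    return distance if ship_i_ahead else -distance

def signed_sp(distance, ship_i_on_port_side):
    """Signed transverse location: positive when ship i is on ship i+1's
    port side, negative when it is on the starboard side."""
    return distance if ship_i_on_port_side else -distance

# e.g. ship 1 half a ship length ahead of ship 2, offset to its starboard
row = {"ST1": signed_st(0.5, ship_i_ahead=True),
       "SP1": signed_sp(0.337, ship_i_on_port_side=False)}
```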
|
| 152 |
+
|
| 153 |
+
## C. Data analysis method
|
| 154 |
+
|
| 155 |
+
Figure 4 presents the steps of the regression analysis method.
|
| 156 |
+
|
| 157 |
+

|
| 158 |
+
|
| 159 |
+
Fig. 4. Flow diagram of regression analysis.
|
| 160 |
+
|
| 161 |
+
The hydrodynamic dataset of the ship formation is divided into different subsets to analyze the effects of speed and of the spacing between ships. Both the longitudinal distances and the lateral locations are considered for their impact on the total resistance of the formation system. Variations in total resistance across formations at different speeds have been observed, but the direct relationship between total resistance and speed has not yet been revealed; this relationship is sought using the tandem formation dataset. For the quantitative analysis of the speed impacts on total drag in tandem formation, the tandem formation dataset is split into subsets by ${\mathrm{{ST}}}_{1}$ distance. A correlation analysis between total resistance and speed is then performed to highlight the strength of the correlation and determine which speed criterion more effectively characterizes variations in total resistance.
|
| 162 |
+
|
| 163 |
+
Three steps are taken to quantify the impacts of longitudinal spacing and lateral locations. Firstly, the dataset is divided into six subsets based on different speeds. Each subset is further categorized into tandem formation, parallel formation, and triangle formation. After that, regression analysis is conducted on subsets of total resistance data at uniform speeds. The results will reveal if the impacts of ST and SP differ across various fleet speeds. Finally, overall functions will be defined to describe ST and SP impacts, incorporating speed variations, with coefficients estimated from the entire dataset.
|
| 164 |
+
|
| 165 |
+
After the correlation analysis of the different factors, a regression formulation for the formation system's total drag is developed with five features: speed, ${\mathrm{{ST}}}_{1}$, ${\mathrm{{ST}}}_{2}$, ${\mathrm{{SP}}}_{1}$, and ${\mathrm{{SP}}}_{2}$. Multivariate polynomial and ridge regression methods are combined to build the regression model. Polynomial regression fits non-linear relationships in data using polynomial functions; compared with linear regression, it models the non-linear characteristics of the data by introducing polynomial terms, increasing the flexibility and applicability of the model. In practice, the data have several features, and single-feature polynomial regression fits multi-feature data poorly; thus, multivariate polynomial regression is used in this study to fit the total resistance dataset of ship formations.
|
| 166 |
+
|
| 167 |
+
In practical applications of multivariate polynomial regression, the polynomial degree must be chosen judiciously. If the degree is too low, the fit may be poor; if it is too high, the model may overfit, fitting noise in the data rather than the underlying trends. To address potential overfitting and improve fitting accuracy, this study combines ridge regression with multivariate polynomial regression to build the regression model. Ridge regression is an improved least-squares estimation method that addresses multicollinearity by introducing an L2-norm penalty term, thereby enhancing model stability and generalization capability. The penalty term is $\lambda$ times the sum of the squares of all regression coefficients (where $\lambda$ is the penalty coefficient). Combining ridge regression with multivariate polynomial regression effectively controls model complexity and reduces the risk of overfitting, which is particularly beneficial when input features are highly correlated or when the condition number of the data matrix is high. Such stability helps mitigate the numerical issues that can arise in multivariate polynomial regression, enhancing the reliability of the model.
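The combination described above can be sketched with a closed-form ridge solve on multivariate polynomial features. The feature generator, the penalty value, and the synthetic target below are our assumptions, not the paper's data.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree):
    """All monomials of total degree <= `degree` over the columns of X,
    including a constant column."""
    n, d = X.shape
    cols = [np.ones(n)]
    for deg in range(1, degree + 1):
        for combo in combinations_with_replacement(range(d), deg):
            cols.append(X[:, list(combo)].prod(axis=1))
    return np.column_stack(cols)

def ridge_fit(Phi, y, lam):
    """Closed-form ridge estimate: (Phi'Phi + lam*I)^{-1} Phi'y.
    The L2 penalty lam stabilizes the ill-conditioned polynomial basis."""
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)

# synthetic stand-in for (speed, ST1, ST2, SP1, SP2) and total resistance
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 5))
y = 1.6 * X[:, 3] ** 3 + 0.81 * X[:, 3] * X[:, 1] + 0.3
Phi = poly_features(X, degree=4)
w = ridge_fit(Phi, y, lam=1e-3)
pred = Phi @ w
```

With a degree-4 basis over five features the design matrix already has 126 columns, so the ridge penalty matters even with a few hundred samples.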
|
| 168 |
+
|
| 169 |
+
## IV. RESULTS AND DISCUSSION
|
| 170 |
+
|
| 171 |
+
In this section, the impacts of speed, longitudinal spacing, and transverse location are analyzed, and the final regression model is estimated.
|
| 172 |
+
|
| 173 |
+
## A. Variation of drag due to speed
|
| 174 |
+
|
| 175 |
+
To estimate the relationship between speed and total resistance, the total resistance of the formation versus speed is provided in Figures 5 to 9. These plots depict the relationship between speed and the total resistance of tandem formation under different longitudinal spacings ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$. The combined resistance experienced by three individual ships sailing alone at various speeds is also provided.
|
| 176 |
+
|
| 177 |
+
The blue dots in the graph represent the total resistance experienced by the formation system, while the red line indicates the combined resistance of three individual ships sailing alone at different speeds. The red line is marked to determine whether a three-ship tandem formation achieves a resistance gain compared to three ships sailing individually. When ${\mathrm{{ST}}}_{1}$ is set to ${0.25}{\mathrm{\;L}}_{\mathrm{{OA}}}$ or ${2.0}{\mathrm{\;L}}_{\mathrm{{OA}}}$, the resistance of the 'WillLead I' ships decreases as ship speed increases, both for ships sailing individually and for ships sailing in formation. At the same time, the formation system benefits from resistance gains, with the maximum gain occurring at a speed of ${0.212}\mathrm{\;m}/\mathrm{s}$ and reaching a resistance reduction of up to ${4.85}\%$.
|
| 178 |
+
|
| 179 |
+
When ${\mathrm{{ST}}}_{1}$ is set to ${0.5}{\mathrm{\;L}}_{\mathrm{{OA}}}$, the total resistance observed while sailing in formation decreases as speed increases. However, the formation system gains no resistance benefit; instead, it experiences resistance amplification, with the maximum increase reaching ${119.3}\%$.
|
| 180 |
+
|
| 181 |
+
When ${\mathrm{{ST}}}_{1}$ is set to ${1.0}{\mathrm{\;L}}_{\mathrm{{OA}}}$ or ${1.5}{\mathrm{\;L}}_{\mathrm{{OA}}}$, the formation system experiences resistance gains. However, as ship speed increases, the resistance benefits gradually decrease. Additionally, when ${\mathrm{{ST}}}_{2}$ is smaller than ${\mathrm{{ST}}}_{1}$, the resistance benefits of the formation system nearly disappear as the ship speed increases to 0.424 m/s.
|
| 182 |
+
|
| 183 |
+

|
| 184 |
+
|
| 185 |
+
(c) ${\mathrm{{ST}}}_{2} = {1.5}{\mathrm{L}}_{\mathrm{{OA}}}$
|
| 186 |
+
|
| 187 |
+

|
| 188 |
+
|
| 189 |
+
Fig. 5. Variation of resistance coefficient with speed when ${\mathrm{{ST}}}_{1} = {0.25}{\mathrm{\;L}}_{\mathrm{{OA}}}$
|
| 190 |
+
|
| 191 |
+

|
| 192 |
+
|
| 193 |
+

|
| 194 |
+
|
| 195 |
+
Fig. 6. Variation of resistance coefficient with speed when ${\mathrm{{ST}}}_{1} = {0.5}{\mathrm{\;L}}_{\mathrm{{OA}}}$
|
| 196 |
+
|
| 197 |
+

|
| 198 |
+
|
| 199 |
+

|
| 200 |
+
|
| 201 |
+
Fig. 7. Variation of resistance coefficient with speed when ${\mathrm{{ST}}}_{1} = {1.0}{\mathrm{L}}_{\mathrm{{OA}}}$
|
| 202 |
+
|
| 203 |
+
In tandem formation, the transverse distances ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ are zero, so the lateral forces do not affect the total resistance of the formation system. A correlation analysis between total resistance and the speed of the formation is conducted; the results are shown in Table 3. All correlation coefficients are significant at the 0.01 level (two-tailed).
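A plain Pearson correlation of the kind used here can be computed directly. The speed/resistance pairs below are illustrative stand-ins, and the two-tailed significance test (which requires a t-distribution) is omitted from the sketch.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

# illustrative subset: total resistance falling as formation speed rises
speed = [0.212, 0.283, 0.354, 0.424]      # m/s
x_total = [0.052, 0.047, 0.043, 0.040]    # dimensionless, made-up values
r = pearson_r(speed, x_total)             # strongly negative correlation
```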
|
| 204 |
+
|
| 205 |
+

|
| 206 |
+
|
| 207 |
+

|
| 208 |
+
|
| 209 |
+
Fig. 8. Variation of resistance coefficient with speed when ${\mathrm{{ST}}}_{1} = {1.5}{\mathrm{L}}_{\mathrm{{OA}}}$
|
| 210 |
+
|
| 211 |
+

|
| 212 |
+
|
| 213 |
+

|
| 214 |
+
|
| 215 |
+
Fig. 9. Variation of resistance coefficient with speed when ${\mathrm{{ST}}}_{1} = {2.0}{\mathrm{\;L}}_{\mathrm{{OA}}}$
|
| 216 |
+
|
| 217 |
+
## B. Quantification of longitudinal spacing and transverse location
|
| 218 |
+
|
| 219 |
+
This section presents the regression analysis results for the spacing between adjacent ships in formations. The results reveal the impact of the spacing between adjacent ships $\left( {{\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2},{\mathrm{{SP}}}_{1},{\mathrm{{SP}}}_{2}}\right)$ on total resistance. In tandem formation, the transverse locations ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ are set to zero, and both ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ vary from ${0.25}{\mathrm{L}}_{\mathrm{{OA}}}$ to ${2.0}{\mathrm{L}}_{\mathrm{{OA}}}$, so there is no need to standardize the coefficients of ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ when calculating the coefficients for the tandem formation subset.
|
| 220 |
+
|
| 221 |
+
Similarly, ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ are set to zero in parallel formation, and the effect of standardizing the coefficients of ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ before calculating the coefficients for the parallel formation subset is insignificant. In the triangle formation, however, both longitudinal distance and transverse spacing exist between neighboring ships, and the longitudinal distance is much larger than the transverse spacing, so the unstandardized coefficients cannot be compared directly. The standardized coefficients, derived from standardized regression analysis, are scaled so that each variable has unit variance. Thus, given the need for standardized correlation analysis under the triangular configurations, standardized regression analysis is adopted for all conditions to unify the correlation coefficient analysis.
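Standardized coefficients amount to z-scoring the predictors and the response before an ordinary least-squares fit, so that spacings measured on very different scales become comparable. In the sketch below, the synthetic spacings and the weights 0.43 and 0.71 are illustrative assumptions.

```python
import numpy as np

def standardized_coeffs(X, y):
    """OLS coefficients after z-scoring predictors and response;
    centred data need no intercept term."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta

rng = np.random.default_rng(1)
st = rng.uniform(0.25, 1.0, 100)        # longitudinal distance (larger scale)
sp = rng.uniform(0.1685, 0.674, 100)    # transverse spacing (smaller scale)
# response built from standardized spacings with known weights
y = 0.43 * (st - st.mean()) / st.std() + 0.71 * (sp - sp.mean()) / sp.std()
beta = standardized_coeffs(np.column_stack([st, sp]), y)
```

The fitted standardized coefficients preserve the relative weights of the two spacings even though the raw values differ in scale.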
|
| 222 |
+
|
| 223 |
+
The whole dataset of the total resistance of tandem formation is split into subsets with the same speed. The coefficients of ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ for the total drag variable in each subset are presented in Fig. 10. The results clarify whether ${\mathrm{{ST}}}_{1}$ or ${\mathrm{{ST}}}_{2}$ significantly impacts total resistance in this multivariate regression model.
|
| 224 |
+
|
| 225 |
+
Two comparisons are made to interpret the estimated standardized coefficients. For tandem formation within the same subset, the weights of ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ are compared. The impact of ${\mathrm{{ST}}}_{1}$ on total resistance is more significant than that of ${\mathrm{{ST}}}_{2}$ .
|
| 226 |
+
|
| 227 |
+
The other comparison analyzes the coefficients across different speed groups, revealing how the external impacts vary with speed. This analysis shows distinct trends in the effects of ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ on total resistance as speed increases. The correlation coefficient of ${\mathrm{{ST}}}_{2}$ ranges between -0.083 and -0.075, indicating a negative correlation between ${\mathrm{{ST}}}_{2}$ and total resistance in tandem formation: as ${\mathrm{{ST}}}_{2}$ increases, total resistance tends to decrease. Increasing ${\mathrm{{ST}}}_{2}$ can thus help the formation system reduce total resistance, although its influence on total resistance is limited. The correlation coefficient of ${\mathrm{{ST}}}_{1}$ ranges between 0.42 and 0.435, indicating a positive correlation between ${\mathrm{{ST}}}_{1}$ and total resistance in tandem formation: as ${\mathrm{{ST}}}_{1}$ increases, total resistance tends to increase, so decreasing ${\mathrm{{ST}}}_{1}$ can help the formation system reduce total resistance, and the influence of ${\mathrm{{ST}}}_{1}$ on total resistance is significant. Thus, choosing ${\mathrm{{ST}}}_{1}$ carefully is more effective than selecting ${\mathrm{{ST}}}_{2}$ for obtaining total resistance benefits in tandem formation.
|
| 228 |
+
|
| 229 |
+

|
| 230 |
+
|
| 231 |
+
Fig. 10. The standardized coefficients of ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ on total resistance in tandem formation.
|
| 232 |
+
|
| 233 |
+
The whole dataset of the total resistance of parallel formation is split into subsets with the same speed. The coefficients of ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ for total resistance in each subset are presented in Fig. 11.
|
| 234 |
+
|
| 235 |
+
Examining the standardized coefficients for parallel formation within the same subset allows the effects of ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ to be compared. In parallel formation, both ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ have a significant impact on total resistance, with the impact of ${\mathrm{{SP}}}_{1}$ slightly higher than that of ${\mathrm{{SP}}}_{2}$. Controlling the lateral spacing ${\mathrm{{SP}}}_{1}$ between ship 1 and ship 2 is therefore more effective for gaining resistance benefits than controlling the lateral spacing ${\mathrm{{SP}}}_{2}$ between ship 2 and ship 3. It can also be observed that the impacts of both ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ on total resistance undulate as speed varies. The correlation coefficient of ${\mathrm{{SP}}}_{1}$ ranges between 0.823 and 0.844, indicating a positive correlation between ${\mathrm{{SP}}}_{1}$ and total resistance in parallel formation: as ${\mathrm{{SP}}}_{1}$ increases, the resistance benefits tend to decrease. The correlation coefficient of ${\mathrm{{SP}}}_{2}$ varies from 0.700 to 0.722, indicating that ${\mathrm{{SP}}}_{2}$ is also positively correlated with total resistance in parallel formation, so the resistance benefits likewise tend to decrease as ${\mathrm{{SP}}}_{2}$ increases.
Fig. 11. The standardized coefficients of ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ on total resistance in parallel formation.
The whole data set of the total resistance of right triangle formation is split into different subsets with the same speed. The coefficients of ST and SP for total resistance in each subset are presented in Fig. 12. Analyzing the standardized coefficients for right triangle formation within the same subset reveals that the impact of ST is less significant than that of SP. Besides, the impacts of both ST and SP on total resistance are positive. It can also be observed that the effect of ST on total resistance changes more gradually with speed than the impact of SP. The correlation coefficient of ST remains at about 0.43, nearly unchanged, while the correlation coefficient of SP varies from 0.70 to 0.72, similar to the standardized correlation coefficient of ${\mathrm{{SP}}}_{2}$ in parallel formation.
Regression models have been developed to quantitatively assess the effects of speed, ST, and SP on total resistance for tandem, parallel, and triangle formations. This paper presents the final regression models established using the complete dataset. Multivariate polynomial and ridge regression methods are combined to build the regression model. Due to the limited sample size, k-fold cross-validation was employed to enhance the robustness of the regression model.
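The pipeline just described (multivariate polynomial features, a ridge penalty, and k-fold cross-validation) can be sketched in plain NumPy. Everything below is illustrative: the data are synthetic, and the variable names merely mirror the paper's five features (U, ST1, ST2, SP1, SP2).

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree):
    """Expand the columns of X into all monomials up to the given degree."""
    n, d = X.shape
    cols = [np.ones(n)]  # constant (bias) term
    for p in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), p):
            cols.append(np.prod(X[:, idx], axis=1))
    return np.column_stack(cols)

def ridge_fit(A, y, lam):
    """Closed-form ridge regression: w = (A^T A + lam*I)^-1 A^T y."""
    k = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ y)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 5))   # synthetic stand-in for [U, ST1, ST2, SP1, SP2]
y = 0.6 - 0.5 * X[:, 3] + 0.3 * X[:, 0] + 0.05 * rng.standard_normal(60)

# k-fold cross-validation to compare ridge penalties on a small sample
folds = np.array_split(rng.permutation(60), 5)
for lam in (0.01, 0.1, 1.0):
    errs = []
    for i in range(5):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(5) if j != i])
        w = ridge_fit(poly_features(X[train], 4), y[train], lam)
        pred = poly_features(X[test], 4) @ w
        errs.append(np.mean((pred - y[test]) ** 2))
    print(f"lambda={lam}: CV MSE={np.mean(errs):.4f}")
```

With 48 training rows and 126 polynomial terms the normal equations are rank-deficient, which is exactly why the positive ridge penalty is needed here.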
The 4th-order regression function is listed as equation (3):
$$
\begin{aligned}
X_{\text{total}} = {} & 0.01SP_1^4 - 0.13SP_1^3SP_2 + 0.81SP_1^3ST_1 + 0.81SP_1^3ST_2 + 1.6SP_1^3 + 0.12SP_1^2SP_2^2 + 0.6SP_1^2SP_2ST_1 + 0.6SP_1^2SP_2ST_2 \\
& - 0.01SP_1^2SP_2U + 0.98SP_1^2SP_2 + 2.22SP_1^2ST_1^2 - 0.12SP_1^2ST_1ST_2 + 0.03SP_1^2ST_1U + 0.26SP_1^2ST_1 - 0.19SP_1^2ST_2^2 \\
& + 0.01SP_1^2ST_2U + 0.26SP_1^2ST_2 + 0.05SP_1^2U - 1.28SP_1^2 - 0.24SP_1SP_2^3 + 0.85SP_1SP_2^2 + 2.01SP_1SP_2ST_1^2 \\
& - 0.52SP_1SP_2ST_1ST_2 + 0.02SP_1SP_2ST_1U + 0.45SP_1SP_2ST_1 - 0.59SP_1SP_2ST_2^2 + 0.45SP_1SP_2ST_2 + 0.04SP_1SP_2U \\
& - 0.74SP_1SP_2 + 3.0SP_1ST_1^3 - 1.11SP_1ST_1^2ST_2 + 0.08SP_1ST_1^2U - 2.08SP_1ST_1^2 - 1.19SP_1ST_1ST_2^2 - 0.06SP_1ST_1ST_2U \\
& + 0.98SP_1ST_1ST_2 - 0.02SP_1ST_1U - 0.29SP_1ST_1 - 1.29SP_1ST_2^3 - 0.07SP_1ST_2^2U + 1.06SP_1ST_2^2 + 0.01SP_1ST_2U \\
& - 0.29SP_1ST_2 - 0.02SP_1U - 0.45SP_1 + 0.1SP_2^4 + 0.27SP_2^3ST_1 + 0.27SP_2^3ST_2 + 0.02SP_2^3U + 0.03SP_2^3 + 2.41SP_2^2ST_1^2 \\
& - 0.33SP_2^2ST_1ST_2 + 0.06SP_2^2ST_1U + 0.21SP_2^2ST_1 - 0.4SP_2^2ST_2^2 + 0.04SP_2^2ST_2U + 0.21SP_2^2ST_2 + 0.02SP_2^2U \\
& - 0.35SP_2^2 + 3.26SP_2ST_1^3 - 1.18SP_2ST_1^2ST_2 + 0.23SP_2ST_1^2U - 2.6SP_2ST_1^2 - 1.27SP_2ST_1ST_2^2 + 0.09SP_2ST_1ST_2U \\
& + 0.7SP_2ST_1ST_2 + 0.01SP_2ST_1U^2 + 0.04SP_2ST_1U - 0.06SP_2ST_1 - 1.38SP_2ST_2^3 + 0.08SP_2ST_2^2U + 0.8SP_2ST_2^2 \\
& + 0.01SP_2ST_2U^2 + 0.07SP_2ST_2U - 0.06SP_2ST_2 + 0.02SP_2U^2 - 0.14SP_2U + 0.18SP_2 + 2.1ST_1^4 - 0.68ST_1^3ST_2 \\
& + 0.12ST_1^3U - 4.17ST_1^3 - 0.75ST_1^2ST_2^2 - 0.02ST_1^2ST_2U + 1.18ST_1^2ST_2 - 0.08ST_1^2U + 2.5ST_1^2 - 0.76ST_1ST_2^3 \\
& - 0.02ST_1ST_2^2U + 1.29ST_1ST_2^2 + 0.09ST_1ST_2U - 1.48ST_1ST_2 + 0.01ST_1U^2 + 0.01ST_1U - 0.17ST_1 - 0.83ST_2^4 \\
& - 0.03ST_2^3U + 1.42ST_2^3 + 0.11ST_2^2U - 1.6ST_2^2 - 0.02ST_2U - 0.17ST_2 - 0.02U^4 + 0.01U^3 + 0.02U^2 + 0.15U + 0.62
\end{aligned} \tag{3}
$$
The estimation results of the regression analysis are shown in Table IV. According to these results, about ${98.2}\%$ of the variance in the total resistance of the formation systems can be explained by the fleet speed, ${\mathrm{{ST}}}_{1}$, ${\mathrm{{ST}}}_{2}$, ${\mathrm{{SP}}}_{1}$, and ${\mathrm{{SP}}}_{2}$ (${\mathrm{R}}^{2}$ is 0.982 for the whole dataset). Besides, speed has an estimate of 0.273, indicating a positive but relatively small effect on the dependent variable.
The standard error is 0.836, which is relatively large and suggests high uncertainty in the estimate. The t-statistic is 0.327, falling below common critical values (such as 1.96), indicating that the effect of this feature may not be significant. The standardized estimate of 0.327 aligns with the t-statistic, reinforcing that the standardized impact is also relatively modest. Feature ${\mathrm{{ST}}}_{1}$ has an estimate of -0.171, reflecting a negative effect on the dependent variable. With a standard error of 0.157, the precision of this estimate is relatively high. However, the t-statistic of -1.089 is below common critical values, suggesting that the impact of ${\mathrm{{ST}}}_{1}$ might also be nonsignificant. The standardized estimate of -1.089 confirms the direction of the effect but similarly indicates that its significance is weak. Feature ${\mathrm{{ST}}}_{2}$ has an estimate of -0.167, suggesting a negative effect on the dependent variable. The standard error is 0.157, indicating high precision in the estimate. The t-statistic of -1.069 implies that this feature's impact may not be significant, and the standardized estimate of -1.069 supports the direction of the effect but demonstrates that the impact is not substantial. Feature ${\mathrm{{SP}}}_{1}$ is estimated at -0.501, indicating a strong negative impact on the dependent variable.
TABLE IV. ESTIMATION RESULTS OF THE FINAL REGRESSION MODEL
<table><tr><td/><td>${\mathbf{R}}^{2}$</td><td>F-stat</td><td>$\mathbf{{Estimate}}$</td><td>Std. error</td><td>t-stat</td></tr><tr><td/><td>0.982</td><td>168.045</td><td>0.603</td><td>0.089</td><td>6.759</td></tr><tr><td>${\mathrm{C}}_{\mathrm{U}}$</td><td>/</td><td>/</td><td>0.273</td><td>0.836</td><td>0.327</td></tr><tr><td>${\mathrm{C}}_{\mathrm{{ST}}1}$</td><td>/</td><td>/</td><td>-0.171</td><td>0.157</td><td>-1.09</td></tr><tr><td>${\mathrm{C}}_{\mathrm{{ST}}2}$</td><td>/</td><td>/</td><td>-0.167</td><td>0.157</td><td>-1.07</td></tr><tr><td>${\mathrm{C}}_{\mathrm{{SP}}1}$</td><td>/</td><td>/</td><td>-0.501</td><td>0.156</td><td>-3.205</td></tr><tr><td>${\mathrm{C}}_{\mathrm{{SP}}2}$</td><td>/</td><td>/</td><td>0.128</td><td>0.159</td><td>0.806</td></tr></table>
The standard error is 0.156, which is relatively small, suggesting high accuracy in the estimate. The t-statistic of -3.205 exceeds common critical values in magnitude, demonstrating that the effect of ${\mathrm{{SP}}}_{1}$ is significant. The standardized estimate of -3.205 confirms that the impact remains strong even after standardization. Feature ${\mathrm{{SP}}}_{2}$ has an estimate of 0.128, showing a positive but small effect on the dependent variable. The standard error is 0.159, which is relatively large, reflecting higher uncertainty in the estimate. The t-statistic of 0.806 is below common critical values, indicating that the effect of ${\mathrm{{SP}}}_{2}$ is insignificant. The standardized estimate of 0.806 suggests that the impact is also small after standardization.
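These significance checks follow directly from dividing each estimate by its standard error and comparing the result against the 1.96 critical value. A minimal sketch using the Table IV numbers (small rounding differences against the table are expected, since the published t-statistics were computed from unrounded estimates):

```python
# Estimates and standard errors from Table IV, keyed by feature name.
features = {
    "U":   (0.273, 0.836),
    "ST1": (-0.171, 0.157),
    "ST2": (-0.167, 0.157),
    "SP1": (-0.501, 0.156),
    "SP2": (0.128, 0.159),
}

for name, (est, se) in features.items():
    t = est / se  # t-statistic = estimate / standard error
    verdict = "significant" if abs(t) > 1.96 else "not significant"
    print(f"{name}: t = {t:.3f} -> {verdict}")
```

Only SP1 clears the 1.96 threshold, matching the discussion above.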
Fig. 12. The standardized coefficients of ST and SP on total resistance in triangle formation.
## V. CONCLUSION
This paper established a regression model to analyze the effects of speed, longitudinal distances (${\mathrm{{ST}}}_{1}$, ${\mathrm{{ST}}}_{2}$), and transverse locations (${\mathrm{{SP}}}_{1}$, ${\mathrm{{SP}}}_{2}$) on the total resistance of ship formations derived from CFD data. The variation of total resistance in tandem formation due to speed can be observed, and the correlation analysis shows a strong correlation between speed and total resistance. The impacts of longitudinal spacing and transverse location on total resistance vary across formation configurations. For tandem formation, ${\mathrm{{ST}}}_{1}$ has a more significant influence on total resistance than ${\mathrm{{ST}}}_{2}$. For parallel formation, the impacts of both ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ fluctuate slightly with growing ship speed. For triangle formation, the impact of SP on total resistance shows a strong positive correlation, while the impact of ST is comparatively weak. The regression analysis results revealed that about ${98.2}\%$ of the variance in the total resistance of the various ship formation systems was explained by the formation speed, ${\mathrm{{ST}}}_{1}$, ${\mathrm{{ST}}}_{2}$, ${\mathrm{{SP}}}_{1}$, and ${\mathrm{{SP}}}_{2}$.
This paper investigates the impact of different factors on the total resistance of formations. The estimation results indicate that more CFD data should be incorporated into the regression analysis, and more intelligent methods can be explored for regression analysis in future work.
## ACKNOWLEDGMENT
The work presented in this study is financially supported by the National Natural Science Foundation of China under grants 52271364, 52101402, and 52271367.
## REFERENCES
[1] Z.-M. Yuan, M. Chen, L. Jia, C. Ji, and A. Incecik, "Wave-riding and wave-passing by ducklings in formation swimming," Journal of Fluid Mechanics, vol. 928, 2021.
[2] Chen Bo and Wu Jiankang, "Wave Interactions Generated by Multi-Ship Unite Moving in Shallow Water," Chinese Journal of Applied Mechanics, vol. 22, no. 2, pp. 159-163+329, 2005.
[3] Zheng Yi and Li Jian-bo, "An investigation into the possibility of resistance reduction for multiple ships in given formations," Ship Science and Technology, vol. 42, no. 17, pp. 12-16, 2020.
[4] Y. Qin, C. Yao, Y. Zheng, and J. Huang, "Study on hydrodynamic performance of a conceptional sea-train," ASME, Hamburg, Germany, Jun. 5-10, 2022.
[5] Z. Liu, C. Dai, X. Cui, Y. Wang, H. Liu, and B. Zhou, "Hydrodynamic Interactions between Ships in a Fleet," Journal of Marine Science and Engineering, vol. 12, no. 1, Jan. 2024.
[6] Y. He, J. Mou, L. Chen, Q. Zeng, Y. Huang, P. Chen, and S. Zhang, "Will sailing in formation reduce energy consumption? Numerical prediction of resistance for ships in different formation configurations," Applied Energy, vol. 312, Apr. 2022.
[7] Y. He, L. Chen, J. Mou, Q. Zeng, Y. Huang, P. Chen, and S. Zhang, "Ship Emission Reduction via Energy-Saving Formation," IEEE Transactions on Intelligent Transportation Systems, vol. 25, no. 3, pp. 2599-2614, 2024.
[8] F. Jaffar, T. Farid, M. Sajid, Y. Ayaz, and M. J. Khan, "Prediction of Drag Force on Vehicles in a Platoon Configuration Using Machine Learning," IEEE Access, vol. 8, pp. 201823-201834, 2020.
[9] D. Zhang, L. Chao, and G. Pan, "Analysis of hydrodynamic interaction impacts on a two-AUV system," Ships and Offshore Structures, vol. 14, no. 1, pp. 23-34, 2018.
[10] L. Zou and L. Larsson, "Numerical predictions of ship-to-ship interaction in shallow water," Ocean Engineering, vol. 72, pp. 386-402, Nov. 2013.
[11] L. Zou, Z.-j. Zou, and Y. Liu, "CFD-based predictions of hydrodynamic forces in ship-tug boat interactions," Ships and Offshore Structures, vol. 14, pp. S300-S310, Oct. 2019.
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/FjSPgP2m1X/Initial_manuscript_tex/Initial_manuscript.tex
§ IMPACTS OF SPEED AND SPACING ON RESISTANCE IN SHIP FORMATIONS

Linying Chen, State Key Laboratory of Maritime Technology and Safety, School of Navigation, Wuhan University of Technology, Wuhan, China. LinyingChen@whut.edu.cn

Linhao Xue, School of Navigation, Wuhan University of Technology, Wuhan, China. xue_lh@whut.edu.cn

Yangying He, School of Intelligent Sports Engineering, Wuhan Sports University, Wuhan, China. yangyinghe@whsu.edu.cn

Pengfei Chen, State Key Laboratory of Maritime Technology and Safety, School of Navigation, Wuhan University of Technology, Wuhan, China. Chenpf@whut.edu.cn

Junmin Mou, State Key Laboratory of Maritime Technology and Safety, School of Navigation, Wuhan University of Technology, Wuhan, China. Moujm@whut.edu.cn

Yamin Huang, State Key Laboratory of Maritime Technology and Safety, School of Navigation, Wuhan University of Technology, Wuhan, China. YaminHuang@whut.edu.cn

Abstract-Sailing in formation offers drag-reduction benefits. In current studies of the hydrodynamic analysis of ship formations, the impacts of speed and spacing between adjacent ships on total resistance are seldom considered. To estimate the weight of different factors on total resistance variation in formation, the impacts of speed, longitudinal distances, and transverse locations on the total resistance of formations are investigated by analyzing hydrodynamic data for tandem, parallel, and triangle formations. The relation between resistance variation and speed is revealed. The regression analysis results on different formations indicate the differences between longitudinal spacing and transverse impacts. The regression formulation can be adopted to predict total resistance in formations.
Keywords-drag reduction, formation, regression analysis
§ I. INTRODUCTION
Nowadays, saving energy, reducing atmospheric pollutant emissions, and lowering carbon emissions are key concerns in the shipping industry. Increasingly, scholars are focusing on reducing ship resistance to save energy. Inspired by observing and analyzing duck flock swimming behavior [1], scholars have drawn insights from biomimicry and begun researching drag reduction through ship formations.
Chen et al. [2] studied the wave interference characteristics of two ships sailing in parallel and following each other, and of a three-ship "V" formation, in shallow water using the bare hull of Series 60. The results indicate that when the two ships follow each other, the wave resistance for both ships decreases. In a three-ship "V" formation, the waves from the trailing ship provide additional thrust, significantly reducing the wave resistance of the leading ship. However, the additional reactive force from the wave crests of the leading ship increases the resistance of the trailing ship. Zheng et al. [3] used the second-order source method based on the Dawson method to calculate the wave resistance of four Wigley ships in three common formations: single-ship, two-ship formation, and three-body ship formation. They identified optimal ship formations for drag reduction in different speed ranges, and adjusting the relative positions of the ships in the Wigley formation can achieve drag reduction. Qin Yan et al. [4] first performed a numerical analysis of the drag characteristics of a single Wigley ship at different speeds. They compared the results with the hydrodynamic performance of a "train" formation at various longitudinal spacings. The analysis showed that, under all conditions, the total drag of the train formation was about ${10}\%$ to ${20}\%$ less than that of a single ship. At lower speeds, reducing the longitudinal distance achieves drag reduction, but at higher speeds, increasing the longitudinal spacing helps maintain drag reduction. Liu et al. [5] used CFD to study the drag reduction effects of a KCS ship model in a twin-ship "train" formation at different speeds, showing that the drag reduction for the following ship could reach up to 24.3%. He et al. [6][7] focused on the hydrodynamic performance of three-ship formations at low speeds, analyzing linear, parallel, and triangular formations with equal and unequal spacing. The optimal ship formation configuration for drag reduction under different formations was ultimately identified, and a regression model was also developed to predict total resistance in different formation systems. Meanwhile, machine learning methods have been applied to vehicle platooning problems to predict the drag of each vehicle in platoons of varying sizes (from 2 to 4) [8]. In summary, sailing in formation has the potential for drag reduction. Existing work [9][10][11] mainly focuses on observing drag reduction benefits at different speeds and formation configurations. However, the impact of individual factors on the resistance reduction of ship formations remains unclear. Further research is needed to understand how different factors affect the total drag in ship formations.
Therefore, this paper aims to clarify the direct relationship between speed, spacing, and total resistance in ship formations. The primary innovation of this paper lies in employing regression analysis to quantitatively assess the ship formation CFD database, aiming to determine the extent to which speed and distance influence the resistance encountered during ship formation navigation.
The main contributions of the paper are as follows:
(This work is financially supported by the National Natural Science Foundation of China.)
* Quantitative analysis and estimation of the effects of factors (speed, longitudinal distances, and transverse locations) on total resistance in formations are provided.
* A regression model is established to predict the total resistance of the multi-ship formation system.
Subsequently, the datasets investigated in our research are introduced in Section II. Section III explains the proposed research approach. The analysis results for the impacts of different factors are presented, and the regression model is built, in Section IV. Finally, Section V concludes with the main findings and recommendations for further research.
§ II. DATA DESCRIPTION
§ A. SOURCE OF DATA
In this research, the dataset consists entirely of CFD simulation data. All simulations are performed with the commercial software STAR-CCM+ V13.06. Verification and validation were carried out before the systematic simulations, so the accuracy of the CFD results is ensured.
§ B. STUDIED SHIP IN DATASET
In our CFD simulation conditions, the three-ship homogeneous formation is composed of three identical bare hulls of the full-swing tugboat 'WillLead I'. The parameters of the ship are shown in Table I, and the side view is presented in Figure 1.
Fig. 1. Side view of the bare hull of 'Willlead I'
TABLE I. PARAMETERS OF 'WILL LEAD I '
<table><tr><td/><td>$\lambda$</td><td>${\mathrm{L}}_{\mathrm{{OA}}}$ (m)</td><td>${\mathrm{L}}_{\mathrm{{PP}}}$ (m)</td><td>B (m)</td><td>T (m)</td><td>${\mathrm{A}}_{\mathrm{S}}$ (m²)</td></tr><tr><td>Full scale</td><td>1.00</td><td>34.95</td><td>30.00</td><td>10.50</td><td>4.00</td><td>432.41</td></tr><tr><td>Model scale</td><td>17.475</td><td>2</td><td>1.72</td><td>0.674</td><td>0.211</td><td>0.672</td></tr></table>
§ C. DATA COMPOSITION
The dataset comprises CFD simulation results in four different formation configurations: tandem formation, parallel formation, right triangle formation, and general triangle formation. Besides, the longitudinal distances $\left( {{\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2}}\right)$ and transverse locations $\left( {{\mathrm{{SP}}}_{1},{\mathrm{{SP}}}_{2}}\right)$ differ between cases. The formation configurations are illustrated in Figure 2, and the ranges of ${\mathrm{{ST}}}_{1}$, ${\mathrm{{ST}}}_{2}$, ${\mathrm{{SP}}}_{1}$, and ${\mathrm{{SP}}}_{2}$ are shown in Table II. In tandem formation, ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ equal zero; in parallel formation, ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ equal zero. In the right triangle formation, the bow of ${\mathrm{{ship}}}_{2}$ aligns with ${\mathrm{{ship}}}_{3}$, and the centerline of ${\mathrm{{ship}}}_{1}$ aligns with ${\mathrm{{ship}}}_{2}$. In the general triangle formation, the bow of ${\mathrm{{ship}}}_{1}$ aligns with ${\mathrm{{ship}}}_{3}$.
TABLE II. RANGE OF ${\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2},{\mathrm{{SP}}}_{1},{\mathrm{{SP}}}_{2}$
<table><tr><td>Configuration</td><td>${\mathrm{{ST}}}_{1}$ (m)</td><td>${\mathrm{{ST}}}_{2}$ (m)</td><td>${\mathrm{{SP}}}_{1}$ (m)</td><td>${\mathrm{{SP}}}_{2}$ (m)</td></tr><tr><td>Tandem</td><td>0.25-2.0</td><td>0.25-2.0</td><td>/</td><td>/</td></tr><tr><td>Parallel</td><td>/</td><td>/</td><td>0.1685-2.022</td><td>0.337-2.696</td></tr><tr><td>Right triangle</td><td>0.25-1.0</td><td>0.25-1.0</td><td>0.1685-0.674</td><td>0.1685-0.674</td></tr><tr><td>General triangle</td><td>0.25-1.0</td><td>0.25-1.0</td><td>0.1685</td><td>0.337-0.5055</td></tr></table>
Fig. 2. Illustration of formation configurations
§ III. METHODOLOGY
This research uses CFD data to investigate the influence of speed and of the spacing between adjacent ships in formations. This section introduces the dimensionless coefficients of the formation and the coordinate system, and then describes the data analysis method, including data preparation.
§ A. DIMENSIONLESS COEFFICIENTS AND COORDINATE SYSTEM
The coordinate system used to describe the motion and resistance of the formation is presented in Figure 3. The space-fixed coordinate system ${\mathrm{O}}_{\mathrm{o}} - {\mathrm{X}}_{\mathrm{o}}{\mathrm{Y}}_{\mathrm{o}}$ and the ship-fixed coordinate system O-xy constitute the global coordinate system. The space-fixed coordinate system describes the motion of the formation, and the ship-fixed coordinate system describes the resistance of each ship in the formation. In the space-fixed coordinate system, the ${\mathrm{X}}_{\mathrm{o}}$ direction points to true north. In the ship-fixed coordinate system, the $\mathrm{x}$ direction points toward the bow of the ship, and the $y$ direction points to the starboard side. The directions of the dimensionless resistance coefficients, including drag and lateral force, are shown in Figure 3. ${\mathrm{X}}^{\prime }$ is the dimensionless coefficient of longitudinal resistance; its direction, from bow to stern, is opposite to the $\mathrm{x}$ direction. ${\mathrm{Y}}^{\prime }$ is the dimensionless coefficient of lateral force; its direction, from the port side to the starboard side, agrees with the $y$ direction. The total dimensionless longitudinal resistance coefficient ${\mathrm{X}}_{\text{total}}^{\prime }$ is obtained by summing ${\mathrm{X}}^{\prime }$ over the ships in the formation. In a similar vein, the total dimensionless lateral force coefficient ${Y}_{\text{total}}^{\prime }$ is obtained by summing ${Y}^{\prime }$ over the ships in the formation system. The equations for ${\mathrm{X}}_{\text{total}}^{\prime }$ and ${Y}_{\text{total}}^{\prime }$ are as follows:
$$
{X}_{\text{total}}^{\prime } = \mathop{\sum }\limits_{{i = 1}}^{3}{X}_{i}^{\prime } \tag{1}
$$

$$
{Y}_{\text{total}}^{\prime } = \mathop{\sum }\limits_{{i = 1}}^{3}{Y}_{i}^{\prime } \tag{2}
$$
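Equations (1) and (2) are plain sums of the per-ship coefficients. As a trivial numeric sketch (the per-ship values below are invented purely for illustration, not taken from the paper's data):

```python
# Hypothetical per-ship dimensionless coefficients for a three-ship formation.
X_prime = [0.012, 0.009, 0.011]   # longitudinal resistance, one entry per ship
Y_prime = [0.004, -0.001, 0.002]  # lateral force, one entry per ship

X_total = sum(X_prime)  # equation (1)
Y_total = sum(Y_prime)  # equation (2)
print(X_total, Y_total)
```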
In the research, the fleet is assumed to sail in calm water. Therefore, the impact of wind and current is not considered.
Fig. 3. Illustration of the coordinate system
§ B. DATA PREPARATION
Since the CFD simulation via STAR-CCM+ V13.06 requires setting up the numerical and physical layouts, the longitudinal distances $\left( {{\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2}}\right)$ and transverse locations $\left( {{\mathrm{{SP}}}_{1},{\mathrm{{SP}}}_{2}}\right)$ mentioned in Section II only represent the geometric relationship between neighboring ships. To facilitate learning the characteristics of the data during the regression analysis, the longitudinal and transverse locations in the dataset are rearranged into signed values. ${\mathrm{{ST}}}_{\mathrm{i}}$ takes the geometric value when ${\mathrm{{ship}}}_{\mathrm{i}}$ is in front of ${\mathrm{{ship}}}_{\mathrm{i}+1}$, and the opposite of the geometric value when it is behind ${\mathrm{{ship}}}_{\mathrm{i}+1}$. Likewise, ${\mathrm{{SP}}}_{\mathrm{i}}$ takes the geometric value when ${\mathrm{{ship}}}_{\mathrm{i}}$ is located on ${\mathrm{{ship}}}_{\mathrm{i}+1}$'s port side, and the opposite of the geometric value when ${\mathrm{{ship}}}_{\mathrm{i}}$ is located on ${\mathrm{{ship}}}_{\mathrm{i}+1}$'s starboard side.
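Assuming this sign convention (positive when ship i is ahead of, or on the port side of, ship i+1; the source wording is partly garbled, so this is a hedged reading), the rearrangement might look like:

```python
def signed_spacing(geometric_value, leading):
    """Signed longitudinal spacing ST_i.

    geometric_value: absolute distance between ship i and ship i+1.
    leading: True if ship i is in front of ship i+1.
    (Assumed reading of the paper's sign convention.)
    """
    return geometric_value if leading else -geometric_value

def signed_lateral(geometric_value, port_side):
    """Signed transverse location SP_i (positive when ship i is on the port side)."""
    return geometric_value if port_side else -geometric_value

print(signed_spacing(0.5, leading=True))     # ship i ahead of ship i+1
print(signed_spacing(0.5, leading=False))    # ship i behind ship i+1
print(signed_lateral(0.337, port_side=True))
```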
§ C. DATA ANALYSIS METHOD
Figure 4 presents the steps of the regression analysis method.
Fig. 4. Flow diagram of regression analysis.
The hydrodynamic dataset of the ship formation is divided into different subsets to analyze the effects of speed and spacing between ships. The impacts of both longitudinal distances and lateral locations on the total resistance of the ship formation system are considered. Total resistance variations among formations at different speeds have been observed, but the direct relationship between total resistance and speed has not yet been revealed; it is expected to be found using the tandem formation dataset. For the quantitative analysis of speed impacts on total drag in tandem formation, the tandem formation dataset is split into subsets of different ${\mathrm{{ST}}}_{1}$ distances. Then, a correlation analysis between total resistance and speed is performed to highlight the strength of the correlation and determine which speed criterion more effectively characterizes variations in total resistance.
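The subset-wise correlation step can be sketched with NumPy; the records below are synthetic placeholders, not the paper's CFD data:

```python
import numpy as np

# Synthetic tandem-formation records: columns are (ST1, speed U, total resistance).
records = np.array([
    [0.25, 1.0, 0.42], [0.25, 1.5, 0.55], [0.25, 2.0, 0.74], [0.25, 2.5, 0.98],
    [0.50, 1.0, 0.40], [0.50, 1.5, 0.53], [0.50, 2.0, 0.71], [0.50, 2.5, 0.95],
])

# Split the dataset into subsets of equal ST1, then correlate U with resistance.
for st1 in np.unique(records[:, 0]):
    sub = records[records[:, 0] == st1]
    r = np.corrcoef(sub[:, 1], sub[:, 2])[0, 1]  # Pearson correlation
    print(f"ST1={st1}: corr(U, X_total) = {r:.3f}")
```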
Three steps are taken to quantify the impacts of longitudinal spacing and lateral locations. Firstly, the dataset is divided into six subsets based on different speeds. Each subset is further categorized into tandem formation, parallel formation, and triangle formation. After that, regression analysis is conducted on subsets of total resistance data at uniform speeds. The results will reveal if the impacts of ST and SP differ across various fleet speeds. Finally, overall functions will be defined to describe ST and SP impacts, incorporating speed variations, with coefficients estimated from the entire dataset.
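Step two of this procedure (regression on a fixed-speed subset, reporting standardized coefficients so that ST and SP effects are comparable across features) might look like the following sketch with synthetic numbers:

```python
import numpy as np

def standardized_coefficients(X, y):
    """OLS on z-scored features and response, so coefficients are comparable."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    A = np.column_stack([np.ones(len(yz)), Xz])
    beta, *_ = np.linalg.lstsq(A, yz, rcond=None)
    return beta[1:]  # drop the intercept

rng = np.random.default_rng(1)
ST = rng.uniform(0.25, 2.0, size=(40, 2))  # [ST1, ST2] samples at one fixed speed
# Synthetic resistance: strong positive ST1 effect, weak negative ST2 effect.
y = 0.43 * ST[:, 0] - 0.08 * ST[:, 1] + 0.01 * rng.standard_normal(40)

b = standardized_coefficients(ST, y)
print(f"ST1 coefficient {b[0]:.3f}, ST2 coefficient {b[1]:.3f}")
```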
After correlation analysis with different factors, a model for the formation system's total drag regression formulation is developed, including the five features: speed, ${\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2},{\mathrm{{SP}}}_{1}$ , and ${\mathrm{{SP}}}_{2}$ . Multivariate polynomial and ridge regression methods are combined to build a regression model. Polynomial regression is a method of regression analysis based on polynomial functions for fitting non-linear relationships in data. Compared with linear regression, polynomial regression could model the non-linear characteristics of the data by introducing polynomial terms, thus increasing the flexibility and applicability of the model. In practice, data has many features, and polynomial regression for a single feature performs poorly on fitting data with many features. Thus, multivariate polynomial regression is used in this study to fit the total resistance dataset of ship formations.
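A multivariate polynomial basis can be enumerated directly. For five features such as (U, ST1, ST2, SP1, SP2), a 4th-order expansion yields 126 monomials including the constant term; the helper below is purely illustrative:

```python
from itertools import combinations_with_replacement
from math import comb

features = ["U", "ST1", "ST2", "SP1", "SP2"]
terms = [()]  # the constant term (degree 0)
for degree in range(1, 5):
    terms += list(combinations_with_replacement(range(len(features)), degree))

print(len(terms))  # number of monomials of total degree <= 4 in 5 variables
assert len(terms) == comb(5 + 4, 4)  # stars-and-bars count, = 126
print("*".join(features[i] for i in terms[-1]))  # last 4th-order term: SP2*SP2*SP2*SP2
```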
In practical applications of using multivariate polynomials for regression analysis, choosing the polynomial degree carefully is crucial. If the degree is too low, it may result in poor fitting performance. On the other hand, if the degree is too high, it can lead to overfitting issues where the model fits noise in the data rather than capturing the underlying trends. Therefore, when employing multivariate polynomials for regression analysis, it's crucial to select the degree of the polynomial judiciously. To address potential overfitting issues and improve the accuracy of data fitting when using multivariate polynomials to establish regression equations, this study introduces a combined approach of ridge regression with multivariate polynomial regression to build the regression model. Ridge regression is an improved least squares estimation method that addresses multicollinearity by introducing an L2 norm penalty term, thereby enhancing model stability and generalization capability. The penalty term is $\lambda$ times the sum of the squares of all regression coefficients (where $\lambda$ is the penalty coefficient). Combining ridge regression with multivariate polynomial regression can effectively control the complexity of the model and reduce the risk of overfitting by introducing a penalty term. This is particularly beneficial when input features are highly correlated or when the condition number of the data matrix is high. Such stability helps mitigate numerical computation issues that may arise in multivariate polynomial regression, thereby enhancing the reliability of the model.
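The ridge estimator described above can be sketched in a few lines. The data here are synthetic placeholders (not the paper's CFD samples) and only illustrate how the L2 penalty $\lambda$ stabilizes a nearly collinear design matrix, the situation the text describes:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Ridge estimate: minimizes ||y - Xw||^2 + lam * ||w||^2,
    giving the closed form w = (X^T X + lam I)^{-1} X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(0)
# Two nearly collinear features: plain least squares is ill-conditioned here.
x1 = rng.normal(size=50)
X = np.column_stack([x1, x1 + 1e-6 * rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + 0.01 * rng.normal(size=50)

w = ridge_fit(X, y, lam=0.1)
print(w)  # coefficients shrunk toward a stable, finite solution
```

The penalty term adds `lam` to every eigenvalue of $X^{T}X$, which bounds the condition number of the system being solved; this is the stability benefit mentioned above.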
§ IV. RESULTS AND DISCUSSION
In this section, the impacts of speed, longitudinal location, and transverse spacing are analyzed to estimate the final regression model.
§ A. VARIATION OF DRAG DUE TO SPEED
To estimate the relationship between speed and total resistance, the total resistance of the formation is plotted against speed in Figures 5 to 9. These plots depict the relationship between speed and the total resistance of tandem formation under different longitudinal spacings ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$. The combined resistance experienced by three individual ships sailing alone at various speeds is also provided.
The blue dots in the graph represent the total resistance experienced by the formation system, while the red line indicates the combined resistance experienced by three individual ships sailing alone at different speeds. The red line is marked to determine whether a three-ship tandem formation can achieve a resistance gain compared to three ships sailing individually. When ${\mathrm{{ST}}}_{1}$ is set to ${0.25}{\mathrm{\;L}}_{\mathrm{{OA}}}$ and ${2.0}{\mathrm{\;L}}_{\mathrm{{OA}}}$, the resistance of the 'WillLead I' ships decreases as ship speed increases, both for ships sailing individually and for ships sailing in formation. Simultaneously, the formation system benefits from resistance gains, with the maximum gain occurring at a speed of ${0.212}\mathrm{\;m}/\mathrm{s}$, reaching up to ${4.85}\%$ resistance reduction.
When ${\mathrm{{ST}}}_{1}$ is set as ${0.5}{\mathrm{\;L}}_{\mathrm{{OA}}}$, the total resistance observed during sailing in formation decreases as speed increases. However, the formation system gains no resistance benefit; instead, it experiences resistance amplification, with the maximum increase reaching ${119.3}\%$.
When ${\mathrm{{ST}}}_{1}$ is set to ${1.0}{\mathrm{\;L}}_{\mathrm{{OA}}}$ and ${1.5}{\mathrm{\;L}}_{\mathrm{{OA}}}$, the formation system experiences resistance gains. However, as ship speed increases, the resistance benefits gradually decrease. Additionally, when ${\mathrm{{ST}}}_{2}$ is smaller than ${\mathrm{{ST}}}_{1}$, the resistance benefits of the formation system nearly disappear as the ship speed increases to ${0.424}\mathrm{\;m}/\mathrm{s}$.
(c) ${\mathrm{{ST}}}_{2} = {1.5}{\mathrm{L}}_{\mathrm{{OA}}}$
Fig. 5. Variation of resistance coefficient with speed when ${\mathrm{{ST}}}_{1} = {0.25}{\mathrm{\;L}}_{\mathrm{{OA}}}$
Fig. 6. Variation of resistance coefficient with speed when ${\mathrm{{ST}}}_{1} = {0.5}{\mathrm{\;L}}_{\mathrm{{OA}}}$
Fig. 7. Variation of resistance coefficient with speed when ${\mathrm{{ST}}}_{1} = {1.0}{\mathrm{L}}_{\mathrm{{OA}}}$
In tandem formations, the transverse distances ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ and the lateral forces do not affect the total resistance of the formation system. A correlation analysis between the total resistance and the speed of the formation is therefore conducted; the results are shown in Table 3. All correlation coefficients are significant at the 0.01 level (two-tailed).
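The correlation test used here can be sketched as follows; the speed and resistance values below are hypothetical placeholders, not the paper's CFD data, and the two-tailed significance decision compares the t statistic against the critical value of Student's t with n-2 degrees of freedom:

```python
import numpy as np

# Hypothetical speed/total-resistance pairs standing in for the CFD subsets.
speed = np.array([0.212, 0.265, 0.318, 0.371, 0.424, 0.477])
resistance = np.array([1.10, 1.05, 0.98, 0.93, 0.88, 0.85])

# Pearson correlation coefficient between speed and total resistance.
r = np.corrcoef(speed, resistance)[0, 1]
n = len(speed)
# t statistic for testing r != 0 (two-tailed test against t with n-2 dof).
t = r * np.sqrt((n - 2) / (1 - r**2))
print(round(r, 3), round(t, 2))
```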
Fig. 8. Variation of resistance coefficient with speed when ${\mathrm{{ST}}}_{1} = {1.5}{\mathrm{L}}_{\mathrm{{OA}}}$
Fig. 9. Variation of resistance coefficient with speed when ${\mathrm{{ST}}}_{1} = {2.0}{\mathrm{\;L}}_{\mathrm{{OA}}}$
§ B. QUANTIFICATION OF LONGITUDINAL SPACING AND TRANSVERSE LOCATION
This section presents regression analysis results for the spacing of adjacent ships in formations. The results reveal the impact of the spacing between adjacent ships $\left( {{\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2},{\mathrm{{SP}}}_{1},{\mathrm{{SP}}}_{2}}\right)$ on total resistance. In tandem formation, the transverse locations ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ are set to zero, while both ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ are varied from ${0.25}{\mathrm{L}}_{\mathrm{{OA}}}$ to ${2.0}{\mathrm{L}}_{\mathrm{{OA}}}$. There is therefore no need to standardize the coefficients of ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ when calculating the coefficients in the tandem formation subset.
Similarly, ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ are set to zero in parallel formation, and the effect of standardizing the coefficients of ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ before calculating the coefficients in the parallel formation subset is insignificant. In the triangle formation, however, both longitudinal distance and transverse spacing exist between neighboring ships, and the longitudinal distance is much larger than the transverse spacing, so the unstandardized coefficients cannot be compared directly. The standardized coefficients, derived from standardized regression analysis, are adjusted so that the variances of the variables equal 1. Thus, given the need for standardization under the triangular configuration, standardized regression analysis is adopted for the correlation analysis in all conditions to unify the analysis procedure.
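Standardized regression can be sketched by z-scoring every variable before fitting, which makes coefficients on differently scaled spacings directly comparable. The spacing ranges below are illustrative stand-ins, not values from the dataset:

```python
import numpy as np

def standardized_coeffs(X, y):
    """OLS coefficients after z-scoring all variables (variance 1),
    so that coefficients on differently scaled predictors
    (e.g. longitudinal ST vs. transverse SP) are comparable."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    coef, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return coef

rng = np.random.default_rng(1)
ST = rng.uniform(0.25, 2.0, 40)   # longitudinal spacing, order ~1 L_OA
SP = rng.uniform(0.02, 0.2, 40)   # transverse spacing, much smaller scale
# Synthetic response in which SP matters more than ST per standard deviation.
y = 0.4 * (ST / ST.std()) + 0.7 * (SP / SP.std()) + 0.05 * rng.normal(size=40)

coef = standardized_coeffs(np.column_stack([ST, SP]), y)
print(coef.round(2))  # comparable despite the raw-scale mismatch
```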
The whole dataset of the total resistance of tandem formation is split into subsets with the same speed. The coefficients of ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ for the total drag variable in each subset are presented in Fig 10. The results clarify whether ${\mathrm{{ST}}}_{1}$ or ${\mathrm{{ST}}}_{2}$ more significantly impacts total resistance in this multivariate regression model.
Two comparisons are made to interpret the estimated standardized coefficients. For tandem formation within the same subset, the weights of ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ are compared. The impact of ${\mathrm{{ST}}}_{1}$ on total resistance is more significant than that of ${\mathrm{{ST}}}_{2}$ .
The other comparison involves analyzing coefficients across different speed groups, which reveals how the spacing effects vary among ships at different speeds. This analysis shows that the trends in the effects of ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ on total resistance are nearly flat as speed increases. The correlation coefficient of ${\mathrm{{ST}}}_{2}$ ranges between -0.083 and -0.075, indicating a weak negative correlation between ${\mathrm{{ST}}}_{2}$ and total resistance in tandem formation: as ${\mathrm{{ST}}}_{2}$ increases, total resistance tends to decrease, so increasing ${\mathrm{{ST}}}_{2}$ can help the formation system reduce total resistance, although its influence is limited. The correlation coefficient of ${\mathrm{{ST}}}_{1}$ ranges between 0.42 and 0.435, indicating a positive correlation between ${\mathrm{{ST}}}_{1}$ and total resistance in tandem formation: as ${\mathrm{{ST}}}_{1}$ increases, total resistance tends to rise, so decreasing ${\mathrm{{ST}}}_{1}$ can help the formation system reduce total resistance, and its influence is significant. Thus, choosing ${\mathrm{{ST}}}_{1}$ carefully is more effective than selecting ${\mathrm{{ST}}}_{2}$ for obtaining total resistance benefits in tandem formation.
Fig. 10. The standardized coefficients of ${\mathrm{{ST}}}_{1}$ and ${\mathrm{{ST}}}_{2}$ on total resistance in tandem formation.
The whole data set of the total resistance of parallel formation is split into different subsets with the same speed. The coefficients of ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ for total resistance in each subset are presented in Fig 11.
Examining the standardized coefficients for parallel formation within the same subset allows the effects of ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ to be compared. Both have a significant impact on total resistance, with the impact of ${\mathrm{{SP}}}_{1}$ slightly higher than that of ${\mathrm{{SP}}}_{2}$. In parallel formation, controlling the lateral spacing ${\mathrm{{SP}}}_{1}$ between ${\mathrm{{Ship}}}_{1}$ and ${\mathrm{{Ship}}}_{2}$ is therefore more effective for gaining resistance benefits than controlling the lateral spacing ${\mathrm{{SP}}}_{2}$ between ${\mathrm{{Ship}}}_{2}$ and ${\mathrm{{Ship}}}_{3}$. It can also be observed that the trends of both impacts on total resistance undulate as speed varies. The correlation coefficient of ${\mathrm{{SP}}}_{1}$ ranges between 0.823 and 0.844, indicating a positive correlation between ${\mathrm{{SP}}}_{1}$ and total resistance in parallel formation; as ${\mathrm{{SP}}}_{1}$ increases, resistance benefits tend to decrease. The correlation coefficient of ${\mathrm{{SP}}}_{2}$ varies from 0.700 to 0.722, indicating a positive correlation between ${\mathrm{{SP}}}_{2}$ and total resistance in parallel formation as well; decreasing ${\mathrm{{SP}}}_{2}$ may likewise yield resistance reduction benefits.
Fig. 11. The standardized coefficients of ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ on total resistance in parallel formation.
The whole dataset of the total resistance of right-triangle formation is split into subsets with the same speed. The coefficients of ST and SP for total resistance in each subset are presented in Fig 12. Analyzing the standardized coefficients within the same subset reveals that the impact of ST is less significant than that of SP. Besides, the impacts of both ST and SP on total resistance are positive. It can also be observed that the effect of ST on total resistance changes more gradually with speed than that of SP. The correlation coefficient of ST remains at about 0.43, nearly unchanged, while the correlation coefficient of SP varies from 0.70 to 0.72, similar to the standardized correlation coefficient of ${\mathrm{{SP}}}_{2}$ in parallel formation.
Regression models have been developed to quantitatively assess the effects of speed, ST, and SP on total resistance for tandem, parallel, and triangle formations. This paper presents the final regression models established using the complete dataset. Multivariate polynomial and ridge regression methods are combined to build the regression model. Owing to the limited sample size, k-fold cross-validation is employed to enhance the robustness of the regression model.
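The k-fold procedure mentioned above can be sketched as follows, using synthetic data in place of the CFD samples; each fold is held out once, the ridge model is fitted on the remaining folds, and the out-of-fold R² scores are averaged:

```python
import numpy as np

def kfold_r2(X, y, lam, k=5, seed=0):
    """Average out-of-fold R^2 for a ridge fit: a plain
    re-implementation of k-fold cross-validation for a small dataset."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # Ridge closed form on the training folds only.
        w = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(X.shape[1]),
                            X[train].T @ y[train])
        pred = X[test] @ w
        ss_res = np.sum((y[test] - pred) ** 2)
        ss_tot = np.sum((y[test] - y[test].mean()) ** 2)
        scores.append(1 - ss_res / ss_tot)
    return float(np.mean(scores))

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 3))
y = X @ np.array([1.0, -0.5, 0.3]) + 0.05 * rng.normal(size=60)
print(round(kfold_r2(X, y, lam=0.1), 3))  # close to 1 for this clean data
```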
The 4th-order regression function is given as equation (3):

$$
\begin{aligned}
X_{\text{total}} = {} & 0.01SP_1^4 - 0.13SP_1^3SP_2 + 0.81SP_1^3ST_1 + 0.81SP_1^3ST_2 + 1.6SP_1^3 + 0.12SP_1^2SP_2^2 + 0.6SP_1^2SP_2ST_1 + 0.6SP_1^2SP_2ST_2 \\
& - 0.01SP_1^2SP_2U + 0.98SP_1^2SP_2 + 2.22SP_1^2ST_1^2 - 0.12SP_1^2ST_1ST_2 + 0.03SP_1^2ST_1U + 0.26SP_1^2ST_1 - 0.19SP_1^2ST_2^2 \\
& + 0.01SP_1^2ST_2U + 0.26SP_1^2ST_2 + 0.05SP_1^2U - 1.28SP_1^2 - 0.24SP_1SP_2^3 + 0.85SP_1SP_2^2 + 2.01SP_1SP_2ST_1^2 \\
& - 0.52SP_1SP_2ST_1ST_2 + 0.02SP_1SP_2ST_1U + 0.45SP_1SP_2ST_1 - 0.59SP_1SP_2ST_2^2 + 0.45SP_1SP_2ST_2 + 0.04SP_1SP_2U \\
& - 0.74SP_1SP_2 + 3.0SP_1ST_1^3 - 1.11SP_1ST_1^2ST_2 + 0.08SP_1ST_1^2U - 2.08SP_1ST_1^2 - 1.19SP_1ST_1ST_2^2 - 0.06SP_1ST_1ST_2U \\
& + 0.98SP_1ST_1ST_2 - 0.02SP_1ST_1U - 0.29SP_1ST_1 - 1.29SP_1ST_2^3 - 0.07SP_1ST_2^2U + 1.06SP_1ST_2^2 + 0.01SP_1ST_2U \\
& - 0.29SP_1ST_2 - 0.02SP_1U - 0.45SP_1 + 0.1SP_2^4 + 0.27SP_2^3ST_1 + 0.27SP_2^3ST_2 + 0.02SP_2^3U + 0.03SP_2^3 + 2.41SP_2^2ST_1^2 \\
& - 0.33SP_2^2ST_1ST_2 + 0.06SP_2^2ST_1U + 0.21SP_2^2ST_1 - 0.4SP_2^2ST_2^2 + 0.04SP_2^2ST_2U + 0.21SP_2^2ST_2 + 0.02SP_2^2U \\
& - 0.35SP_2^2 + 3.26SP_2ST_1^3 - 1.18SP_2ST_1^2ST_2 + 0.23SP_2ST_1^2U - 2.6SP_2ST_1^2 - 1.27SP_2ST_1ST_2^2 + 0.09SP_2ST_1ST_2U \\
& + 0.7SP_2ST_1ST_2 + 0.01SP_2ST_1U^2 + 0.04SP_2ST_1U - 0.06SP_2ST_1 - 1.38SP_2ST_2^3 + 0.08SP_2ST_2^2U + 0.8SP_2ST_2^2 \\
& + 0.01SP_2ST_2U^2 + 0.07SP_2ST_2U - 0.06SP_2ST_2 + 0.02SP_2U^2 - 0.14SP_2U + 0.18SP_2 + 2.1ST_1^4 - 0.68ST_1^3ST_2 \\
& + 0.12ST_1^3U - 4.17ST_1^3 - 0.75ST_1^2ST_2^2 - 0.02ST_1^2ST_2U + 1.18ST_1^2ST_2 - 0.08ST_1^2U + 2.5ST_1^2 - 0.76ST_1ST_2^3 \\
& - 0.02ST_1ST_2^2U + 1.29ST_1ST_2^2 + 0.09ST_1ST_2U - 1.48ST_1ST_2 + 0.01ST_1U^2 + 0.01ST_1U - 0.17ST_1 - 0.83ST_2^4 \\
& - 0.03ST_2^3U + 1.42ST_2^3 + 0.11ST_2^2U - 1.6ST_2^2 - 0.02ST_2U - 0.17ST_2 - 0.02U^4 + 0.01U^3 + 0.02U^2 + 0.15U + 0.62
\end{aligned} \tag{3}
$$
The results of the regression analysis are shown in Table 4. According to these results, about ${98.2}\%$ of the variance in the total resistance of the formation systems can be explained by fleet speed, ${\mathrm{{ST}}}_{1}$, ${\mathrm{{ST}}}_{2}$, ${\mathrm{{SP}}}_{1}$, and ${\mathrm{{SP}}}_{2}$ (${\mathrm{R}}^{2}$ is 0.982 for the whole dataset). Besides, speed has an estimate of 0.273, indicating a positive but relatively small effect on the dependent variable.
The standard error is 0.836, which is relatively large and suggests high uncertainty in the estimate. The t-statistic is 0.327, falling below common critical values (such as 1.96), indicating that the effect of this feature may not be significant. The standardized estimate of 0.327 aligns with the t-statistic, reinforcing that the standardized impact is also relatively modest. Feature ${\mathrm{{ST}}}_{1}$ has an estimate of -0.171, reflecting a negative effect on the dependent variable. With a standard error of 0.157, the precision of this estimate is relatively high. However, the t-statistic of -1.089 is below common critical values, suggesting that the impact of ${\mathrm{{ST}}}_{1}$ might also be nonsignificant. The standardized estimate of -1.089 confirms the direction of the effect but similarly indicates that its significance is weak. Feature ${\mathrm{{ST}}}_{2}$ has an estimate of -0.167, suggesting a negative effect on the dependent variable. The standard error is 0.157, indicating high precision in the forecast. The t-statistic of -1.069 implies that this feature's impact may not be significant. The standardized estimate of -1.069 supports the direction of the effect but demonstrates that the impact is not substantial. Feature ${\mathrm{{SP}}}_{1}$ is estimated at -0.501, indicating a strong negative impact on the dependent variable.
TABLE IV. ESTIMATION RESULTS OF THE FINAL REGRESSION MODEL
| X | ${\mathbf{R}}^{2}$ | F-stat | Estimate | Std. error | t-stat |
| --- | --- | --- | --- | --- | --- |
| X | 0.982 | 168.045 | 0.603 | 0.089 | 6.759 |
| ${\mathrm{C}}_{\mathrm{U}}$ | / | / | 0.273 | 0.836 | 0.327 |
| ${\mathrm{C}}_{\mathrm{{ST}}1}$ | / | / | -0.171 | 0.157 | -1.09 |
| ${\mathrm{C}}_{\mathrm{{ST}}2}$ | / | / | -0.167 | 0.157 | -1.07 |
| ${\mathrm{C}}_{\mathrm{{SP}}1}$ | / | / | -0.501 | 0.156 | -3.205 |
| ${\mathrm{C}}_{\mathrm{{SP}}2}$ | / | / | 0.128 | 0.159 | 0.806 |
The standard error is 0.156, which is relatively small, suggesting high accuracy in the estimate. The t-statistic of -3.205 exceeds common critical values in magnitude, demonstrating that the effect of ${\mathrm{{SP}}}_{1}$ is significant. The standardized estimate of -3.205 confirms that the impact remains strong even after standardization. Feature ${\mathrm{{SP}}}_{2}$ has an estimate of 0.128, showing a positive but small effect on the dependent variable. The standard error is 0.159, which is relatively large, reflecting higher uncertainty in the estimate. The t-statistic of 0.806 is below common critical values, indicating that the effect of ${\mathrm{{SP}}}_{2}$ is insignificant. The standardized estimate of 0.806 suggests that the impact is also small after standardization.
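Assuming the t-statistics in Table IV are computed as estimate divided by standard error, they can be approximately reproduced from the tabulated values (small discrepancies come from rounding of the printed figures):

```python
# Estimates and standard errors copied from Table IV;
# the estimate/std.error formula is an assumption about how
# the tabulated t-statistics were obtained.
rows = {
    "C_U":   (0.273, 0.836),
    "C_ST1": (-0.171, 0.157),
    "C_ST2": (-0.167, 0.157),
    "C_SP1": (-0.501, 0.156),
    "C_SP2": (0.128, 0.159),
}
for name, (est, se) in rows.items():
    print(name, round(est / se, 3))
```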
Fig. 12. The standardized coefficients of ST and SP on total resistance in triangle formation.
§ V. CONCLUSION
This paper established a regression model to analyze the effects of speed, longitudinal distances $\left( {{\mathrm{{ST}}}_{1},{\mathrm{{ST}}}_{2}}\right)$, and transverse locations $\left( {{\mathrm{{SP}}}_{1},{\mathrm{{SP}}}_{2}}\right)$ on the total resistance of ship formations derived from CFD data. The variation of total resistance in tandem formation with speed can be observed, and the correlation analysis shows a strong correlation between speed and total resistance. The impacts of longitudinal spacing and transverse location on total resistance vary across formation configurations. For tandem formation, ${\mathrm{{ST}}}_{1}$ has a more significant influence on total resistance than ${\mathrm{{ST}}}_{2}$. For parallel formation, the impacts of both ${\mathrm{{SP}}}_{1}$ and ${\mathrm{{SP}}}_{2}$ fluctuate slightly with growing ship speed. For triangle formation, the impact of SP on total resistance shows a strong positive correlation, while the impact of ST is weaker. The regression analysis results revealed that about ${98.2}\%$ of the variance in the total resistance of the various formation systems is explained by formation speed, ${\mathrm{{ST}}}_{1}$, ${\mathrm{{ST}}}_{2}$, ${\mathrm{{SP}}}_{1}$, and ${\mathrm{{SP}}}_{2}$.
This paper investigated the impact of different factors on the total resistance of ship formations. The estimation results indicate that more CFD data should be incorporated into the regression analysis, and more intelligent methods could be adopted for regression in future work.
§ ACKNOWLEDGMENT
The work presented in this study is financially supported by the National Natural Science Foundation of China under grants 52271364, 52101402, and 52271367.
|
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/HFrWfFXFQo/Initial_manuscript_md/Initial_manuscript.md
ADDED
# Lyapunov Matrix-Based Guaranteed Cost Dynamic Positioning Control for Unmanned Marine Vehicles With Time Delay
${1}^{\text{st }}$ Xin Yang, College of Navigation, Dalian Maritime University, Dalian, China (yangxin3541@163.com)

${2}^{\text{nd }}$ Li-Ying Hao*, College of Marine Electrical Engineering, Dalian Maritime University, Dalian, China (haoliying_0305@163.com)

${3}^{\text{rd }}$ Tieshan Li*, College of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, China (tieshanli@126.com)

${4}^{\text{th }}$ Yang Xiao, Department of Computer Science, The University of Alabama, Tuscaloosa, USA (yangxiao@ieee.org)

${5}^{\text{th }}$ Guoyong Liu, College of Marine Electrical Engineering, Dalian Maritime University, Dalian, China (liuguoyong0806@163.com)
Abstract-This paper presents a Lyapunov matrix-based guaranteed cost dynamic positioning controller for unmanned marine vehicles (UMVs) with time delays. A novel Lyapunov-Krasovskii functional (LKF) is introduced, which enhances the analysis of time delays and system states. The controller design leverages the linear matrix inequality (LMI) framework alongside Jensen's inequality to derive sufficient criteria for its feasibility, ensuring that the UMVs' state errors gradually reduce to zero and providing an adaptive ${H}_{\infty }$ performance guarantee. Additionally, the cost function is upper-bounded, and the effectiveness of the method is demonstrated through simulation results.
Index Terms-Lyapunov matrix, time delays, guaranteed cost control (GCC), dynamic positioning (DP), unmanned marine vehicles (UMVs)
## I. INTRODUCTION
Unmanned Marine Vehicles (UMVs) play a pivotal role in enhancing maritime safety and security by performing high-risk operations effectively without compromising human lives, thereby revolutionizing search and rescue missions and coastal surveillance [1]-[3]. Compared to traditional anchor mooring, dynamic positioning (DP) offers a more versatile, precise, and environmentally friendly method for positioning vessels, making it particularly suitable for use in complex or dynamic marine environments [4]. Over the years, numerous control strategies have been proposed to ensure robust DP control in UMVs. For instance, [5] introduces a dynamic output feedback control method, specifically tailored for DP ships to counter denial of service attacks. In [6], the design of an adaptive sliding mode fault-tolerant compensation mechanism is presented, targeting the maintenance of DP control in UMVs despite thruster faults and unknown ocean disturbances. It is crucial to recognize that time delays are typically inevitable [7]-[9]. Consequently, there is an urgent need to develop a strategy to compensate for these time delays.
In DP systems for UMVs, time delays caused by network-mediated transmission of signals and control commands represent a significant challenge that often compromises system stability and performance [10], [11]. This issue has led to the development of various advanced time-delay compensation methods [12]-[14]. Among these, enhanced time-delay compensation approaches for autonomous underwater vehicles have shown promise [12]. In [13], model-free proportional-derivative controllers are innovatively incorporated into the Lyapunov-Krasovskii functional (LKF) framework to effectively counteract the impact of delays. Advanced strategies utilizing Lyapunov matrix-based LKF methods have proven particularly effective. These approaches leverage comprehensive information about time delays and system states, providing a control strategy that efficiently accommodates time-delay systems. The primary motivation of this paper is to develop a complete LKF based on the Lyapunov matrix to mitigate the effects of time delays on UMVs.
On another research front, guaranteed cost control (GCC) has been extensively studied [15]-[17]. This strategy offers the advantage of setting an upper limit on a specified performance index, ensuring that any system performance degradation remains below this predefined cost threshold. As vessels often navigate in complex and varied ocean environments, the impact of wind and wave disturbances becomes significant [17]. In response, [18] investigated a robust ${H}_{\infty }$ guaranteed cost controller aimed at enhancing path-following performance. The GCC method presented in [19] offers a way to reduce energy consumption for surface vessels in DP, thereby increasing its practical applicability. These results have inspired our research into GCC theory, particularly its application to DP ships. Thus, how to design a guaranteed cost controller based on the Lyapunov matrix to achieve effective DP control for UMVs is the second research motivation of this paper.
---
This work was supported by the National Natural Science Foundation of China (Grant Nos: 51939001, 52171292, 61976033); Dalian Outstanding Young Talents Program (2022RJ05)
|
| 74 |
+
|
| 75 |
+
* Corresponding authors. Emails: haoliying_0305@163.com;tieshanli@12 6.com
|
| 76 |
+
|
| 77 |
+
---
The primary objective of this paper is to design a Lyapunov matrix-based guaranteed cost dynamic positioning controller, using the LMI method to ensure stability. Compared with recent advances in the field, the main contributions are as follows.

1) We propose a novel time-delay compensation method for UMVs that incorporates more detailed delay and state information by employing a Lyapunov matrix-based complete-type LKF, which reduces conservatism compared with conventional time-delay compensation techniques.

2) A novel guaranteed cost DP control strategy is designed, which ensures the stability of DP systems for UMVs while providing an upper bound on a prespecified cost function.

The remainder of this paper is structured as follows: Section II describes the UMVs model with time delays. Section III reviews basic concepts and preliminary results, which serve as the theoretical basis for the proposed LKF method based on the Lyapunov matrix. A complete-type LKF based on the Lyapunov matrix is presented in Section IV. Section V introduces the guaranteed cost dynamic positioning controller. Finally, Section VI presents simulations to illustrate the validity of the theoretical results.
## II. UMVs MODELING AND PROBLEM DESCRIPTION
## A. Dynamic modeling for UMVs
The UMVs model typically employs a three-degrees-of-freedom motion equation to describe the dynamic behavior of the vehicle in the marine environment, the three degrees of freedom being surge, sway, and yaw. The dynamic equations of the UMVs are therefore often simplified and expressed in the following form [20]:

$$
\xi \dot{v}\left( t\right) + \mathcal{C}v\left( t\right) + \mathcal{D}\lambda \left( t\right) = \mathcal{G}u\left( t\right) , \tag{1}
$$

$$
\dot{\lambda }\left( t\right) = \mathcal{S}\left( {\theta \left( t\right) }\right) v\left( t\right) , \tag{2}
$$
where $\xi$ is the inertia matrix, and the velocity vector $v\left( t\right) = {\left\lbrack {v}_{1}\left( t\right) ,{v}_{2}\left( t\right) ,{v}_{3}\left( t\right) \right\rbrack }^{\mathrm{T}}$ describes the ship's motion in the different directions: ${v}_{1}\left( t\right)$ is the surge velocity, ${v}_{2}\left( t\right)$ the sway velocity, and ${v}_{3}\left( t\right)$ the yaw rate. The position vector $\lambda \left( t\right) = {\left\lbrack {x}_{o}\left( t\right) ,{y}_{o}\left( t\right) ,\theta \left( t\right) \right\rbrack }^{\mathrm{T}}$ describes the ship's position and orientation on the water surface, where ${x}_{o}\left( t\right)$ and ${y}_{o}\left( t\right)$ are the coordinates of the ship in the horizontal plane and $\theta \left( t\right)$ denotes the ship's heading angle. The matrix $\mathcal{C}$ is the damping matrix. The matrix $\mathcal{D}$ represents the mooring force matrix, which models external disturbances such as wind, waves, and ocean currents acting on the UMVs. The matrix $\mathcal{G}$ is the thrust allocation matrix, responsible for distributing thrust to the ship's propellers. Additionally, the rotation matrix $\mathcal{S}\left( {\theta \left( t\right) }\right)$ is given by:

$$
\mathcal{S}\left( {\theta \left( t\right) }\right) = \left\lbrack \begin{matrix} \cos \left( {\theta \left( t\right) }\right) & - \sin \left( {\theta \left( t\right) }\right) & 0 \\ \sin \left( {\theta \left( t\right) }\right) & \cos \left( {\theta \left( t\right) }\right) & 0 \\ 0 & 0 & 1 \end{matrix}\right\rbrack .
$$

For DP control of UMVs, where the yaw angle $\theta \left( t\right)$ remains small, the matrix $\mathcal{S}\left( {\theta \left( t\right) }\right)$ can be approximated by the identity matrix $I$. We define the matrices ${\mathcal{A}}_{1} = - {\xi }^{-1}\mathcal{C}$, $\mathcal{B} = {\xi }^{-1}\mathcal{G}$, and $\mathcal{F} = - {\xi }^{-1}\mathcal{D}$, and let $x\left( t\right) = {\left\lbrack {\lambda }^{\mathrm{T}}\left( t\right) ,{v}^{\mathrm{T}}\left( t\right) \right\rbrack }^{\mathrm{T}}$. Thus, the dynamic equation of the UMVs can be written as follows:

$$
\dot{x}\left( t\right) = {Ax}\left( t\right) + {B}_{1}u\left( t\right) + {Fg}\left( {t, v\left( t\right) }\right) + \varpi \left( t\right) , \tag{3}
$$

where $A = \left\lbrack \begin{matrix} 0 & I \\ 0 & {\mathcal{A}}_{1} \end{matrix}\right\rbrack$, ${B}_{1} = \left\lbrack \begin{array}{l} 0 \\ \mathcal{B} \end{array}\right\rbrack$, $F = \left\lbrack \begin{matrix} 0 \\ \mathcal{F} \end{matrix}\right\rbrack$, and $\varpi \left( t\right) \in {L}_{2}\lbrack 0,\infty )$ represents the disturbance. Defining the reference signal ${x}_{\text{ref }} = \left\lbrack \begin{array}{l} {\lambda }_{\text{ref }} \\ {v}_{\text{ref }} \end{array}\right\rbrack$, the error vector is $e\left( t\right) = x\left( t\right) - {x}_{\text{ref }}$, and the error dynamics of the UMVs can be expressed as follows:

$$
\dot{e}\left( t\right) = {Ae}\left( t\right) + {B}_{1}u\left( t\right) + {Fg}\left( {t, e\left( t\right) }\right) + {B}_{2}\omega \left( t\right) . \tag{4}
$$

Let $e\left( t\right) \in {\mathbb{R}}^{n}$ denote the state vector and $u \in {\mathbb{R}}^{p}$ the control input vector. The term ${B}_{2}\omega \left( t\right)$ is defined as $A{x}_{\text{ref }} + \varpi \left( t\right)$, where $\omega \left( t\right) = \left\lbrack \begin{array}{l} {x}_{\text{ref }} \\ \varpi \left( t\right) \end{array}\right\rbrack$ and ${B}_{2} = \left\lbrack \begin{array}{ll} A & I \end{array}\right\rbrack$. Considering the unavoidable time delay during signal transmission, it follows from equation (4) that:

$$
\dot{e}\left( t\right) = {Ae}\left( t\right) + {A}_{1}e\left( {t - d}\right) + {B}_{1}u\left( t\right) + {Fg}\left( {e\left( t\right) , e\left( {t - d}\right) }\right) + {B}_{2}\omega \left( t\right) , \tag{5}
$$
where $d > 0$ represents the time delay, and $g : {\mathbb{R}}^{n} \times {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{m}$ is assumed to satisfy the following inequality.
Assumption 1: Let matrices $\mathbb{N} > 0$ and $\mathbb{Y} > 0$ , where $\mathbb{N} \in$ ${\mathbb{R}}^{m \times m}$ and $\mathbb{Y} \in {\mathbb{R}}^{{2n} \times {2n}}$ . The nonlinear function $g\left( \cdot \right)$ satisfies the following inequality:
$$
{g}^{\mathrm{T}}\left( {e\left( t\right) , e\left( {t - d}\right) }\right) {\mathbb{N}}^{-1}g\left( {e\left( t\right) , e\left( {t - d}\right) }\right) \leq \left\lbrack \begin{array}{ll} {e}^{\mathrm{T}}\left( t\right) & {e}^{\mathrm{T}}\left( {t - d}\right) \end{array}\right\rbrack \mathbb{Y}{\left\lbrack \begin{array}{ll} {e}^{\mathrm{T}}\left( t\right) & {e}^{\mathrm{T}}\left( {t - d}\right) \end{array}\right\rbrack }^{\mathrm{T}}.
$$

Remark 1: Assumption 1 ensures that the function $g\left( \cdot \right)$ is bounded. When $e\left( t\right) = 0$ or $e\left( {t - d}\right) = 0$, Assumption 1 in this article reduces to a general form of Assumption 1 in reference [17].
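As a quick numerical illustration (with a hypothetical nonlinearity chosen here for illustration, not one from the paper), the quadratic bound in Assumption 1 can be checked by sampling: for $g\left( e\left( t\right), e\left( t-d\right)\right) = \sin\left( e_1\left( t\right)\right)$ with $\mathbb{N} = 1$ and $\mathbb{Y} = I$, the bound holds because $\sin^2(x) \leq x^2$ for all real $x$:

```python
import numpy as np

# Hypothetical sector-bounded nonlinearity: g(e, ed) = sin(e[0]) (m = 1).
# With N = 1 and Y = I, Assumption 1 reads
#   g^T N^{-1} g <= ||e||^2 + ||ed||^2,
# which holds because sin(x)^2 <= x^2 for all real x.
rng = np.random.default_rng(0)

def g(e, ed):
    return np.array([np.sin(e[0])])

ok = True
for _ in range(1000):
    e, ed = rng.normal(size=3), rng.normal(size=3)   # n = 3 for illustration
    lhs = g(e, ed) @ g(e, ed)                        # N^{-1} = 1
    rhs = e @ e + ed @ ed                            # [e; ed]^T Y [e; ed], Y = I
    ok = ok and lhs <= rhs + 1e-12
print(ok)
```

The same sampling check applies to any candidate $g$ once $\mathbb{N}$ and $\mathbb{Y}$ have been fixed.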
To bring both linear and angular velocities to zero and minimize the impact of external disturbances such as wind, waves, and currents, the output $\mathcal{Z}\left( t\right)$ can be formulated as follows:

$$
\mathcal{Z}\left( t\right) = {C}_{z}e\left( t\right) . \tag{6}
$$
Definition 1: [21] The system is described by
$$
\dot{x}\left( t\right) = {A}_{d}x\left( t\right) + {B}_{d}\omega \left( t\right) ,\quad \mathcal{Z}\left( t\right) = {C}_{d}x\left( t\right) ,\quad x\left( 0\right) = 0. \tag{7}
$$

Given a constant ${\gamma }_{0} > 0$ and $\omega \left( t\right) \in {L}_{2}\lbrack 0,\infty )$, if for any $\epsilon > 0$ the following condition

$$
{\int }_{0}^{\infty }{\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) \mathrm{d}t \leq {\gamma }_{0}^{2}{\int }_{0}^{\infty }{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right) \mathrm{d}t + \epsilon ,
$$
is satisfied, then the system (7) is said to achieve an adaptive ${H}_{\infty }$ performance index that does not exceed ${\gamma }_{0}$ .
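As a sanity check of Definition 1 on a toy example (a scalar system assumed here for illustration, not the vessel model), consider $\dot{x} = -x + \omega$, $\mathcal{Z} = x$, whose transfer function $1/(s+1)$ has ${H}_{\infty }$ norm 1; the energy ratio for this input should therefore not exceed ${\gamma }_{0}^{2} = 1$:

```python
import numpy as np

# Toy check of the H-infinity energy inequality in Definition 1 for
# x'(t) = -x + w, Z = x (transfer 1/(s+1), H-inf norm 1), w(t) = exp(-t).
dt, T = 1e-4, 30.0
t = np.arange(0.0, T, dt)
w = np.exp(-t)

x = np.zeros_like(t)          # x(0) = 0 as in (7)
for i in range(len(t) - 1):   # forward Euler
    x[i + 1] = x[i] + dt * (-x[i] + w[i])

z_energy = np.sum(x * x) * dt     # \int Z^T Z dt  (analytically 0.25 here)
w_energy = np.sum(w * w) * dt     # \int w^T w dt  (analytically 0.5 here)
print(z_energy <= 1.0 * w_energy) # gamma_0 = 1
```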
Definition 2: The cost function related to system (5) is described as follows:

$$
J = {\int }_{0}^{\infty }\left\lbrack {{e}^{\mathrm{T}}\left( t\right) {\Omega e}\left( t\right) + {u}^{\mathrm{T}}\left( t\right) {\mathbb{R}}_{q}u\left( t\right) }\right\rbrack \mathrm{d}t, \tag{8}
$$

where ${\Omega }^{\mathrm{T}} = \Omega \geq 0$ and ${\mathbb{R}}_{q}^{\mathrm{T}} = {\mathbb{R}}_{q} \geq 0$.
A stabilization controller $u\left( t\right)$ for system (5) is called a guaranteed cost controller if it ensures that $J \leq {J}^{ * }$ , where ${J}^{ * }$ is a positive scalar. The value ${J}^{ * }$ is known as the guaranteed cost.
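To make (8) concrete, here is a scalar toy case (illustrative numbers only, not values from the paper): with $e(t) = e^{-t}$, $u(t) = -2e(t)$, $\Omega = 1$, and ${\mathbb{R}}_{q} = 0.5$, the cost is $J = \int_0^\infty e^{-2t}(1 + 0.5 \cdot 4)\,\mathrm{d}t = 1.5$, which a simple quadrature reproduces:

```python
import numpy as np

# Numeric evaluation of the cost (8) for a scalar toy trajectory:
# e(t) = exp(-t), u(t) = -2 e(t), Omega = 1, Rq = 0.5.
# Analytically J = (1 + 0.5 * 4) * 1/2 = 1.5.
t = np.linspace(0.0, 40.0, 400001)
e = np.exp(-t)
u = -2.0 * e
integrand = 1.0 * e**2 + 0.5 * u**2
# trapezoidal rule (written out to stay version-agnostic across NumPy releases)
J = np.sum((integrand[1:] + integrand[:-1]) * np.diff(t)) / 2.0
print(J)   # approx 1.5
```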
## B. Control Objective
For the UMVs (5) affected by time delays, this paper proposes a guaranteed cost DP controller based on the Lyapunov matrix. The controller is designed to drive the state error of the UMVs asymptotically to zero, while also satisfying the specified ${H}_{\infty }$ performance criteria and guaranteeing an upper limit on the predefined cost function.
## III. PRELIMINARIES
We will construct a complete-type LKF for the UMVs (5) based on the Lyapunov matrix. We begin by defining the Lyapunov matrix.
## A. Lyapunov matrix
We will now present relevant concepts related to linear time-delay systems as follows [22]:
$$
\dot{e}\left( t\right) = {Ae}\left( t\right) + {A}_{1}e\left( {t - d}\right) ,\quad e\left( \iota \right) = \phi \left( \iota \right) ,\;\iota \in \left\lbrack {-d,0}\right\rbrack , \tag{9}
$$
where $e\left( t\right) \in {\mathbb{R}}^{n}$ represents the state vector, $d > 0$ is the time delay. $A,{A}_{1} \in {\mathbb{R}}^{n \times n}$ are system matrices.
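A linear delay system such as (9) can be simulated with a simple Euler scheme that keeps a history buffer for the delayed state (a sketch with toy matrices assumed for illustration, not the vessel model):

```python
import numpy as np

# Euler simulation of e'(t) = A e(t) + A1 e(t - d) with constant initial
# history phi on [-d, 0]. Toy matrices (assumed for illustration only).
A  = np.array([[0.0, 1.0], [-2.0, -4.0]])   # Hurwitz
A1 = np.array([[0.0, 0.0], [0.1, 0.0]])     # small delayed coupling
d, dt, T = 1.0, 0.01, 30.0
nd, n = int(d / dt), int(T / dt)

e = np.zeros((n, 2))
e[0] = [1.0, 0.0]                            # phi(iota) = [1, 0] on [-d, 0]
for i in range(n - 1):
    delayed = e[i - nd] if i >= nd else e[0] # constant pre-history
    e[i + 1] = e[i] + dt * (A @ e[i] + A1 @ delayed)

print(np.linalg.norm(e[0]), np.linalg.norm(e[-1]))  # trajectory decays
```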
Definition 3: [22] Given a matrix $\mathcal{P} > 0$ , if the matrix $Q : \left\lbrack {-d, d}\right\rbrack \rightarrow {\mathbb{R}}^{n \times n}$ meets the following conditions:
$$
\dot{Q}\left( \pi \right) = Q\left( \pi \right) A + Q\left( {\pi - d}\right) {A}_{1},
$$

$$
Q\left( {-\pi }\right) = {Q}^{\mathrm{T}}\left( \pi \right) ,
$$

$$
- \mathcal{P} = Q\left( 0\right) A + Q\left( {-d}\right) {A}_{1} + {A}^{\mathrm{T}}Q\left( 0\right) + {A}_{1}^{\mathrm{T}}Q\left( d\right) , \tag{10}
$$

then $Q\left( \cdot \right)$ is called a Lyapunov matrix of system (9) associated with $\mathcal{P}$.
Definition 4: [22] If the system (9) is asymptotically stable, there exists a Lyapunov matrix $Q\left( \cdot \right)$ associated with matrix $\mathcal{P}$ for system (9).
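For intuition, when (9) is asymptotically stable the Lyapunov matrix admits the representation $Q(\tau) = \int_0^\infty K^{\mathrm{T}}(t)\,\mathcal{P}\,K(t+\tau)\,\mathrm{d}t$, where $K$ is the fundamental solution of (9). A scalar sketch (toy coefficients assumed for illustration) builds $Q$ this way and checks the algebraic condition in (10):

```python
import numpy as np

# Scalar delay system e'(t) = a e(t) + a1 e(t - d), stable for these values.
# Build Q(tau) = \int_0^inf K(t) P K(t + tau) dt from the fundamental
# solution K (K(0) = 1, K(t) = 0 for t < 0), then check
#   -P = Q(0)a + Q(-d)a1 + a Q(0) + a1 Q(d),  with Q(-d) = Q(d) (scalar).
a, a1, d, P = -2.0, 0.5, 1.0, 1.0
dt, T = 1e-3, 20.0
n, nd = int(T / dt), int(d / dt)

k = np.zeros(n)
k[0] = 1.0
for i in range(n - 1):
    delayed = k[i - nd] if i >= nd else 0.0
    k[i + 1] = k[i] + dt * (a * k[i] + a1 * delayed)

q0 = P * np.sum(k * k) * dt               # Q(0)
qd = P * np.sum(k[:-nd] * k[nd:]) * dt    # Q(d)

residual = q0 * a + qd * a1 + a * q0 + a1 * qd + P  # should be near 0
print(q0, qd, residual)
```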
Lemma 1: Suppose there exist matrices $H = {H}^{\mathrm{T}} > 0$, ${K}_{11} \in {\mathbb{R}}^{p \times n}$, and $U > 0$ such that the following LMI condition is satisfied:

$$
\left\lbrack \begin{matrix} {\Lambda }_{2} & {A}_{1}X \\ {\left( {A}_{1}X\right) }^{\mathrm{T}} & - U \end{matrix}\right\rbrack < 0, \tag{11}
$$
where ${\Lambda }_{2} = {AX} - {B}_{1}{Y}_{1} + {\left( AX - {B}_{1}{Y}_{1}\right) }^{\mathrm{T}} + U, X = {H}^{-1}$ , ${Y}_{1} = {K}_{11}{H}^{-1}$ , and $U = {H}^{-1}L{H}^{-1}$ , then there exists a controller ${u}_{1}\left( t\right) = - {K}_{11}e\left( t\right)$ that guarantees system (9) is asymptotically stable.
Proof 1: Select the Lyapunov function:
$$
{V}_{c}\left( {e\left( t\right) }\right) = {e}^{\mathrm{T}}\left( t\right) {He}\left( t\right) + {\int }_{t - d}^{t}{e}^{\mathrm{T}}\left( \theta \right) {Le}\left( \theta \right) \mathrm{d}\theta .
$$
We can derive:
$$
{\left. \frac{\mathrm{d}{V}_{c}\left( {e\left( t\right) }\right) }{\mathrm{d}t}\right| }_{\left( 9\right) } = {\Lambda }_{0}^{\mathrm{T}}{\Omega }_{1}{\Lambda }_{0},
$$
where
$$
{\Lambda }_{0} = {\left\lbrack {e}^{\mathrm{T}}\left( t\right) ,{e}^{\mathrm{T}}\left( t - d\right) \right\rbrack }^{\mathrm{T}},\quad {\Omega }_{1} = \left\lbrack \begin{matrix} {\Lambda }_{2} & {A}_{1}X \\ {\left( {A}_{1}X\right) }^{\mathrm{T}} & - U \end{matrix}\right\rbrack ,
$$

with ${\Lambda }_{2}$, $X$, ${Y}_{1}$, and $U$ as defined in Lemma 1.
Using Lyapunov stability theory, the controller ${u}_{1}\left( t\right) =$ $- {K}_{11}e\left( t\right)$ guarantees the asymptotic stability of system (9).
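The certificate of Lemma 1 is easy to check numerically. In the sketch below the system matrices, the gain ${K}_{11}$, and the matrices $X = {H}^{-1}$ and $U$ are hand-picked toy values (assumed for illustration rather than produced by an LMI solver); verifying ${\Omega }_{1} \prec 0$ confirms that ${V}_{c}$ decreases along the closed-loop trajectories:

```python
import numpy as np

# Toy verification of the LMI certificate (11): with assumed system matrices
# and hand-picked K11, X = H^{-1}, U, check Omega_1 < 0 (max eigenvalue < 0).
# Note AX - B1 Y1 = (A - B1 K11) X since Y1 = K11 X.
A   = np.array([[0.0, 1.0], [0.0, -1.0]])
A1  = np.array([[0.0, 0.0], [0.1, 0.0]])
B1  = np.array([[0.0], [1.0]])
K11 = np.array([[2.0, 3.0]])

X = np.array([[2.0, -0.5], [-0.5, 1.0]])   # X = H^{-1} > 0
U = 0.1 * np.eye(2)                        # U = H^{-1} L H^{-1} > 0

Acl  = A - B1 @ K11                        # closed loop with u1 = -K11 e
Lam2 = Acl @ X + X @ Acl.T + U
Omega1 = np.block([[Lam2, A1 @ X], [(A1 @ X).T, -U]])

print(np.min(np.linalg.eigvalsh(X)) > 0,
      np.max(np.linalg.eigvalsh(Omega1)) < 0)
```

In practice one would search for $X$, ${Y}_{1}$, $U$ with a semidefinite-programming solver instead of picking them by hand.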
## IV. A COMPLETE-TYPE LKF
We construct an LKF $\mathfrak{V}\left( \cdot \right)$ :

$$
\mathfrak{V}\left( {e\left( t\right) }\right) = {\mathfrak{V}}_{1}\left( {e\left( t\right) }\right) + {\mathfrak{V}}_{2}\left( {e\left( t\right) }\right) , e \in {C}_{p}\left( {\left\lbrack {-d,0}\right\rbrack ,{\mathbb{R}}^{n}}\right) \tag{12}
$$
where
$$
{\mathfrak{V}}_{1}\left( {e\left( t\right) }\right) = {e}^{\mathrm{T}}\left( t\right) Q\left( 0\right) e\left( t\right) + 2{e}^{\mathrm{T}}\left( t\right) {\Gamma }_{1}\left( {e\left( t\right) }\right) + {\int }_{-d}^{0}{\int }_{-d}^{0}{e}^{\mathrm{T}}\left( {t + {\tau }_{1}}\right) {A}_{1}^{\mathrm{T}}Q\left( {{\tau }_{1} - {\tau }_{2}}\right) {A}_{1}e\left( {t + {\tau }_{2}}\right) \mathrm{d}{\tau }_{1}\mathrm{\;d}{\tau }_{2},
$$

$$
{\mathfrak{V}}_{2}\left( {e\left( t\right) }\right) = {\int }_{-d}^{0}{\int }_{\tau }^{0}{e}^{\mathrm{T}}\left( {t + s}\right) {A}_{1}^{\mathrm{T}}{Q}^{\mathrm{T}}\left( {-d - \tau }\right) \mathcal{R}Q\left( {-d - \tau }\right) {A}_{1}e\left( {t + s}\right) \mathrm{d}s\mathrm{\;d}\tau + {\int }_{-d}^{0}{e}^{\mathrm{T}}\left( {t + \tau }\right) {\mathcal{Q}}_{1}e\left( {t + \tau }\right) \mathrm{d}\tau , \tag{13}
$$

where ${\Gamma }_{1}\left( {e\left( t\right) }\right) = {\int }_{-d}^{0}Q\left( {-d - \tau }\right) {A}_{1}e\left( {t + \tau }\right) \mathrm{d}\tau$ and the matrices $\mathcal{R},{\mathcal{Q}}_{1}$ satisfy ${\mathcal{R}}^{\mathrm{T}} = \mathcal{R} > 0$ and ${\mathcal{Q}}_{1}^{\mathrm{T}} = {\mathcal{Q}}_{1} > 0$.
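The double integral in ${\mathfrak{V}}_{1}$ is straightforward to evaluate numerically. A scalar sketch (with a hypothetical kernel $Q(s) = e^{-|s|}$ and a constant history $e(t+\tau) \equiv e_0$, both chosen only for illustration) compares a 2-D quadrature against the closed form $\int_{-d}^{0}\int_{-d}^{0} e^{-|\tau_1-\tau_2|}\mathrm{d}\tau_1\mathrm{d}\tau_2 = 2(d - 1 + e^{-d})$:

```python
import numpy as np

# 2-D quadrature of the double-integral term of V1 in the scalar case with
# hypothetical kernel Q(s) = exp(-|s|), A1 = a1, constant history e0.
# Closed form: a1^2 e0^2 * 2*(d - 1 + exp(-d)).
d, a1, e0 = 1.0, 0.5, 2.0
N = 2000
tau = np.linspace(-d, 0.0, N)
h = tau[1] - tau[0]

T1, T2 = np.meshgrid(tau, tau, indexing="ij")
kernel = np.exp(-np.abs(T1 - T2))
num = (a1 * e0) ** 2 * np.sum(kernel) * h * h      # simple Riemann sum
exact = (a1 * e0) ** 2 * 2.0 * (d - 1.0 + np.exp(-d))
print(abs(num - exact) < 1e-2 * exact)
```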
## V. CONTROLLER DESIGN AND STABILITY ANALYSIS
In this section, we will provide a detailed explanation of the controller design process and conduct a systematic analysis of its stability.
## A. Controller Design
We propose the following guaranteed cost DP controller for UMVs in (5):
$$
u\left( t\right) = {u}_{1}\left( t\right) + {u}_{2}\left( t\right) ,\quad {u}_{1}\left( t\right) = - {K}_{11}e\left( t\right) ,
$$

$$
{u}_{2}\left( t\right) = \frac{1}{2}{K}_{21}{B}_{1}^{\mathrm{T}}\left\lbrack {Q\left( 0\right) e\left( t\right) + {\Gamma }_{1}\left( {e\left( t\right) }\right) }\right\rbrack + \frac{1}{2}{K}_{22}e\left( {t - d}\right) , \tag{14}
$$

where ${K}_{11},{K}_{21},{K}_{22}$ are feedback gain matrices. ${K}_{11}$ is already determined in Lemma 1, while ${K}_{21}$ and ${K}_{22}$ will be provided in Theorem 1.
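Implementing (14) requires evaluating ${\Gamma }_{1}\left( {e\left( t\right) }\right) = \int_{-d}^{0} Q\left( -d-\tau\right) {A}_{1} e\left( t+\tau\right) \mathrm{d}\tau$ from the stored state history. A sketch of that quadrature (scalar case, with a hypothetical kernel $Q(s)=e^{-|s|}$ and a constant history buffer, so the result can be checked against the closed form $(1-e^{-d})a_1 e_0$):

```python
import numpy as np

# Quadrature of Gamma_1(e_t) = \int_{-d}^{0} Q(-d - tau) A1 e(t + tau) dtau
# in the scalar case, with hypothetical Q(s) = exp(-|s|) and constant
# history e(t + tau) = e0. Closed form: (1 - exp(-d)) * a1 * e0.
d, a1, e0 = 1.0, 0.5, 2.0
N = 20001
tau = np.linspace(-d, 0.0, N)
hist = np.full(N, e0)                  # stored history buffer for e(t + tau)

Qvals = np.exp(-np.abs(-d - tau))      # Q(-d - tau); argument lies in [-d, 0]
vals = Qvals * a1 * hist
gamma1 = np.sum((vals[1:] + vals[:-1]) * np.diff(tau)) / 2.0  # trapezoid
exact = (1.0 - np.exp(-d)) * a1 * e0
print(abs(gamma1 - exact) < 1e-6)
```

In a real implementation the history buffer would be refreshed at every control step from measured states rather than held constant.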
Theorem 1: Consider the UMVs (5) under Assumption 1. The guaranteed cost DP controller is defined by (14). For the given positive definite matrices $\mathbb{N} \in {\mathbb{R}}^{m \times m},\mathbb{Y} \mathrel{\text{:=}}$ $\left\lbrack \begin{array}{ll} {\mathbb{Y}}_{11} & {\mathbb{Y}}_{12} \\ {\mathbb{Y}}_{12}^{\mathrm{T}} & {\mathbb{Y}}_{22} \end{array}\right\rbrack \in {\mathbb{R}}^{{2n} \times {2n}},\mathcal{P} \in {\mathbb{R}}^{n \times n}$ , and a positive constant ${\gamma }_{0}$ , if there exist positive definite matrices $\mathcal{R},{\mathcal{Q}}_{1} \in {\mathbb{R}}^{n \times n}$ , and matrices ${K}_{21} \in {\mathbb{R}}^{p \times p},{K}_{22} \in {\mathbb{R}}^{p \times n}$ such that $\mathcal{P} - {\mathcal{Q}}_{1} - {\mathcal{P}}_{1} > 0$ and the following inequality holds,
$$
E \mathrel{\text{:=}} \left\lbrack \begin{matrix} \mathcal{P} + {\mathcal{Q}}_{1} + {\mathcal{P}}_{1} - {E}_{1} & {E}_{2} & {E}_{3} \\ {E}_{2}^{\mathrm{T}} & - {\mathcal{Q}}_{1} + {\mathbb{Y}}_{22} & \frac{1}{2}{K}_{22}^{\mathrm{T}}{B}_{1}^{\mathrm{T}} \\ {E}_{3}^{\mathrm{T}} & \frac{1}{2}{B}_{1}{K}_{22} & {E}_{4} \end{matrix}\right\rbrack < 0, \tag{15}
$$
where
$$
{E}_{1} = \frac{1}{2}Q\left( 0\right) {B}_{1}\left( {{K}_{21} + {K}_{21}^{\mathrm{T}}}\right) {B}_{1}^{\mathrm{T}}Q\left( 0\right) - {\mathbb{Y}}_{11} - {C}_{z}^{\mathrm{T}}{C}_{z} - {\gamma }_{0}^{-2}Q\left( 0\right) {B}_{2}{B}_{2}^{\mathrm{T}}Q\left( 0\right) - Q\left( 0\right) F\mathbb{N}{F}^{\mathrm{T}}Q\left( 0\right) ,
$$

$$
{E}_{2} = \frac{1}{2}Q\left( 0\right) {B}_{1}{K}_{22} + {\mathbb{Y}}_{12},\quad {E}_{3} = Q\left( 0\right) {B}_{1}{K}_{21}{B}_{1}^{\mathrm{T}} + Q\left( 0\right) F\mathbb{N}{F}^{\mathrm{T}} + {\gamma }_{0}^{-2}Q\left( 0\right) {B}_{2}{B}_{2}^{\mathrm{T}},
$$

$$
{E}_{4} = - \frac{\mathcal{R}}{d} + {B}_{1}{K}_{21}{B}_{1}^{\mathrm{T}} + F\mathbb{N}{F}^{\mathrm{T}} + {\gamma }_{0}^{-2}{B}_{2}{B}_{2}^{\mathrm{T}},
$$
then the state of the UMVs in system (5) asymptotically converges to zero, while maintaining an ${H}_{\infty }$ norm bound of ${\gamma }_{0}$.
Proof 2: The time derivative of $\mathfrak{V}\left( {e\left( t\right) }\right)$ along the trajectory of the UMVs (5) can be calculated as follows:
$$
{\left. \frac{\mathrm{d}\mathfrak{V}\left( {e\left( t\right) }\right) }{\mathrm{d}t}\right| }_{\left( 5\right) } + {\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) - {\gamma }_{0}^{2}{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right) = - {U}_{0}\left( {e\left( t\right) }\right) + {\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) - {\gamma }_{0}^{2}{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right) + 2{g}^{\mathrm{T}}\left( {e\left( t\right) , e\left( {t - d}\right) }\right) {F}^{\mathrm{T}}\left\lbrack {Q\left( 0\right) e\left( t\right) + {\Gamma }_{1}\left( {e\left( t\right) }\right) }\right\rbrack + 2{\left\lbrack Q\left( 0\right) e\left( t\right) + {\Gamma }_{1}\left( e\left( t\right) \right) \right\rbrack }^{\mathrm{T}}{B}_{2}\omega \left( t\right) + 2{\left\lbrack Q\left( 0\right) e\left( t\right) + {\Gamma }_{1}\left( e\left( t\right) \right) \right\rbrack }^{\mathrm{T}}{B}_{1}u\left( t\right) , \tag{16}
$$
where
$$
{U}_{0}\left( e\right) = {e}^{\mathrm{T}}\left( t\right) \left( {\mathcal{P} - {\mathcal{Q}}_{1} - {\mathcal{P}}_{1}}\right) e\left( t\right) + {e}^{\mathrm{T}}\left( {t - d}\right) {\mathcal{Q}}_{1}e\left( {t - d}\right) + {\int }_{-d}^{0}{e}^{\mathrm{T}}\left( {t + \tau }\right) {A}_{1}^{\mathrm{T}}{Q}^{\mathrm{T}}\left( {-d - \tau }\right) \mathcal{R}Q\left( {-d - \tau }\right) {A}_{1}e\left( {t + \tau }\right) \mathrm{d}\tau ,
$$

$$
{\mathcal{P}}_{1} = {\int }_{-d}^{0}{A}_{1}^{\mathrm{T}}{Q}^{\mathrm{T}}\left( {-d - \tau }\right) \mathcal{R}Q\left( {-d - \tau }\right) {A}_{1}\mathrm{\;d}\tau .
$$
Substituting (14) into (16), we have
$$
{\left. \frac{\mathrm{d}\mathfrak{V}\left( {e\left( t\right) }\right) }{\mathrm{d}t}\right| }_{\left( 5\right) } + {\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) - {\gamma }_{0}^{2}{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right) \leq {\Gamma }^{\mathrm{T}}\left( t\right) {E\Gamma }\left( t\right) , \tag{17}
$$
where
$$
\Gamma \left( t\right) = {\left\lbrack {e}^{\mathrm{T}}\left( t\right) \;{e}^{\mathrm{T}}\left( t - d\right) \;{\Gamma }_{1}^{\mathrm{T}}\left( e\left( t\right) \right) \right\rbrack }^{\mathrm{T}},
$$

and $E$, together with its blocks ${E}_{1}$, ${E}_{2}$, ${E}_{3}$, ${E}_{4}$, is exactly the matrix defined in (15).
Since $E < 0$, it follows that

$$
{\left. \frac{\mathrm{d}\mathfrak{V}\left( {e\left( t\right) }\right) }{\mathrm{d}t}\right| }_{\left( 5\right) } + {\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) - {\gamma }_{0}^{2}{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right) \leq 0. \tag{18}
$$
If the conditions of Theorem 1 hold, then ${\int }_{{t}_{0}}^{t}{\Gamma }^{\mathrm{T}}\left( \tau \right) {E\Gamma }\left( \tau \right) \mathrm{d}\tau < 0$, and consequently:

$$
0 \leq {\epsilon }_{\min }\parallel e\left( t\right) {\parallel }^{2} \leq \mathfrak{V}\left( e\right) \leq \mathfrak{V}\left( {e\left( {t}_{0}\right) }\right) - {\int }_{{t}_{0}}^{t}{\mathcal{Z}}^{\mathrm{T}}\left( \tau \right) \mathcal{Z}\left( \tau \right) \mathrm{d}\tau + {\gamma }_{0}^{2}{\int }_{{t}_{0}}^{t}{\omega }^{\mathrm{T}}\left( \tau \right) \omega \left( \tau \right) \mathrm{d}\tau ,\quad t > {t}_{0}. \tag{19}
$$
Clearly
$$
\mathop{\lim }\limits_{{t \rightarrow \infty }}{\int }_{{t}_{0}}^{t}{\Gamma }^{\mathrm{T}}\left( \tau \right) {E\Gamma }\left( \tau \right) \mathrm{d}\tau \leq \mathfrak{V}\left( {e\left( {t}_{0}\right) }\right) . \tag{20}
$$
We obtain
$$
\mathop{\lim }\limits_{{t \rightarrow \infty }}\parallel e\left( t\right) \parallel = 0. \tag{21}
$$
By integrating equation (18) from 0 to $\infty$, we obtain

$$
{\int }_{0}^{\infty }{\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) \mathrm{d}t \leq {\gamma }_{0}^{2}{\int }_{0}^{\infty }{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right) \mathrm{d}t + \mathfrak{V}\left( 0\right) . \tag{22}
$$
## B. Guaranteed Cost Analysis
When the disturbance $\omega \left( t\right)$ is absent, combining (8), (14), and (18) yields:
$$
{\left. \frac{\mathrm{d}\mathfrak{V}\left( {e\left( t\right) }\right) }{\mathrm{d}t}\right| }_{\left( 5\right) } + {e}^{\mathrm{T}}\left( t\right) {\Omega e}\left( t\right) + {u}^{\mathrm{T}}\left( t\right) {\mathbb{R}}_{q}u\left( t\right) \leq {\Gamma }^{\mathrm{T}}\left( t\right) \left( {E + \operatorname{diag}\left( {\Omega ,0,0}\right) + \frac{1}{4}{O}^{\mathrm{T}}{\mathbb{R}}_{q}O}\right) \Gamma \left( t\right) , \tag{23}
$$

where

$$
O = \left\lbrack \begin{array}{lll} - \left( {\mathbb{Y} + {K}_{21}}\right) {B}_{1}^{\mathrm{T}}Q\left( 0\right) & {K}_{22} & - \left( {\mathbb{Y} + {K}_{21}}\right) {B}_{1}^{\mathrm{T}} \end{array}\right\rbrack .
$$
We have
$$
\left\lbrack \begin{matrix} E + \operatorname{diag}\left( {\Omega ,0,0}\right) & {O}^{\mathrm{T}} \\ O & - 4{\mathbb{R}}_{q}^{-1} \end{matrix}\right\rbrack < 0.
$$
Hence,
$$
{\int }_{0}^{\infty }\left\lbrack {{e}^{\mathrm{T}}\left( t\right) {\Omega e}\left( t\right) + {u}^{\mathrm{T}}\left( t\right) {\mathbb{R}}_{q}u\left( t\right) }\right\rbrack \mathrm{d}t \leq {J}^{ * },
$$

where ${J}^{ * } = \mathfrak{V}\left( {e\left( 0\right) }\right)$, with $\mathfrak{V}\left( \cdot \right)$ defined in (12).
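A toy numeric illustration of the guaranteed cost idea (a scalar, delay-free closed loop with hand-picked numbers, not values produced by the LMIs of this paper): the achieved cost $J$ stays below a Lyapunov-type bound evaluated at the initial condition:

```python
import numpy as np

# Scalar closed loop e'(t) = -e(t) (u = -e), e(0) = 1, Omega = 1, Rq = 0.5,
# so J = \int (1 + 0.5) e^{-2t} dt = 0.75. Bound: a hand-picked functional
# value J* = h*e0^2 + L*d*e0^2 (constant initial history), h = 2, L = 1, d = 1.
t = np.linspace(0.0, 40.0, 400001)
e = np.exp(-t)
u = -e
integrand = 1.0 * e**2 + 0.5 * u**2
J = np.sum((integrand[1:] + integrand[:-1]) * np.diff(t)) / 2.0  # trapezoid

h, L, d, e0 = 2.0, 1.0, 1.0, 1.0
J_star = h * e0**2 + L * d * e0**2       # = 3.0
print(J <= J_star)                        # guaranteed cost bound holds
```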
## VI. Simulation Example
The proposed control method's effectiveness is demonstrated through a standard floating production vessel model, as referenced in [23]. The matrices $\xi ,\mathcal{C}$ , and $\mathcal{D}$ are specified in [23], and the thruster configuration matrix $\mathcal{G}$ is derived from [24].
The initial condition is given as $\phi \left( s\right) = {\left\lbrack \begin{array}{llllll} 0 & 0 & 0 & 0 & 0 & {0.2} \end{array}\right\rbrack }^{\mathrm{T}}$, with the reference signal set to ${x}_{\text{ref }} = {\left\lbrack \begin{array}{llllll} {0.01} & -{0.01} & {0.05} & {0.01} & {0.04} & {0.01} \end{array}\right\rbrack }^{\mathrm{T}}$. The time delay is $d = 1$, and the ${H}_{\infty }$ performance index is ${\gamma }_{0} = 2$.
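A simulation loop of this kind can be sketched as below (placeholder 2-state matrices and a stabilizing gain chosen purely for illustration; the actual vessel matrices come from [23], [24] and the gains from the LMIs above):

```python
import numpy as np

# Minimal closed-loop simulation skeleton for a delayed error system
# e'(t) = A e(t) + A1 e(t-d) + B1 u(t), with u = -K11 e(t) only (the full
# controller (14) would add the u2 term). All values are placeholders.
A   = np.array([[0.0, 1.0], [0.0, -1.0]])
A1  = np.array([[0.0, 0.0], [0.1, 0.0]])
B1  = np.array([[0.0], [1.0]])
K11 = np.array([[2.0, 3.0]])               # stabilizing for this toy model
d, dt, T = 1.0, 0.01, 30.0
nd, n = int(d / dt), int(T / dt)

e = np.zeros((n, 2))
e[0] = [0.0, 0.2]                          # constant pre-history phi
for i in range(n - 1):
    delayed = e[i - nd] if i >= nd else e[0]
    u = -K11 @ e[i]
    e[i + 1] = e[i] + dt * (A @ e[i] + A1 @ delayed + B1 @ u)

print(np.linalg.norm(e[-1]) < 1e-3)        # error converges toward zero
```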
The controller gain matrix ${K}_{11}$ is obtained by solving the LMI (11) from Lemma 1, as follows:
$$
{K}_{11} = \left\lbrack \begin{matrix} 3.7401 & -1.0550 & 1.6703 & 3.8794 & -0.4071 & 0.6533 \\ 3.5625 & -0.3782 & 0.8900 & 3.8305 & 0.0888 & 0.2145 \\ -1.8457 & 7.7381 & -7.8852 & -0.4836 & 5.9344 & -4.2105 \\ -1.7986 & 7.5585 & -7.6782 & -0.4706 & 5.8028 & -4.0941 \\ -0.2156 & 1.5274 & -0.7243 & -0.0351 & 1.3831 & -0.1833 \\ -0.4379 & 2.3744 & -1.7009 & -0.0963 & 2.0038 & -0.7325 \end{matrix}\right\rbrack .
$$
We set the matrix $\mathcal{P} = I$. The $(i, j)$-th element of the matrix $Q\left( \theta \right)$, denoted as ${Q}_{ij}\left( \theta \right)$, is determined using the method proposed in [22]. Figures 1-2 show the values of ${Q}_{ij}\left( \theta \right)$ for $\theta \in \left\lbrack {0,1}\right\rbrack$.
Finally, by solving LMI (15) as described in Theorem 1, the controller gain matrices ${K}_{21}$ and ${K}_{22}$ are computed as:
$$
{K}_{21} = 1 \times {10}^{4}\left\lbrack \begin{matrix} 0.0284 & 0.0561 & 0.0446 & 0.0381 & -0.0108 & -0.0257 \\ -0.0249 & -0.0535 & -0.0615 & -0.0140 & 0.0119 & 0.0273 \\ -0.0160 & 0.0215 & 0.0366 & 0.0723 & -0.0709 & -0.0315 \\ 0.0187 & -0.0010 & -0.0542 & -0.0511 & 0.0035 & 0.1249 \\ -0.2113 & 0.2496 & -0.1101 & -0.0808 & -0.9459 & 1.2040 \\ -0.0871 & 0.0356 & -0.0328 & 0.1207 & 0.5283 & -0.6940 \end{matrix}\right\rbrack ,
$$

Figure 1. Lyapunov matrix ${Q}_{ij}\left( \theta \right)$, $\left( i = 1,2,3;\; j = 1,\ldots ,6\right)$.

Figure 2. Lyapunov matrix ${Q}_{ij}\left( \theta \right)$, $\left( i = 4,5,6;\; j = 1,\ldots ,6\right)$.
$$
{K}_{22} = \left\lbrack \begin{matrix} -15.4416 & 8.9036 & 66.6063 & 35.6011 & 15.1773 & -22.0347 \\ -18.9989 & -43.3441 & -101.0469 & -70.0417 & -49.6179 & -12.4059 \\ 22.5784 & 53.9648 & 21.5477 & -26.9017 & 10.6883 & 43.2947 \\ -82.2859 & -141.7415 & 16.5537 & -69.8399 & -34.0587 & -92.0165 \\ -118.7051 & -303.3277 & 414.2256 & -396.6715 & 76.7366 & -41.8016 \\ 118.3731 & 331.0541 & -512.3389 & 433.3628 & -113.3949 & 30.4866 \end{matrix}\right\rbrack .
$$
Figures 3-4 illustrate the trajectories of the position error, yaw angle error, and velocity error for UMVs (5). Figure 5 shows the control inputs produced by the controller as defined in (14).

Figure 3. Response curves of UMVs position and yaw angle error.

Figure 4. Response curves of UMVs velocity error.

Figure 5. The comparison of response curves for $u\left( t\right)$.
In Figure 3, it is clear that the error curves under the proposed control initially exhibit small fluctuations before gradually converging to zero. This demonstrates the effectiveness of the proposed control strategy. Figure 5 illustrates the response curves of the guaranteed cost DP controller $u\left( t\right)$ .

## CONCLUSION

In this paper, we have addressed the guaranteed cost dynamic positioning control problem for UMVs with time delays. First, we propose a complete-type LKF for UMVs with time delays, which leads to less conservatism. Furthermore, a novel approach for designing a guaranteed cost dynamic positioning controller for DP systems is proposed. The specific form of this controller is derived from feasible solutions of LMIs. The proposed method was validated through simulation, demonstrating its effectiveness. Future work will focus on extending the control strategy to systems with time-varying delays, further enhancing the robustness of DP control for UMVs.
## REFERENCES

[1] X. Hu, G. Zhu, Y. Ma, Z. Li, R. Malekian, and M. Á. Sotelo, "Event-triggered adaptive fuzzy setpoint regulation of surface vessels with unmeasured velocities under thruster saturation constraints," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 8, pp. 13463-13472, 2021.

[2] V. Bertram, "Unmanned surface vehicles-a survey," Skibsteknisk Selskab, Copenhagen, Denmark, vol. 1, pp. 1-14, 2008.

[3] L.-Y. Hao, H. Zhang, T.-S. Li, B. Lin, and C. P. Chen, "Fault tolerant control for dynamic positioning of unmanned marine vehicles based on T-S fuzzy model with unknown membership functions," IEEE Transactions on Vehicular Technology, vol. 70, no. 1, pp. 146-157, 2021.

[4] Y.-L. Wang, Q.-L. Han, M.-R. Fei, and C. Peng, "Network-based T-S fuzzy dynamic positioning controller design for unmanned marine vehicles," IEEE Transactions on Cybernetics, vol. 48, no. 9, pp. 2750-2763, 2018.

[5] Z. Ye, D. Zhang, and Z.-G. Wu, "Adaptive event-based tracking control of unmanned marine vehicle systems with DoS attack," Journal of the Franklin Institute, vol. 358, no. 3, pp. 1915-1939, 2021.

[6] L.-Y. Hao, H. Zhang, W. Yue, and H. Li, "Fault-tolerant compensation control based on sliding mode technique of unmanned marine vehicles subject to unknown persistent ocean disturbances," International Journal of Control, Automation and Systems, vol. 18, no. 3, pp. 739-752, 2020.

[7] X. Yang, Y. Wang, and X. Zhang, "Lyapunov matrix-based method to guaranteed cost control for a class of delayed continuous-time nonlinear systems," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 52, no. 1, pp. 554-560, 2020.

[8] X. Wang and G.-H. Yang, "Fault-tolerant consensus tracking control for linear multiagent systems under switching directed network," IEEE Transactions on Cybernetics, vol. 50, no. 5, pp. 1921-1930, 2019.

[9] X. Yang, Y. Wang, X. Yang, and X. Zhang, "Lyapunov matrix-based method to global robust practical exponential r-stability for a class of delayed continuous-time nonlinear systems: Theory and applications," International Journal of Robust and Nonlinear Control, vol. 32, no. 18, pp. 10234-10250, 2022.

[10] L.-Y. Hao, H. Zhang, H. Li, and T.-S. Li, "Sliding mode fault-tolerant control for unmanned marine vehicles with signal quantization and time-delay," Ocean Engineering, vol. 215, p. 107882, 2020.

[11] X. Yang, L.-Y. Hao, T. Li, and Y. Xiao, "Dynamic positioning control for unmanned marine vehicles with thruster faults and time delay: A Lyapunov matrix-based method," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2024.

[12] J. Kim, H. Joe, S.-c. Yu, J. S. Lee, and M. Kim, "Time-delay controller design for position control of autonomous underwater vehicle under disturbances," IEEE Transactions on Industrial Electronics, vol. 63, no. 2, pp. 1052-1061, 2015.

[13] J. Yan, J. Gao, X. Yang, X. Luo, and X. Guan, "Position tracking control of remotely operated underwater vehicles with communication delay," IEEE Transactions on Control Systems Technology, vol. 28, no. 6, pp. 2506-2514, 2019.

[14] T. Zhang and G. Liu, "Predictive tracking control of network-based agents with communication delays," IEEE/CAA Journal of Automatica Sinica, vol. 5, no. 6, pp. 1150-1156, 2018.

[15] S. Chang and T. Peng, "Adaptive guaranteed cost control of systems with uncertain parameters," IEEE Transactions on Automatic Control, vol. 17, no. 4, pp. 474-483, 1972.

[16] D. Wang and D. Liu, "Learning and guaranteed cost control with event-based adaptive critic implementation," IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 12, pp. 6004-6014, 2018.

[17] J.-Q. Wang, Z.-J. Zou, and T. Wang, "Path following of a surface ship sailing in restricted waters under wind effect using robust ${H}_{\infty }$ guaranteed cost control," International Journal of Naval Architecture and Ocean Engineering, vol. 11, no. 1, pp. 606-623, 2019.

[18] R. Lu, H. Cheng, and J. Bai, "Fuzzy-model-based quantized guaranteed cost control of nonlinear networked systems," IEEE Transactions on Fuzzy Systems, vol. 23, no. 3, pp. 567-575, 2014.

[19] T. Liu, Y. Xiao, Y. Feng, J. Li, and B. Huang, "Guaranteed cost control for dynamic positioning of marine surface vessels with input saturation," Applied Ocean Research, vol. 116, p. 102868, 2021.

[20] L.-Y. Hao, H. Zhang, G. Guo, and H. Li, "Quantized sliding mode control of unmanned marine vehicles: Various thruster faults tolerated with a unified model," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 51, no. 3, pp. 2012-2026, 2019.

[21] X. Wang and G.-H. Yang, "Cooperative adaptive fault-tolerant tracking control for a class of multi-agent systems with actuator failures and mismatched parameter uncertainties," IET Control Theory & Applications, vol. 9, no. 8, pp. 1274-1284, 2015.

[22] V. Kharitonov, Time-Delay Systems: Lyapunov Functionals and Matrices. Springer Science & Business Media, 2012.

[23] M. Breivik and T. I. Fossen, "Guidance laws for autonomous underwater vehicles," Underwater Vehicles, vol. 4, pp. 51-76, 2009.

[24] T. I. Fossen, S. I. Sagatun, and A. J. Sørensen, "Identification of dynamically positioned ships," Control Engineering Practice, vol. 4, no. 3, pp. 369-376, 1996.

papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/HFrWfFXFQo/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,543 @@
§ LYAPUNOV MATRIX-BASED GUARANTEED COST DYNAMIC POSITIONING CONTROL FOR UNMANNED MARINE VEHICLES WITH TIME DELAY

${1}^{\text{ st }}$ Xin Yang, College of Navigation, Dalian Maritime University, Dalian, China, yangxin3541@163.com

${2}^{\text{ nd }}$ Li-Ying Hao*, College of Marine Electrical Engineering, Dalian Maritime University, Dalian, China, haoliying_0305@163.com

${3}^{\text{ rd }}$ Tieshan Li*, College of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, China, tieshanli@126.com

${4}^{\text{ th }}$ Yang Xiao, Department of Computer Science, The University of Alabama, Tuscaloosa, USA, yangxiao@ieee.org

${5}^{\text{ th }}$ Guoyong Liu, College of Marine Electrical Engineering, Dalian Maritime University, Dalian, China, liuguoyong0806@163.com

Abstract-This paper presents a Lyapunov matrix-based guaranteed cost dynamic positioning controller for unmanned marine vehicles (UMVs) with time delays. A novel Lyapunov-Krasovskii functional (LKF) is introduced, which enhances the analysis of time delays and system states. The controller design leverages the LMI framework alongside Jensen's inequality to determine sufficient criteria for its feasibility, ensuring that the UMVs' state errors gradually reduce to zero and providing an adaptive ${H}_{\infty }$ performance guarantee. Additionally, the cost function is upper-bounded, and the effectiveness of the method is demonstrated through simulation results.

Index Terms-Lyapunov matrix, time delays, guaranteed cost control (GCC), dynamic positioning (DP), unmanned marine vehicles (UMVs)

§ I. INTRODUCTION

Unmanned Marine Vehicles (UMVs) play a pivotal role in enhancing maritime safety and security by performing high-risk operations effectively without endangering human lives, thereby revolutionizing search and rescue missions and coastal surveillance [1]-[3]. Compared with traditional anchor mooring, dynamic positioning (DP) offers a more versatile, precise, and environmentally friendly method for positioning vessels, making it particularly suitable for complex or dynamic marine environments [4]. Over the years, numerous control strategies have been proposed to ensure robust DP control of UMVs. For instance, [5] introduces a dynamic output feedback control method tailored for DP ships to counter denial-of-service attacks. In [6], an adaptive sliding mode fault-tolerant compensation mechanism is designed to maintain DP control of UMVs despite thruster faults and unknown ocean disturbances. It is crucial to recognize that time delays are typically inevitable [7]-[9]. Consequently, there is an urgent need to develop a strategy to compensate for these time delays.

In DP systems for UMVs, time delays caused by network-mediated transmission of signals and control commands represent a significant challenge that often compromises system stability and performance [10], [11]. This issue has led to the development of various advanced time-delay compensation methods [12]-[14]. Among these, enhanced time-delay compensation approaches for autonomous underwater vehicles have shown promise [12]. In [13], model-free proportional-derivative controllers are innovatively incorporated into the Lyapunov-Krasovskii functional (LKF) framework to effectively counteract the impact of delays. Advanced strategies utilizing Lyapunov matrix-based LKF methods have proven particularly effective. These approaches leverage comprehensive information about time delays and system states, providing control strategies that efficiently accommodate time-delay systems. The primary motivation of this paper is to develop a complete LKF based on the Lyapunov matrix to mitigate the effects of time delays on UMVs.

On another research front, guaranteed cost control (GCC) has been extensively studied [15]-[17]. This strategy offers the advantage of setting an upper limit on a specified performance index, ensuring that any degradation of system performance remains below this predefined cost threshold. As vessels often navigate in complex and varied ocean environments, the impact of wind and wave disturbances becomes significant [17]. In response, [18] investigated a robust ${H}_{\infty }$ guaranteed cost controller aimed at enhancing path-following performance. The GCC method presented in [19] offers a way to reduce energy consumption for surface vessels in DP, thereby increasing its practical applicability. These results have inspired our research into GCC theory, particularly its application to DP ships. Thus, how to design a guaranteed cost controller based on the Lyapunov matrix to achieve effective DP control for UMVs is the second research motivation of this paper.

This work was supported by the National Natural Science Foundation of China (Grant Nos. 51939001, 52171292, 61976033) and the Dalian Outstanding Young Talents Program (2022RJ05).

* Corresponding authors. Emails: haoliying_0305@163.com; tieshanli@126.com

The primary objective of this paper is to design a Lyapunov matrix-based guaranteed cost dynamic positioning controller, utilizing the LMI method to ensure stability. The main contributions of this paper, in comparison with recent advancements in the field, are as follows.

1) We propose a novel time-delay compensation method for UMVs that incorporates more detailed time-delay and state information by employing a Lyapunov matrix-based complete-type LKF, which reduces conservatism compared to conventional time-delay compensation techniques.

2) A novel guaranteed cost DP control strategy is designed, ensuring the stability of DP systems for UMVs while providing an upper bound on a prespecified cost function.

The remainder of this paper is structured as follows: Section II describes the UMVs model with time delays. Section III reviews basic concepts and preliminary results, which serve as the theoretical basis for the proposed Lyapunov matrix-based LKF method. A complete-type LKF based on the Lyapunov matrix is presented in Section IV. Section V introduces the guaranteed cost dynamic positioning controller. Finally, Section VI presents simulations to illustrate the validity of the theoretical results.

§ II. UMVS MODELING AND PROBLEM DESCRIPTION

§ A. DYNAMIC MODELING FOR UMVS

The UMVs model typically employs a three-degrees-of-freedom motion equation to describe its dynamic behavior in the marine environment. These three degrees of freedom are surge, sway, and yaw. Therefore, the dynamic equations of the UMVs are often simplified and expressed in the following form [20]:

$$
\xi \dot{v}\left( t\right) + \mathcal{C}v\left( t\right) + \mathcal{D}\lambda \left( t\right) = \mathcal{G}u\left( t\right) , \tag{1}
$$

$$
\dot{\lambda }\left( t\right) = \mathcal{S}\left( {\theta \left( t\right) }\right) v\left( t\right) , \tag{2}
$$

where matrix $\xi$ represents the inertia matrix, and the velocity vector $v\left( t\right) = {\left\lbrack {v}_{1}\left( t\right) ,{v}_{2}\left( t\right) ,{v}_{3}\left( t\right) \right\rbrack }^{\mathrm{T}}$ describes the ship’s motion in different directions, where ${v}_{1}\left( t\right)$ represents the surge velocity, ${v}_{2}\left( t\right)$ indicates the sway velocity, and ${v}_{3}\left( t\right)$ corresponds to the yaw rate. The position vector $\lambda \left( t\right) =$ ${\left\lbrack {x}_{o}\left( t\right) ,{y}_{o}\left( t\right) ,\theta \left( t\right) \right\rbrack }^{\mathrm{T}}$ is used to describe the ship’s position and orientation on the water surface, where ${x}_{o}\left( t\right)$ and ${y}_{o}\left( t\right)$ represent the coordinates of the ship in the horizontal plane, and $\theta \left( t\right)$ denotes the ship’s heading angle. The matrix $\mathcal{C}$ is the damping matrix. The matrix $\mathcal{D}$ represents the mooring moment matrix, which models external disturbances such as wind, waves, and ocean currents acting on the UMVs. The matrix $\mathcal{G}$ is the thrust allocation matrix, responsible for distributing thrust to the ship's propellers. Additionally, the rotation matrix $\mathcal{S}\left( {\theta \left( t\right) }\right)$ is given by:

$$
\mathcal{S}\left( {\theta \left( t\right) }\right) = \left\lbrack \begin{matrix} \cos \left( {\theta \left( t\right) }\right) & - \sin \left( {\theta \left( t\right) }\right) & 0 \\ \sin \left( {\theta \left( t\right) }\right) & \cos \left( {\theta \left( t\right) }\right) & 0 \\ 0 & 0 & 1 \end{matrix}\right\rbrack .
$$
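
As an illustrative numerical sketch (not part of the paper's method), the rotation matrix above can be evaluated with NumPy; the helper `rotation` below is a hypothetical name introduced here:

```python
import numpy as np

def rotation(theta: float) -> np.ndarray:
    """Rotation matrix S(theta) acting on surge/sway, leaving yaw unchanged."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# For small yaw angles, S(theta) is close to the identity matrix,
# which is the approximation invoked in the text below.
S_small = rotation(0.01)
print(np.allclose(S_small, np.eye(3), atol=0.02))  # True
```

Note that `rotation(theta)` is orthogonal with unit determinant for any `theta`, which is why the small-angle identity approximation is lossless at `theta = 0`.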

For the DP control of UMVs, where the yaw angle $\theta \left( t\right)$ is small, the matrix $\mathcal{S}\left( {\theta \left( t\right) }\right)$ can be approximated by the identity matrix $I$ . We define the matrices ${\mathcal{A}}_{1} = - {\xi }^{-1}\mathcal{C},\mathcal{B} = {\xi }^{-1}\mathcal{G}$ , and $\mathcal{F} = - {\xi }^{-1}\mathcal{D}$ , and let $x\left( t\right) = {\left\lbrack {\lambda }^{\mathrm{T}}\left( t\right) ,{v}^{\mathrm{T}}\left( t\right) \right\rbrack }^{\mathrm{T}}$ . Thus, the dynamic equation of the UMVs can be written as follows:

$$
\dot{x}\left( t\right) = {Ax}\left( t\right) + {B}_{1}u\left( t\right) + {Fg}\left( {t,v\left( t\right) }\right) + \varpi \left( t\right) , \tag{3}
$$

where $A = \left\lbrack \begin{matrix} 0 & I \\ 0 & {\mathcal{A}}_{1} \end{matrix}\right\rbrack ,{B}_{1} = \left\lbrack \begin{array}{l} 0 \\ \mathcal{B} \end{array}\right\rbrack ,F = \left\lbrack \begin{matrix} 0 \\ \mathcal{F} \end{matrix}\right\rbrack$ , and $\varpi \left( t\right) \in {L}_{2}\lbrack 0,\infty )$ represents the disturbance. Defining the reference signal ${x}_{\text{ ref }} = \left\lbrack \begin{array}{l} {\lambda }_{\text{ ref }} \\ {v}_{\text{ ref }} \end{array}\right\rbrack$ , the error vector is $e\left( t\right) = x\left( t\right) - {x}_{\text{ ref }}$ . The error dynamics of the UMVs can be expressed as follows:

$$
\dot{e}\left( t\right) = {Ae}\left( t\right) + {B}_{1}u\left( t\right) + {Fg}\left( {t,e\left( t\right) }\right) + {B}_{2}\omega \left( t\right) . \tag{4}
$$

Let $e\left( t\right) \in {\mathbb{R}}^{n}$ denote the state vector and $u \in {\mathbb{R}}^{p}$ the control input vector. The term ${B}_{2}\omega \left( t\right)$ is defined as $A{x}_{\text{ ref }} + \varpi \left( t\right)$ , where $\omega \left( t\right) = \left\lbrack \begin{array}{l} {x}_{\text{ ref }} \\ \varpi \left( t\right) \end{array}\right\rbrack$ and ${B}_{2} = \left\lbrack \begin{array}{ll} A & I \end{array}\right\rbrack$ . Considering the unavoidable time delay during signal transmission, it follows from equation (4) that:

$$
\dot{e}\left( t\right) = {Ae}\left( t\right) + {A}_{1}e\left( {t - d}\right) + {B}_{1}u\left( t\right) + {Fg}\left( {e\left( t\right) ,e\left( {t - d}\right) }\right) + {B}_{2}\omega \left( t\right) , \tag{5}
$$

where $d > 0$ represents the time delay, and $g : {\mathbb{R}}^{n} \times {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{m}$ is assumed to satisfy the following inequality.

Assumption 1: Let matrices $\mathbb{N} > 0$ and $\mathbb{Y} > 0$ , where $\mathbb{N} \in {\mathbb{R}}^{m \times m}$ and $\mathbb{Y} \in {\mathbb{R}}^{{2n} \times {2n}}$ . The nonlinear function $g\left( \cdot \right)$ satisfies the following inequality:

$$
{g}^{\mathrm{T}}\left( {e\left( t\right) ,e\left( {t - d}\right) }\right) {\mathbb{N}}^{-1}g\left( {e\left( t\right) ,e\left( {t - d}\right) }\right) \leq \left\lbrack \begin{array}{ll} {e}^{\mathrm{T}}\left( t\right) & {e}^{\mathrm{T}}\left( {t - d}\right) \end{array}\right\rbrack \mathbb{Y}{\left\lbrack \begin{array}{ll} {e}^{\mathrm{T}}\left( t\right) & {e}^{\mathrm{T}}\left( {t - d}\right) \end{array}\right\rbrack }^{\mathrm{T}}.
$$

Remark 1: Assumption 1 ensures that the function $g\left( \cdot \right)$ is bounded. When $e\left( t\right) = 0$ or $e\left( {t - d}\right) = 0$ , Assumption 1 in this article reduces to a generalized form of Assumption 1 in reference [17].
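
A minimal numerical sketch of how a concrete nonlinearity can be checked against a bound of this form; the particular `g`, `N`, and `Y` below are illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Illustrative nonlinearity: a convex combination of bounded terms, so by
# convexity |g_i|^2 <= 0.5*e_i^2 + 0.5*ed_i^2 holds componentwise.
def g(e, ed):
    return 0.5 * np.sin(e) + 0.5 * np.sin(ed)

N = np.eye(n)            # weight on g in the Assumption-1 inequality
Y = 0.5 * np.eye(2 * n)  # block-diagonal bound matrix, large enough for this g

for _ in range(100):
    e = rng.standard_normal(n)
    ed = rng.standard_normal(n)
    lhs = g(e, ed) @ np.linalg.inv(N) @ g(e, ed)
    z = np.concatenate([e, ed])
    rhs = z @ Y @ z
    assert lhs <= rhs + 1e-12
print("Assumption 1 bound holds on all samples")
```

The choice of `Y` here is provable rather than empirical: `(0.5 sin e + 0.5 sin ed)^2 <= 0.5 e^2 + 0.5 ed^2` componentwise, so the randomized check cannot fail.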

To bring both linear and angular velocities to zero and minimize the impact of external disturbances such as wind, waves, and currents, the output $\mathcal{Z}\left( t\right)$ can be formulated as follows:

$$
\mathcal{Z}\left( t\right) = {C}_{z}e\left( t\right) . \tag{6}
$$

Definition 1: [21] The system is described by

$$
\dot{x}\left( t\right) = {A}_{d}x\left( t\right) + {B}_{d}\omega \left( t\right) ,
$$

$$
\mathcal{Z}\left( t\right) = {C}_{d}x\left( t\right) ,\;x\left( 0\right) = 0. \tag{7}
$$

Given a constant ${\gamma }_{0} > 0$ and $\omega \left( t\right) \in {L}_{2}\lbrack 0,\infty )$ , if for any $\epsilon > 0$ the following condition

$$
{\int }_{0}^{\infty }{\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) \mathrm{d}t \leq {\gamma }_{0}^{2}{\int }_{0}^{\infty }{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right) \mathrm{d}t + \epsilon ,
$$

is satisfied, then the system (7) is said to achieve an adaptive ${H}_{\infty }$ performance index that does not exceed ${\gamma }_{0}$ .
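
A hedged sketch of how this energy inequality can be checked by simulation, using an illustrative scalar instance of (7) (first-order lag, whose ${L}_{2}$ -gain is known to be 1), not the paper's UMV model:

```python
import numpy as np

# Hypothetical scalar instance of (7): A_d = -1, B_d = 1, C_d = 1.
# Its L2-gain equals 1, so gamma_0 = 1 should satisfy the inequality.
dt, T = 1e-3, 20.0
t = np.arange(0.0, T, dt)
omega = np.exp(-t)                 # a finite-energy (L2) disturbance
x = 0.0
z_energy = 0.0
w_energy = 0.0
for wk in omega:
    zk = x                         # z(t) = C_d x(t)
    z_energy += zk * zk * dt       # accumulate ∫ z^T z dt
    w_energy += wk * wk * dt       # accumulate ∫ w^T w dt
    x += dt * (-x + wk)            # forward-Euler step of x' = A_d x + B_d w
gamma0 = 1.0
print(z_energy <= gamma0**2 * w_energy)  # True
```

For this input the closed forms are $\int {z}^{2}\mathrm{d}t \approx 0.25$ and $\int {\omega }^{2}\mathrm{d}t \approx 0.5$ , so the inequality holds with margin even before adding any $\epsilon$ .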

Definition 2: The cost function related to system (5) is described as follows:

$$
J = {\int }_{0}^{\infty }\left\lbrack {{e}^{\mathrm{T}}\left( t\right) {\Omega e}\left( t\right) + {u}^{\mathrm{T}}\left( t\right) {\mathbb{R}}_{q}u\left( t\right) }\right\rbrack \mathrm{d}t, \tag{8}
$$

where ${\Omega }^{\mathrm{T}} = \Omega \geq 0$ and ${\mathbb{R}}_{q}^{\mathrm{T}} = {\mathbb{R}}_{q} \geq 0$ .

A stabilizing controller $u\left( t\right)$ for system (5) is called a guaranteed cost controller if it ensures that $J \leq {J}^{ * }$ , where ${J}^{ * }$ is a positive scalar. The value ${J}^{ * }$ is known as the guaranteed cost.
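
As a worked numerical sketch (an illustrative scalar closed loop, not the paper's design), the cost (8) can be evaluated by quadrature and compared against its exact value:

```python
import numpy as np

# Illustrative scalar closed loop: e(t) = exp(-t) under u(t) = -e(t),
# with weights Omega = R_q = 1 in the cost (8). The exact value is
# J = (Omega + R_q) * 1/2 = 1, so any J* >= 1 is a guaranteed cost here.
t = np.linspace(0.0, 30.0, 300001)
e = np.exp(-t)
u = -e
integrand = 1.0 * e**2 + 1.0 * u**2
# Composite trapezoidal rule (written out for NumPy-version portability).
J = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))
print(round(J, 4))  # 1.0
```

The point of the sketch is only the bookkeeping: a guaranteed cost controller certifies such a bound ${J}^{ * }$ analytically, without simulating the trajectory.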

§ B. CONTROL OBJECTIVE

For UMVs (5) affected by time delays, this paper proposes a guaranteed cost DP controller based on the Lyapunov matrix. The controller is designed to drive the state error of the UMVs to converge asymptotically to zero, while also satisfying the specified ${H}_{\infty }$ performance criteria and guaranteeing an upper limit on the predefined cost function.

§ III. PRELIMINARIES

We will construct a complete-type LKF for UMVs (5) based on the Lyapunov matrix. We begin by defining the Lyapunov matrix.

§ A. LYAPUNOV MATRIX

We first present relevant concepts related to linear time-delay systems as follows [22]:

$$
\dot{e}\left( t\right) = {Ae}\left( t\right) + {A}_{1}e\left( {t - d}\right) ,
$$

$$
e\left( \iota \right) = \phi \left( \iota \right) ,\;\iota \in \left\lbrack {-d,0}\right\rbrack , \tag{9}
$$

where $e\left( t\right) \in {\mathbb{R}}^{n}$ represents the state vector, $d > 0$ is the time delay, and $A,{A}_{1} \in {\mathbb{R}}^{n \times n}$ are system matrices.

Definition 3: [22] Given a matrix $\mathcal{P} > 0$ , if the matrix $Q : \left\lbrack {-d,d}\right\rbrack \rightarrow {\mathbb{R}}^{n \times n}$ meets the following conditions:

$$
\dot{Q}\left( \pi \right) = Q\left( \pi \right) A + Q\left( {\pi - d}\right) {A}_{1},
$$

$$
Q\left( {-\pi }\right) = {Q}^{\mathrm{T}}\left( \pi \right) ,
$$

$$
- \mathcal{P} = Q\left( 0\right) A + Q\left( {-d}\right) {A}_{1} + {A}^{\mathrm{T}}Q\left( 0\right) + {A}_{1}^{\mathrm{T}}Q\left( d\right) , \tag{10}
$$

then $Q\left( \cdot \right)$ is called a Lyapunov matrix of system (9) associated with $\mathcal{P}$ .

Definition 4: [22] If the system (9) is asymptotically stable, then there exists a Lyapunov matrix $Q\left( \cdot \right)$ associated with matrix $\mathcal{P}$ for system (9).
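
In the delay-free special case ${A}_{1} = 0$ , conditions (10) collapse to the classical Lyapunov equation ${A}^{\mathrm{T}}Q\left( 0\right) + Q\left( 0\right) A = - \mathcal{P}$ , which can be solved directly; a sketch with illustrative matrices (not from the paper):

```python
import numpy as np

# With A1 = 0, conditions (10) reduce to A^T Q0 + Q0 A = -P.
# Solve in vectorized (row-major) form:
#   (kron(A^T, I) + kron(I, A^T)) vec(Q0) = -vec(P).
A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])   # Hurwitz, so a unique Q0 > 0 exists
P = np.eye(2)
n = A.shape[0]
I = np.eye(n)
M = np.kron(A.T, I) + np.kron(I, A.T)
Q0 = np.linalg.solve(M, -P.flatten()).reshape(n, n)
residual = A.T @ Q0 + Q0 @ A + P
print(np.max(np.abs(residual)) < 1e-12)  # True
```

With a nonzero delay term, $Q\left( \cdot \right)$ instead satisfies the matrix delay boundary-value problem above, and the same residual-check idea applies pointwise in $\pi$ .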

Lemma 1: Suppose there exist matrices $H = {H}^{\mathrm{T}} > 0$ and ${K}_{11} \in {\mathbb{R}}^{p \times n}$ such that for $U > 0$ the following LMI condition is satisfied:

$$
\left\lbrack \begin{matrix} {\Lambda }_{2} & {A}_{1}X \\ {\left( {A}_{1}X\right) }^{\mathrm{T}} & - U \end{matrix}\right\rbrack < 0, \tag{11}
$$

where ${\Lambda }_{2} = {AX} - {B}_{1}{Y}_{1} + {\left( AX - {B}_{1}{Y}_{1}\right) }^{\mathrm{T}} + U$ , $X = {H}^{-1}$ , ${Y}_{1} = {K}_{11}{H}^{-1}$ , and $U = {H}^{-1}L{H}^{-1}$ . Then there exists a controller ${u}_{1}\left( t\right) = - {K}_{11}e\left( t\right)$ that guarantees system (9) is asymptotically stable.

Proof 1: Select the Lyapunov function:

$$
{V}_{c}\left( {e\left( t\right) }\right) = {e}^{\mathrm{T}}\left( t\right) {He}\left( t\right) + {\int }_{t - d}^{t}{e}^{\mathrm{T}}\left( \theta \right) {Le}\left( \theta \right) \mathrm{d}\theta .
$$

We can derive:

$$
{\left. \frac{\mathrm{d}{V}_{c}\left( {e\left( t\right) }\right) }{\mathrm{d}t}\right| }_{\left( 9\right) } = {\Lambda }_{0}^{\mathrm{T}}{\Omega }_{1}{\Lambda }_{0},
$$

where

$$
{\Lambda }_{0} = {\left\lbrack {e}^{\mathrm{T}}\left( t\right) ,{e}^{\mathrm{T}}\left( t - d\right) \right\rbrack }^{\mathrm{T}},\;{\Omega }_{1} = \left\lbrack \begin{matrix} {\Lambda }_{2} & {A}_{1}X \\ {\left( {A}_{1}X\right) }^{\mathrm{T}} & - U \end{matrix}\right\rbrack ,
$$

$$
{\Lambda }_{2} = {AX} - {B}_{1}{Y}_{1} + {\left( AX - {B}_{1}{Y}_{1}\right) }^{\mathrm{T}} + U,\;X = {H}^{-1},\;{Y}_{1} = {K}_{11}{H}^{-1},\;U = {H}^{-1}L{H}^{-1}.
$$

Using Lyapunov stability theory, the controller ${u}_{1}\left( t\right) = - {K}_{11}e\left( t\right)$ guarantees the asymptotic stability of system (9).
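
One way to sanity-check condition (11) for candidate data is to assemble the block matrix and inspect its eigenvalues; the scalar numbers below are illustrative, not from the paper:

```python
import numpy as np

# Illustrative scalar data for system (9) and LMI (11): A = -2, A1 = 0.5,
# B1 = 1, candidate gain K11 = 1, weights H = L = 1 (so X = Y1 = U = 1).
A, A1, B1 = -2.0, 0.5, 1.0
H, L, K11 = 1.0, 1.0, 1.0
X = 1.0 / H
Y1 = K11 * X
U = X * L * X
Lam2 = 2.0 * (A * X - B1 * Y1) + U          # Λ2 = AX - B1 Y1 + (·)^T + U
Omega1 = np.array([[Lam2, A1 * X],
                   [A1 * X, -U]])
# Negative definiteness of Omega1 is exactly condition (11).
print(np.linalg.eigvalsh(Omega1).max() < 0)  # True: LMI (11) is satisfied
```

For an actual synthesis (searching over `X`, `Y1`, `U` rather than checking fixed values), one would hand (11) to a semidefinite programming solver and recover ${K}_{11} = {Y}_{1}{X}^{-1}$ from the feasible point.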

§ IV. A COMPLETE-TYPE LKF

We construct an LKF $\mathfrak{V}\left( \cdot \right)$ :

$$
\mathfrak{V}\left( {e\left( t\right) }\right) = {\mathfrak{V}}_{1}\left( {e\left( t\right) }\right) + {\mathfrak{V}}_{2}\left( {e\left( t\right) }\right) ,\;e \in {C}_{p}\left( {\left\lbrack {-d,0}\right\rbrack ,{\mathbb{R}}^{n}}\right) , \tag{12}
$$

where

$$
{\mathfrak{V}}_{1}\left( {e\left( t\right) }\right) = {e}^{\mathrm{T}}\left( t\right) Q\left( 0\right) e\left( t\right) + 2{e}^{\mathrm{T}}\left( t\right) {\Gamma }_{1}\left( {e\left( t\right) }\right) + {\int }_{-d}^{0}{\int }_{-d}^{0}{e}^{\mathrm{T}}\left( {t + {\tau }_{1}}\right) {A}_{1}^{\mathrm{T}}Q\left( {{\tau }_{1} - {\tau }_{2}}\right) {A}_{1}e\left( {t + {\tau }_{2}}\right) \mathrm{d}{\tau }_{1}\mathrm{\;d}{\tau }_{2},
$$

$$
{\mathfrak{V}}_{2}\left( {e\left( t\right) }\right) = {\int }_{-d}^{0}{\int }_{\tau }^{0}{e}^{\mathrm{T}}\left( {t + s}\right) {A}_{1}^{\mathrm{T}}{Q}^{\mathrm{T}}\left( {-d - \tau }\right) \mathcal{R}Q\left( {-d - \tau }\right) {A}_{1}e\left( {t + s}\right) \mathrm{d}s\mathrm{\;d}\tau + {\int }_{-d}^{0}{e}^{\mathrm{T}}\left( {t + \tau }\right) {\mathcal{Q}}_{1}e\left( {t + \tau }\right) \mathrm{d}\tau , \tag{13}
$$

where ${\Gamma }_{1}\left( {e\left( t\right) }\right) = {\int }_{-d}^{0}Q\left( {-d - \tau }\right) {A}_{1}e\left( {t + \tau }\right) \mathrm{d}\tau$ and the matrices $\mathcal{R},{\mathcal{Q}}_{1}$ satisfy ${\mathcal{R}}^{\mathrm{T}} = \mathcal{R} > 0$ and ${\mathcal{Q}}_{1}^{\mathrm{T}} = {\mathcal{Q}}_{1} > 0$ .
§ V. CONTROLLER DESIGN AND STABILITY ANALYSIS

In this section, we provide a detailed explanation of the controller design process and a systematic analysis of its stability.
§ A. CONTROLLER DESIGN

We propose the following guaranteed cost DP controller for the UMVs in (5):

$$
\begin{aligned}
u\left( t\right) &= {u}_{1}\left( t\right) + {u}_{2}\left( t\right) , \\
{u}_{1}\left( t\right) &= - {K}_{11}e\left( t\right) , \\
{u}_{2}\left( t\right) &= \frac{1}{2}{K}_{21}{B}_{1}^{\mathrm{T}}\left\lbrack {Q\left( 0\right) e\left( t\right) + {\Gamma }_{1}\left( {e\left( t\right) }\right) }\right\rbrack + \frac{1}{2}{K}_{22}e\left( {t - d}\right) ,
\end{aligned} \tag{14}
$$

where ${K}_{11},{K}_{21},{K}_{22}$ are feedback gain matrices: ${K}_{11}$ is already determined in Lemma 1, while ${K}_{21}$ and ${K}_{22}$ will be provided in Theorem 1.
Theorem 1: Consider the UMVs (5) under Assumption 1, with the guaranteed cost DP controller defined by (14). For given positive definite matrices $\mathbb{N} \in {\mathbb{R}}^{m \times m}$, $\mathbb{Y} \mathrel{\text{ := }} \left\lbrack \begin{array}{ll} {\mathbb{Y}}_{11} & {\mathbb{Y}}_{12} \\ {\mathbb{Y}}_{12}^{\mathrm{T}} & {\mathbb{Y}}_{22} \end{array}\right\rbrack \in {\mathbb{R}}^{{2n} \times {2n}}$, $\mathcal{P} \in {\mathbb{R}}^{n \times n}$, and a positive constant ${\gamma }_{0}$, if there exist positive definite matrices $\mathcal{R},{\mathcal{Q}}_{1} \in {\mathbb{R}}^{n \times n}$ and matrices ${K}_{21} \in {\mathbb{R}}^{p \times p}$, ${K}_{22} \in {\mathbb{R}}^{p \times n}$ such that $\mathcal{P} - {\mathcal{Q}}_{1} - {\mathcal{P}}_{1} > 0$ and the following inequality holds,

$$
E \mathrel{\text{ := }} \left\lbrack \begin{matrix} \mathcal{P} + {\mathcal{Q}}_{1} + {\mathcal{P}}_{1} - {E}_{1} & {E}_{2} & {E}_{3} \\ {E}_{2}^{\mathrm{T}} & - {\mathcal{Q}}_{1} + {\mathbb{Y}}_{22} & \frac{1}{2}{K}_{22}^{\mathrm{T}}{B}_{1}^{\mathrm{T}} \\ {E}_{3}^{\mathrm{T}} & \frac{1}{2}{B}_{1}{K}_{22} & {E}_{4} \end{matrix}\right\rbrack < 0, \tag{15}
$$

where

$$
\begin{aligned}
{E}_{1} ={} & \frac{1}{2}Q\left( 0\right) {B}_{1}\left( {{K}_{21} + {K}_{21}^{\mathrm{T}}}\right) {B}_{1}^{\mathrm{T}}Q\left( 0\right) - {\mathbb{Y}}_{11} - {C}_{z}^{\mathrm{T}}{C}_{z} \\
& - {\gamma }_{0}^{-2}Q\left( 0\right) {B}_{2}{B}_{2}^{\mathrm{T}}Q\left( 0\right) - Q\left( 0\right) F\mathbb{N}{F}^{\mathrm{T}}Q\left( 0\right) , \\
{E}_{2} ={} & \frac{1}{2}Q\left( 0\right) {B}_{1}{K}_{22} + {\mathbb{Y}}_{12}, \\
{E}_{3} ={} & Q\left( 0\right) {B}_{1}{K}_{21}{B}_{1}^{\mathrm{T}} + Q\left( 0\right) F\mathbb{N}{F}^{\mathrm{T}} + {\gamma }_{0}^{-2}Q\left( 0\right) {B}_{2}{B}_{2}^{\mathrm{T}}, \\
{E}_{4} ={} & - \frac{\mathcal{R}}{d} + {B}_{1}{K}_{21}{B}_{1}^{\mathrm{T}} + F\mathbb{N}{F}^{\mathrm{T}} + {\gamma }_{0}^{-2}{B}_{2}{B}_{2}^{\mathrm{T}},
\end{aligned}
$$

then the state of the UMVs in system (5) asymptotically converges to zero while maintaining an ${H}_{\infty }$ norm bound of ${\gamma }_{0}$.
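Since $E$ in (15) is symmetric, the feasibility condition $E < 0$ amounts to all of its eigenvalues being negative. A minimal numerical check of that condition, using small placeholder matrices rather than the paper's assembled $E$, might look like:

```python
import numpy as np

def is_negative_definite(E, tol=1e-9):
    """Check E < 0 for a symmetric matrix via its eigenvalues."""
    E = 0.5 * (E + E.T)                 # symmetrize against round-off
    return bool(np.linalg.eigvalsh(E).max() < -tol)

# Illustrative 2x2 examples, not the paper's LMI blocks.
E_ok = np.array([[-2.0, 0.5],
                 [0.5, -1.0]])          # both eigenvalues negative
E_bad = np.array([[1.0, 0.0],
                  [0.0, -1.0]])         # indefinite
print(is_negative_definite(E_ok), is_negative_definite(E_bad))
```

In practice the gains ${K}_{21},{K}_{22}$ would be obtained from an LMI solver; the eigenvalue test above only verifies a candidate solution after the fact.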
Proof 2: The time derivative of $\mathfrak{V}\left( {e\left( t\right) }\right)$ along the trajectory of the UMVs (5) can be calculated as follows:

$$
\begin{aligned}
{\left. \frac{\mathrm{d}\mathfrak{V}\left( {e\left( t\right) }\right) }{\mathrm{d}t}\right| }_{\left( 5\right) } & + {\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) - {\gamma }_{0}^{2}{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right) \\
={} & - {U}_{0}\left( {e\left( t\right) }\right) + {\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) - {\gamma }_{0}^{2}{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right) \\
& + 2{g}^{\mathrm{T}}\left( {e\left( t\right) ,e\left( {t - d}\right) }\right) {F}^{\mathrm{T}}\left\lbrack {Q\left( 0\right) e\left( t\right) + {\Gamma }_{1}\left( {e\left( t\right) }\right) }\right\rbrack \\
& + 2{\left\lbrack Q\left( 0\right) e\left( t\right) + {\Gamma }_{1}\left( e\left( t\right) \right) \right\rbrack }^{\mathrm{T}}{B}_{2}\omega \left( t\right) \\
& + 2{\left\lbrack Q\left( 0\right) e\left( t\right) + {\Gamma }_{1}\left( e\left( t\right) \right) \right\rbrack }^{\mathrm{T}}{B}_{1}u\left( t\right)
\end{aligned} \tag{16}
$$

where

$$
\begin{aligned}
{U}_{0}\left( e\right) ={} & {e}^{\mathrm{T}}\left( t\right) \left( {\mathcal{P} - {\mathcal{Q}}_{1} - {\mathcal{P}}_{1}}\right) e\left( t\right) + {e}^{\mathrm{T}}\left( {t - d}\right) {\mathcal{Q}}_{1}e\left( {t - d}\right) \\
& + {\int }_{-d}^{0}{e}^{\mathrm{T}}\left( {t + \tau }\right) {A}_{1}^{\mathrm{T}}{Q}^{\mathrm{T}}\left( {-d - \tau }\right) \mathcal{R}Q\left( {-d - \tau }\right) {A}_{1}e\left( {t + \tau }\right) \mathrm{d}\tau , \\
{\mathcal{P}}_{1} ={} & {\int }_{-d}^{0}{A}_{1}^{\mathrm{T}}{Q}^{\mathrm{T}}\left( {-d - \tau }\right) \mathcal{R}Q\left( {-d - \tau }\right) {A}_{1}\,\mathrm{d}\tau .
\end{aligned}
$$

Substituting (14) into (16), we have
$$
{\left. \frac{\mathrm{d}\mathfrak{V}\left( {e\left( t\right) }\right) }{\mathrm{d}t}\right| }_{\left( 5\right) } + {\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) - {\gamma }_{0}^{2}{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right) \leq {\Gamma }^{\mathrm{T}}\left( t\right) E\Gamma \left( t\right) \tag{17}
$$

where $\Gamma \left( t\right) = {\left\lbrack {e}^{\mathrm{T}}\left( t\right) \;{e}^{\mathrm{T}}\left( t - d\right) \;{\Gamma }_{1}^{\mathrm{T}}\left( e\left( t\right) \right) \right\rbrack }^{\mathrm{T}}$, and $E$, together with its blocks ${E}_{1}$-${E}_{4}$, is the matrix defined in (15).
Since $E < 0$, it follows that

$$
{\left. \frac{\mathrm{d}\mathfrak{V}\left( {e\left( t\right) }\right) }{\mathrm{d}t}\right| }_{\left( 5\right) } + {\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) - {\gamma }_{0}^{2}{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right) \leq 0. \tag{18}
$$

If Theorem 1 holds, then ${\int }_{{t}_{0}}^{t}{\Gamma }^{\mathrm{T}}\left( \tau \right) E\Gamma \left( \tau \right) \mathrm{d}\tau < 0$ is satisfied, and integrating (18) gives

$$
0 \leq {\epsilon }_{\min }\parallel e\left( t\right) {\parallel }^{2} \leq \mathfrak{V}\left( e\right) \leq \mathfrak{V}\left( {e\left( {t}_{0}\right) }\right) - {\int }_{{t}_{0}}^{t}{\mathcal{Z}}^{\mathrm{T}}\left( \tau \right) \mathcal{Z}\left( \tau \right) \mathrm{d}\tau + {\gamma }_{0}^{2}{\int }_{{t}_{0}}^{t}{\omega }^{\mathrm{T}}\left( \tau \right) \omega \left( \tau \right) \mathrm{d}\tau ,\quad t > {t}_{0}. \tag{19}
$$

Clearly,

$$
\mathop{\lim }\limits_{{t \rightarrow \infty }}{\int }_{{t}_{0}}^{t}{\Gamma }^{\mathrm{T}}\left( \tau \right) E\Gamma \left( \tau \right) \mathrm{d}\tau \leq \mathfrak{V}\left( {e\left( {t}_{0}\right) }\right) . \tag{20}
$$

We obtain

$$
\mathop{\lim }\limits_{{t \rightarrow \infty }}\parallel e\left( t\right) \parallel = 0. \tag{21}
$$

By integrating (18) from 0 to $\infty$, we obtain

$$
{\int }_{0}^{\infty }{\mathcal{Z}}^{\mathrm{T}}\left( t\right) \mathcal{Z}\left( t\right) \mathrm{d}t \leq {\gamma }_{0}^{2}{\int }_{0}^{\infty }{\omega }^{\mathrm{T}}\left( t\right) \omega \left( t\right) \mathrm{d}t + \mathfrak{V}\left( 0\right) . \tag{22}
$$
§ B. GUARANTEED COST ANALYSIS

When the disturbance $\omega \left( t\right)$ is absent, combining (8), (14), and (18) yields

$$
{\left. \frac{\mathrm{d}\mathfrak{V}\left( {e\left( t\right) }\right) }{\mathrm{d}t}\right| }_{\left( 5\right) } + {e}^{\mathrm{T}}\left( t\right) \Omega e\left( t\right) + {u}^{\mathrm{T}}\left( t\right) {\mathbb{R}}_{q}u\left( t\right) \leq {\Gamma }^{\mathrm{T}}\left( t\right) \left( {E + \operatorname{diag}\left( {\Omega ,0,0}\right) + \frac{1}{4}{O}^{\mathrm{T}}{\mathbb{R}}_{q}O}\right) \Gamma \left( t\right) \tag{23}
$$

where

$$
O = \left\lbrack \begin{array}{lll} - \left( {\mathbb{Y} + {K}_{21}}\right) {B}_{1}^{\mathrm{T}}Q\left( 0\right) & {K}_{22} & - \left( {\mathbb{Y} + {K}_{21}}\right) {B}_{1}^{\mathrm{T}} \end{array}\right\rbrack .
$$

We have

$$
\left\lbrack \begin{matrix} E + \operatorname{diag}\left( {\Omega ,0,0}\right) & {O}^{\mathrm{T}} \\ O & - 4{\mathbb{R}}_{q}^{-1} \end{matrix}\right\rbrack < 0.
$$

Hence,

$$
{\int }_{0}^{\infty }\left\lbrack {{e}^{\mathrm{T}}\left( t\right) \Omega e\left( t\right) + {u}^{\mathrm{T}}\left( t\right) {\mathbb{R}}_{q}u\left( t\right) }\right\rbrack \mathrm{d}t \leq {J}^{ * },
$$

where ${J}^{ * } = \mathfrak{V}\left( {e\left( t\right) }\right)$, with $\mathfrak{V}\left( {e\left( t\right) }\right)$ defined in (12).
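The guaranteed cost bound can be sanity-checked numerically by accumulating the quadratic cost along a sampled trajectory. The sketch below uses a synthetic exponentially decaying error and a proportional input, not data from the simulation example, with $\Omega = {\mathbb{R}}_{q} = I$:

```python
import numpy as np

def quadratic_cost(ts, es, us, Omega, Rq):
    """Riemann-sum approximation of int e^T Omega e + u^T Rq u dt
    over uniformly sampled times ts (es, us stacked row-per-sample)."""
    integrand = np.einsum('ti,ij,tj->t', es, Omega, es) \
              + np.einsum('ti,ij,tj->t', us, Rq, us)
    dt = ts[1] - ts[0]
    return float(integrand.sum() * dt)

ts = np.linspace(0.0, 20.0, 2001)
es = np.exp(-ts)[:, None] * np.array([1.0, -0.5])   # synthetic e(t) -> 0
us = -0.5 * es                                       # proportional input
J = quadratic_cost(ts, es, us, np.eye(2), np.eye(2))
print(J)  # the exact integral here is 1.5625 / 2 ≈ 0.781
```

For a real check, `es` and `us` would come from a simulation of (5) under the controller (14), and `J` would be compared against the bound ${J}^{ * }$.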
§ VI. SIMULATION EXAMPLE

The proposed control method's effectiveness is demonstrated on a standard floating production vessel model, as referenced in [23]. The matrices $\xi ,\mathcal{C}$, and $\mathcal{D}$ are specified in [23], and the thruster configuration matrix $\mathcal{G}$ is derived from [24].

The initial condition is given as $\phi \left( s\right) = {\left\lbrack \begin{array}{llllll} 0 & 0 & 0 & 0 & 0 & {0.2} \end{array}\right\rbrack }^{\mathrm{T}}$, with the reference signal set to ${x}_{\text{ref}} = {\left\lbrack \begin{array}{llllll} {0.01} & -{0.01} & {0.05} & {0.01} & {0.04} & {0.01} \end{array}\right\rbrack }^{\mathrm{T}}$. The time delay is $d = 1$, and the ${H}_{\infty }$ performance index is ${\gamma }_{0} = 2$.

The controller gain matrix ${K}_{11}$ is obtained by solving the LMI (11) from Lemma 1, as follows:

$$
{K}_{11} = \left\lbrack \begin{matrix} {3.7401} & -{1.0550} & {1.6703} & {3.8794} & -{0.4071} & {0.6533} \\ {3.5625} & -{0.3782} & {0.8900} & {3.8305} & {0.0888} & {0.2145} \\ -{1.8457} & {7.7381} & -{7.8852} & -{0.4836} & {5.9344} & -{4.2105} \\ -{1.7986} & {7.5585} & -{7.6782} & -{0.4706} & {5.8028} & -{4.0941} \\ -{0.2156} & {1.5274} & -{0.7243} & -{0.0351} & {1.3831} & -{0.1833} \\ -{0.4379} & {2.3744} & -{1.7009} & -{0.0963} & {2.0038} & -{0.7325} \end{matrix}\right\rbrack .
$$

We set the matrix $\mathcal{Q} = I$. The $(i,j)$-th element of the matrix $Q\left( \theta \right)$, denoted ${Q}_{ij}\left( \theta \right)$, is determined using the method proposed in [22]. Figures 1-2 show the values of ${Q}_{ij}\left( \theta \right)$ for $\theta \in \left\lbrack {0,1}\right\rbrack$.

Finally, by solving LMI (15) as described in Theorem 1, the controller gain matrices ${K}_{21}$ and ${K}_{22}$ are computed as:
$$
{K}_{21} = 1 \times {10}^{4}\left\lbrack \begin{matrix} {0.0284} & {0.0561} & {0.0446} & {0.0381} & -{0.0108} & -{0.0257} \\ -{0.0249} & -{0.0535} & -{0.0615} & -{0.0140} & {0.0119} & {0.0273} \\ -{0.0160} & {0.0215} & {0.0366} & {0.0723} & -{0.0709} & -{0.0315} \\ {0.0187} & -{0.0010} & -{0.0542} & -{0.0511} & {0.0035} & {0.1249} \\ -{0.2113} & {0.2496} & -{0.1101} & -{0.0808} & -{0.9459} & {1.2040} \\ -{0.0871} & {0.0356} & -{0.0328} & {0.1207} & {0.5283} & -{0.6940} \end{matrix}\right\rbrack ,
$$

$$
{K}_{22} = \left\lbrack \begin{matrix} -{15.4416} & {8.9036} & {66.6063} & {35.6011} & {15.1773} & -{22.0347} \\ -{18.9989} & -{43.3441} & -{101.0469} & -{70.0417} & -{49.6179} & -{12.4059} \\ {22.5784} & {53.9648} & {21.5477} & -{26.9017} & {10.6883} & {43.2947} \\ -{82.2859} & -{141.7415} & {16.5537} & -{69.8399} & -{34.0587} & -{92.0165} \\ -{118.7051} & -{303.3277} & {414.2256} & -{396.6715} & {76.7366} & -{41.8016} \\ {118.3731} & {331.0541} & -{512.3389} & {433.3628} & -{113.3949} & {30.4866} \end{matrix}\right\rbrack .
$$

Figure 1. Lyapunov matrix ${Q}_{ij}\left( \theta \right)$, $\left( i = 1,2,3;\; j = 1,\ldots ,6\right)$.

Figure 2. Lyapunov matrix ${Q}_{ij}\left( \theta \right)$, $\left( i = 4,5,6;\; j = 1,\ldots ,6\right)$.
Figures 3-4 illustrate the trajectories of the position error, yaw angle error, and velocity error for the UMVs (5). Figure 5 shows the control inputs produced by the controller defined in (14).

Figure 3. Response curves of UMVs position and yaw angle error.

Figure 4. Response curves of UMVs velocity error.

Figure 5. The comparison of response curves for $u\left( t\right)$.

In Figure 3, it is clear that the error curves under the proposed control initially exhibit small fluctuations before gradually converging to zero, which demonstrates the effectiveness of the proposed control strategy. Figure 5 illustrates the response curves of the guaranteed cost DP controller $u\left( t\right)$.

§ CONCLUSION

In this paper, we have addressed the guaranteed cost dynamic positioning control problem for UMVs with time delays. First, we proposed a complete-type LKF for UMVs with time delays, which leads to less conservatism. Furthermore, a novel approach for designing a guaranteed cost dynamic positioning controller for DP systems was proposed; the specific form of this controller is derived from feasible solutions of LMIs. The proposed method was validated through simulation, demonstrating its effectiveness. Future work will focus on extending the control strategy to systems with time-varying delays, further enhancing the robustness of DP control for UMVs.
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/ImUUzCj4k8/Initial_manuscript_md/Initial_manuscript.md
# UVMS Trajectory Tracking Based on RBFNN and Sliding Mode Control
Huiyi Luo
Fuzhou Institute of Oceanography, Fuzhou University, Fuzhou 350108, China; College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
18278811826@163.com

Weilin Luo
Fuzhou Institute of Oceanography, Fuzhou University, Fuzhou 350108, China; College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
wlluo@fzu.edu.cn

Yuanjing Wang
College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
xyjw325@163.com
*Abstract*: This article addresses the UVMS trajectory tracking control problem under electric drive. First, based on the Radial Basis Function Neural Network (RBFNN) and Nonsingular Fast Terminal Sliding Mode (NFTSM) methods, a tracking strategy for the UVMS is designed. Further, to avoid the singularity problem, a saturation-based tracking controller is obtained by means of the methods mentioned above. Lyapunov design is adopted to guarantee the asymptotic stability of the proposed controller. Simulation results show that the tracking performance of NN-NFTSM is better than that of the PD and NN approaches, which verifies the validity and advantages of the proposed controller.

Keywords: UVMS, electric drive, trajectory tracking, fast nonsingular terminal sliding mode, RBF neural network
## I. INTRODUCTION

Underwater Vehicle-Manipulator Systems (UVMS), which can operate an underwater manipulator to complete underwater tasks instead of human beings, are currently an effective means of developing ocean energy. Usually, a UVMS consists of an n-link manipulator connected to an underwater vehicle such as a ROV (Remotely Operated Vehicle) or AUV (Autonomous Underwater Vehicle). As a vital class of underwater vehicle, UVMS are significant for underwater operations such as real-time underwater shooting, underwater target reconnaissance and surveillance, marine resource exploitation, marine bioprospecting, etc. UVMS play a supporting role in various marine underwater missions and have become the research focus of many scholars.
How to handle the uncertainties of the underwater environment, such as currents and oceanic internal waves, is the biggest challenge in designing a controller with ideal performance for a UVMS. For this reason, the effectiveness and robustness of the controller are crucial. Xu et al. adopted fuzzy-based control techniques to study a 6-DOF AUV with a 3-DOF on-board manipulator [1]. Wei et al. applied a nonlinear disturbance observer to a UVMS to evaluate the external unpredictable disturbance in real time, with an adaptive sliding mode approach used for compensation [2]. Mobayen et al. adopted a continuous nonsingular fast terminal sliding mode control with time-delay estimation, which ensures satisfactory tracking performance and sufficient robustness for a UVMS [3]. Wang et al. combined sliding mode control and adaptive fuzzy control into a multi-strategy fusion control that addressed the motion control issue of UVMS [4]. Luo et al. applied neural networks to the tracking of a 3-link UVMS; the robustness of the controller was verified by comparison with a PD control method [5]. Mofid et al. applied a fuzzy terminal sliding mode control approach with time-delay estimation, which uses fuzzy rules to adaptively fit the terminal sliding mode surface and eliminate the unpredictable internal and external disturbances acting on the manipulator [6]. Woolfrey et al. applied a model predictive control scheme to study the kinematics of a UVMS affected by fluctuations, and the results show that the approach has excellent predictive performance [7]. Han and Chung proposed an approach that uses restoring moments to explore the motion control of a UVMS under external disturbance [8].
This article proposes a fast nonsingular terminal sliding mode cascade controller combined with an RBF neural network method for the manipulator control problem of UVMS. The interaction between the vehicle and the manipulator acts on the UVMS as the main source of external disturbance. The Lyapunov approach is applied to verify the stability of the cascade controller, and the effectiveness and robustness of the designed controller are confirmed by numerical simulation.

## II. Problem formulation
When the UVMS moves to the working area, it is sometimes necessary for the underwater vehicle body to maintain a stable hover while the manipulator works according to the task requirements. In this case, the body-fixed reference frame attached to the underwater vehicle body can be viewed as the inertial reference frame constructed with the earth, and the motion of the entire UVMS can be regarded as the motion control of an underwater robotic manipulator subject to disturbance.

Since the influence of the underwater vehicle on the manipulator is difficult to express with a mathematical model, it can be regarded as a disturbance acting on the manipulator. The nonlinear dynamics of the underwater robotic manipulator is written as
$$
M\left( q\right) \ddot{q} + C\left( {q,\dot{q}}\right) \dot{q} + D\left( {q,\dot{q}}\right) \dot{q} + G\left( q\right) + \Delta = {\tau }_{ms} \tag{1}
$$

where $\Delta$ denotes the uncertainty induced by the interaction of the underwater vehicle and the manipulator, $M$ denotes the inertia matrix, $C$ denotes the Coriolis-centripetal matrix, $D$ denotes the water resistance coefficient matrix, $G$ denotes the equivalent gravity vector, and ${\tau }_{ms}$ denotes the control input.
---

Corresponding Author: W. Luo

This work was supported by the Natural Science Foundation of Fujian Province, China through Grant 2023J011572, and Fuzhou Institute of Oceanography through Grants 2021F11 & 2022F13.

---

Fig. 1 displays an underwater robotic manipulator combined with an underwater vehicle to form a three-link UVMS. Fig. 1 shows the starting position of the manipulator, in which the joint at the hinge of each connecting rod is driven by a motor, so as to achieve the operational requirements of the three-degree-of-freedom underwater robotic manipulator.
Fig. 1 Three-link manipulator UVMS

Since each joint of the underwater robotic manipulator is driven by a DC motor, the motor driving force can be described as
$$
{\tau }_{me} = {K}_{me}I \tag{2}
$$

where $I$ denotes the electrical current and ${K}_{me}$ denotes the coefficient matrix mapping electrical current to torque.
The dynamics of the electrical circuit can be described as

$$
{\tau }_{e} = {L}_{e}\dot{I} + {R}_{e}I + {K}_{e}\dot{q} \tag{3}
$$

where ${\tau }_{e},{L}_{e},{R}_{e}$ denote the motor coil's voltage, inductance, and resistance matrices, respectively, and ${K}_{e}$ denotes the back-EMF constant matrix.

Then, a cascaded system containing the mechanical and electrical subsystems is formed by Equations (1) and (3).
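To see how the electrical subsystem (3) behaves on its own, it can be integrated for a single joint with forward Euler. All parameter values below are illustrative placeholders, not identified motor parameters from the paper:

```python
# Minimal forward-Euler sketch of the single-joint electrical dynamics (3):
# L_e * dI/dt = tau_e - R_e * I - K_e * qdot. Illustrative parameters only.
def simulate_current(tau_e, qdot, L_e=0.05, R_e=1.0, K_e=0.1,
                     I0=0.0, dt=1e-4, T=1.0):
    n = int(T / dt)
    I = I0
    for _ in range(n):
        I += dt * (tau_e - R_e * I - K_e * qdot) / L_e
    return I

# With constant inputs the current settles at (tau_e - K_e * qdot) / R_e.
I_final = simulate_current(tau_e=2.0, qdot=1.0)
print(I_final)  # close to (2.0 - 0.1) / 1.0 = 1.9
```

The electrical time constant here is $L_e / R_e = 0.05$ s, so the horizon $T = 1$ s is long enough for the current to reach its steady state.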
## III. CONTROLLER DESIGN

### A. NN Based Controller

According to Equation (2), an ideal trajectory design is carried out for the desired joint angle of the underwater robotic manipulator. Defining the desired joint angle as ${q}_{d}$ and considering Equations (1) and (3), the desired input signal of the electrical current can be described as
$$
{I}_{d} = {K}_{me}^{-1}\left( {M{\ddot{q}}_{d} + C\dot{q} + D\dot{q} + G + \Delta + {\tau }_{1}}\right) \tag{4}
$$

where ${\tau }_{1}$ denotes the auxiliary controller for the dynamics of the underwater robotic manipulator. Similarly, the auxiliary controller of the electrical system can be designed as
$$
{\tau }_{e} = {R}_{e}{I}_{d} + {K}_{e}{\dot{q}}_{d} + {\tau }_{2} \tag{5}
$$

where ${\tau }_{2}$ represents the auxiliary controller for the electrical system.

Further, define the joint tracking error as
$$
e = {q}_{d} - q \tag{6}
$$

To guarantee the convergence quality, design the fast terminal sliding surface as
$$
s = \dot{e} + {\alpha }_{1}{\operatorname{sign}}^{{\gamma }_{1}}\left( e\right) + {\alpha }_{2}{\operatorname{sign}}^{{\gamma }_{2}}\left( e\right) \tag{7}
$$

where ${\operatorname{sign}}^{\Delta }\left( \cdot \right) = {\left| \cdot \right| }^{\Delta }\operatorname{sign}\left( \cdot \right)$, ${\gamma }_{1} \geq 1$, $0 \leq {\gamma }_{2} \leq 1$, and ${\alpha }_{1}$, ${\alpha }_{2}$ are positive gain matrices.
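The surface (7) can be sketched in a few lines, with `sig(x, g)` implementing the shorthand ${\operatorname{sign}}^{\gamma }\left( \cdot \right) = {\left| \cdot \right| }^{\gamma }\operatorname{sign}\left( \cdot \right)$; the gains below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def sig(x, g):
    """Elementwise sign^g(x) = |x|^g * sign(x)."""
    return np.abs(x) ** g * np.sign(x)

def ftsm_surface(e, e_dot, alpha1=2.0, alpha2=1.0, gamma1=1.5, gamma2=0.6):
    """Fast terminal sliding surface (7), with scalar illustrative gains."""
    return e_dot + alpha1 * sig(e, gamma1) + alpha2 * sig(e, gamma2)

# Two joints: a small positive error with positive rate, and a
# small negative error with zero rate.
s = ftsm_surface(np.array([0.04, -0.09]), np.array([0.1, 0.0]))
print(s)
```

Note that for small $|e|$ the ${\gamma }_{2}$ term dominates, which is what gives the surface its fast convergence near the origin.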
The derivative of the fast terminal sliding surface is

$$
\dot{s} = \ddot{e} + \left( {{\alpha }_{1}{\gamma }_{1}{\left| e\right| }^{{\gamma }_{1} - 1} + {\alpha }_{2}{\gamma }_{2}{\left| e\right| }^{{\gamma }_{2} - 1}}\right) \dot{e} \tag{8}
$$

To facilitate calculation, auxiliary variables are introduced as
$$
\left\{ \begin{array}{l} \vartheta = {\alpha }_{1}{\operatorname{sign}}^{{\gamma }_{1}}\left( e\right) + {\alpha }_{2}{\operatorname{sign}}^{{\gamma }_{2}}\left( e\right) \\ \mu = {\alpha }_{1}{\gamma }_{1}{\left| e\right| }^{{\gamma }_{1} - 1} + {\alpha }_{2}{\gamma }_{2}{\left| e\right| }^{{\gamma }_{2} - 1} \end{array}\right. \tag{9}
$$

Substituting Equation (9) into Equations (7) and (8) yields
$$
\left\{ \begin{array}{l} s = \dot{e} + \vartheta \\ \dot{s} = \ddot{e} + \mu \dot{e} \end{array}\right. \tag{10}
$$

Defining the electrical current error as $\eta = {I}_{d} - I$, one has
$$
\begin{aligned}
M\left( q\right) \dot{s} &= M\mu \dot{e} + M\ddot{e} \\
&= M\mu \dot{e} + M\left( {{\ddot{q}}_{d} - \ddot{q}}\right) \\
&= M\mu \dot{e} + {K}_{me}\eta - C\dot{e} + \Delta - {\tau }_{1}
\end{aligned} \tag{11}
$$

and
$$
L\dot{\eta } = L{\dot{I}}_{d} - L\dot{I} = - R\eta - K\left( {s - \vartheta }\right) - {\tau }_{2} + L{\dot{I}}_{d}. \tag{12}
$$

To drive the errors in Equations (11) and (12) to zero, the Lyapunov design theorem is utilized, and a positive definite Lyapunov function can be written as
$$
{V}_{1} = \frac{1}{2}\left( {{e}^{\mathrm{T}}e + {s}^{\mathrm{T}}Ms + {\eta }^{\mathrm{T}}L\eta }\right) \tag{13}
$$

The time derivative of ${V}_{1}$ satisfies
$$
\begin{aligned}
{\dot{V}}_{1} ={} & {s}^{\mathrm{T}}\left( {e + M\mu \dot{e} + C\vartheta + \Delta - {\tau }_{1}}\right) - {e}^{\mathrm{T}}\vartheta \\
& + {\eta }^{\mathrm{T}}\left\lbrack {-{R}_{e}\eta + {K}_{me}s + K\left( {s - \vartheta }\right) + {L}_{e}{\dot{I}}_{d} - {\tau }_{2}}\right\rbrack
\end{aligned} \tag{14}
$$

Equation (14) contains nonlinear terms, which affect the results of the trajectory tracking control of the underwater robotic manipulator. For this reason, an RBF neural network is adopted to estimate the nonlinear terms. In detail, let
|
| 168 |
+
|
| 169 |
+
$$
|
| 170 |
+
\left\{ \begin{array}{l} {f}_{1} = e + {\mu M}\dot{e} + {C\vartheta } + \Delta = {W}_{1}^{\mathrm{T}}{h}_{1}\left( x\right) + {\varepsilon }_{1} \\ {f}_{2} = {K}_{me}s - {R}_{e}\eta + {K}_{e}\left( {s - \vartheta }\right) + {L}_{e}{\dot{I}}_{d} = {W}_{2}^{\mathrm{T}}{h}_{2}\left( x\right) + {\varepsilon }_{2} \end{array}\right. \tag{15}
|
| 171 |
+
$$
|
| 172 |
+
|
| 173 |
+
where ${W}_{i},{h}_{i},{\varepsilon }_{i}$ denote weights, inputs and regression errors, respectively.
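The RBF approximation in Equation (15) can be sketched numerically. Below is a minimal illustration, not the authors' implementation: the centers, widths, and dimensions are hypothetical, and only the structure $W^{\mathrm{T}}h(x)$ with Gaussian basis functions follows the text.

```python
import numpy as np

def rbf_hidden(x, centers, width):
    """Gaussian hidden-layer outputs h_j(x) = exp(-||x - c_j||^2 / (2*width^2))."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Hypothetical sizes: 4-dimensional input, 7 hidden nodes, 3 joints.
rng = np.random.default_rng(0)
centers = rng.uniform(-1, 1, size=(7, 4))   # basis centers c_j (assumed)
W = rng.standard_normal((7, 3))             # weight matrix W (7 x 3)

x = np.array([0.1, -0.2, 0.05, 0.3])        # network input (e.g. tracking states)
h = rbf_hidden(x, centers, width=0.5)       # h(x), shape (7,)
f_hat = W.T @ h                             # W^T h(x) approximates the nonlinear term
```

The regression error $\varepsilon_i$ in Equation (15) is whatever part of the true nonlinearity this finite basis cannot represent.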
|
| 174 |
+
|
| 175 |
+
The controllers ${\tau }_{1}$ and ${\tau }_{2}$ can be given as
|
| 176 |
+
|
| 177 |
+
$$
|
| 178 |
+
\left\{ \begin{array}{l} {\tau }_{1} = {W}_{1e}^{\mathrm{T}}{h}_{1}\left( x\right) + {\alpha }_{1}{Ms} \\ {\tau }_{2} = {W}_{2e}^{\mathrm{T}}{h}_{2}\left( x\right) + {\alpha }_{2}{L\eta } \end{array}\right. \tag{16}
|
| 179 |
+
$$
|
| 180 |
+
|
| 181 |
+
where ${W}_{ie}$ denote updated weight matrices.
|
| 182 |
+
|
| 183 |
+
To achieve good robustness of the neural network controller, the weight update law is designed as
|
| 184 |
+
|
| 185 |
+
$$
|
| 186 |
+
\left\{ \begin{array}{l} {\dot{W}}_{1e} = {k}_{1}{h}_{1}\left( {X}_{1}\right) {s}^{\mathrm{T}} - {k}_{2}{W}_{1e} \\ {\dot{W}}_{2e} = {k}_{1}{h}_{2}\left( {X}_{2}\right) {\eta }^{\mathrm{T}} - {k}_{2}{W}_{2e} \end{array}\right. \tag{17}
|
| 187 |
+
$$
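The update law (17) is a matrix differential equation; in a discrete-time simulation it would typically be integrated with a simple Euler step. A minimal sketch follows, where the step size, the gain values, and all dimensions are hypothetical assumptions:

```python
import numpy as np

def update_weights(W1e, W2e, h1, h2, s, eta, k1=50.0, k2=0.8, dt=1e-3):
    """One Euler step of Eq. (17):
    dW1e/dt = k1 * h1 * s^T - k2 * W1e,  dW2e/dt = k1 * h2 * eta^T - k2 * W2e."""
    dW1e = k1 * np.outer(h1, s) - k2 * W1e
    dW2e = k1 * np.outer(h2, eta) - k2 * W2e
    return W1e + dt * dW1e, W2e + dt * dW2e

# Hypothetical dimensions: 7 hidden nodes, 3 joints.
W1e = np.zeros((7, 3)); W2e = np.zeros((7, 3))
h1 = np.ones(7); h2 = np.ones(7)
s = np.array([0.1, -0.2, 0.3]); eta = np.array([0.05, 0.0, -0.1])
W1e, W2e = update_weights(W1e, W2e, h1, h2, s, eta)
```

The $-k_2 W_{ie}$ leakage term keeps the weights bounded even when the errors $s$ and $\eta$ do not vanish.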
|
| 188 |
+
|
| 189 |
+
As pointed out in [9], in a conventional sliding mode approach the term ${\alpha }_{2}{\gamma }_{2}{\left| e\right| }^{{\gamma }_{2} - 1}\dot{e}$ in Equation (8) carries a negative exponent (${\gamma }_{2} - 1 < 0$), so it becomes singular as $e \rightarrow 0$. To deal with this singularity, one might use the following saturation
|
| 190 |
+
|
| 191 |
+
$$
|
| 192 |
+
\operatorname{sat}\left( {v}_{z}\right) = \left\{ \begin{matrix} {v}_{z} & \left| {v}_{z}\right| \leq \bar{w} \\ \bar{w}\operatorname{sign}\left( {v}_{z}\right) & \left| {v}_{z}\right| \geq \bar{w} \end{matrix}\right. \tag{18}
|
| 193 |
+
$$
|
| 194 |
+
|
| 195 |
+
where ${v}_{z} = {\alpha }_{2}{\gamma }_{2}{\left| e\right| }^{{\gamma }_{2} - 1}\dot{e},\bar{w}$ is a positive number.
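The saturation (18) simply clips $v_z$ elementwise at $\pm\bar{w}$. A minimal sketch, using the value $\bar{w}=0.5$ from TABLE I:

```python
import numpy as np

def sat(v, w_bar=0.5):
    """Eq. (18): pass v through unchanged inside [-w_bar, w_bar], clip outside."""
    return np.clip(v, -w_bar, w_bar)

v = np.array([0.2, -0.9, 3.0])  # hypothetical v_z values per joint
v_sat = sat(v)                  # -> array([ 0.2, -0.5,  0.5])
```

Inside the band the surface behaves exactly as the FTSM; only near $e = 0$, where the unbounded term would blow up, does the clipped value take over.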
|
| 196 |
+
|
| 197 |
+
Substituting Equation (18) into Equation (7), and replacing the fast terminal sliding mode (FTSM) surface with the nonsingular fast terminal sliding mode (NFTSM) surface, yields
|
| 198 |
+
|
| 199 |
+
$$
|
| 200 |
+
{\dot{s}}_{2} = \ddot{e} + {\alpha }_{1}{\gamma }_{1}\dot{e}{\left| e\right| }^{{\gamma }_{1} - 1} + {v}_{z} \tag{19}
|
| 201 |
+
$$
|
| 202 |
+
|
| 203 |
+
Similarly, we can get
|
| 204 |
+
|
| 205 |
+
$$
|
| 206 |
+
M\left( q\right) {\dot{s}}_{2} = M{\alpha }_{1}{\gamma }_{1}\dot{e}{\left| e\right| }^{{\gamma }_{1} - 1} + {v}_{z} + M\ddot{e}
|
| 207 |
+
$$
|
| 208 |
+
|
| 209 |
+
$$
|
| 210 |
+
= M{\alpha }_{1}{\gamma }_{1}\dot{e}{\left| e\right| }^{{\gamma }_{1} - 1} + {v}_{z} + M\left( {{\ddot{q}}_{d} - \ddot{q}}\right) \tag{20}
|
| 211 |
+
$$
|
| 212 |
+
|
| 213 |
+
$$
|
| 214 |
+
= M{\alpha }_{1}{\gamma }_{1}\dot{e}{\left| e\right| }^{{\gamma }_{1} - 1} + {v}_{z} + {K}_{me}\eta - C\dot{e} + \Delta - {\tau }_{1}
|
| 215 |
+
$$
|
| 216 |
+
|
| 217 |
+
To guarantee the stability, Lyapunov function is defined as
|
| 218 |
+
|
| 219 |
+
$$
|
| 220 |
+
{V}_{2} = \frac{1}{2}\left( {{e}^{T}e + {s}_{2}^{T}M{s}_{2} + {\eta }^{T}{L\eta }}\right) \tag{21}
|
| 221 |
+
$$
|
| 222 |
+
|
| 223 |
+
Its derivative is
|
| 224 |
+
|
| 225 |
+
$$
|
| 226 |
+
{\dot{V}}_{2} = {s}_{2}^{T}\left( {e + M{\alpha }_{1}{\gamma }_{1}{\left| e\right| }^{{\gamma }_{1} - 1}\dot{e} + {v}_{z} + {C\vartheta } + \Delta - {\tau }_{1}}\right) - {e}^{T}\vartheta \tag{22}
|
| 227 |
+
$$
|
| 228 |
+
|
| 229 |
+
$$
|
| 230 |
+
+ {\eta }^{T}\left\lbrack {-{R}_{e}\eta + {K}_{me}s + K\left( {s - \vartheta }\right) + {L}_{e}{\dot{I}}_{d} - {\tau }_{2}}\right\rbrack
|
| 231 |
+
$$
|
| 232 |
+
|
| 233 |
+
Combined with (15), the nonlinear term in the above expression can be cast as
|
| 234 |
+
|
| 235 |
+
$$
|
| 236 |
+
{f}_{3} = e + M{\alpha }_{1}{\gamma }_{1}{\left| e\right| }^{{\gamma }_{1} - 1}\dot{e} + {C\vartheta } + \Delta = {W}_{1N}^{\mathrm{T}}{h}_{1}\left( x\right) + {\varepsilon }_{1} \tag{23}
|
| 237 |
+
$$
|
| 238 |
+
|
| 239 |
+
The auxiliary controllers ${\bar{\tau }}_{1}$ and ${\tau }_{2}$ can be described as
|
| 240 |
+
|
| 241 |
+
$$
|
| 242 |
+
\left\{ \begin{array}{l} {\bar{\tau }}_{1} = {W}_{1Ne}^{T}{h}_{1}\left( x\right) + {\alpha }_{1}M{s}_{2} - M{v}_{z} \\ {\tau }_{2} = {W}_{2e}^{\mathrm{T}}{h}_{2}\left( x\right) + {\alpha }_{2}{L\eta } \end{array}\right. \tag{24}
|
| 243 |
+
$$
|
| 244 |
+
|
| 245 |
+
B. Stability analysis
|
| 246 |
+
|
| 247 |
+
A Lyapunov function is designed as
|
| 248 |
+
|
| 249 |
+
$$
|
| 250 |
+
{V}_{3} = {V}_{2} + \frac{1}{2{k}_{1}}\mathop{\sum }\limits_{{i = 1}}^{2}{\begin{Vmatrix}{\widetilde{W}}_{i}\end{Vmatrix}}_{F}^{2} \tag{25}
|
| 251 |
+
$$
|
| 252 |
+
|
| 253 |
+
where ${\widetilde{W}}_{i} = {W}_{i} - {W}_{ie}$ represents weight error.
|
| 254 |
+
|
| 255 |
+
Its derivative is
|
| 256 |
+
|
| 257 |
+
$$
|
| 258 |
+
{\dot{V}}_{3} \leq - 2{\alpha }_{0}{V}_{3} + {s}^{T}{\varepsilon }_{1} + {\eta }^{T}{\varepsilon }_{2} - a\left( {{\alpha }_{1}{s}^{T}{Ms} + {\alpha }_{2}{\eta }^{T}{L\eta }}\right)
|
| 259 |
+
$$
|
| 260 |
+
|
| 261 |
+
$$
|
| 262 |
+
+ {k}_{2}\left( {\mathop{\sum }\limits_{{i = 1}}^{2}{\left\langle {\widetilde{W}}_{i},{W}_{i}\right\rangle }_{F} - a\mathop{\sum }\limits_{{i = 1}}^{2}{\begin{Vmatrix}{\widetilde{W}}_{i}\end{Vmatrix}}_{F}^{2}}\right) \tag{26}
|
| 263 |
+
$$
|
| 264 |
+
|
| 265 |
+
in which $0 \leq a \leq 1,{\alpha }_{0} = \min \left\{ {\left( {1 - a}\right) {\alpha }_{1},\left( {1 - a}\right) {\alpha }_{2},\left( {1 - a}\right) {k}_{2}}\right\}$ .
|
| 266 |
+
|
| 267 |
+
In accordance with [10], it holds that
|
| 268 |
+
|
| 269 |
+
$$
|
| 270 |
+
{\dot{V}}_{2} \leq - 2{\alpha }_{0}{V}_{2} + \lambda ,\left( {\lambda > 0}\right) \tag{27}
|
| 271 |
+
$$
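Solving the differential inequality (27) by the comparison lemma (a standard step, assuming constant $\lambda > 0$) makes the convergence to a small neighborhood of zero explicit:

$$
{V}_{2}\left( t\right) \leq {V}_{2}\left( 0\right) {e}^{-2{\alpha }_{0}t} + \frac{\lambda }{2{\alpha }_{0}}\left( {1 - {e}^{-2{\alpha }_{0}t}}\right)
$$

so ${V}_{2}$ decays exponentially into the residual ball ${V}_{2} \leq \lambda /2{\alpha }_{0}$.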
|
| 272 |
+
|
| 273 |
+
Neglecting the small residual term $\lambda$, Equation (27) can further be relaxed as
|
| 274 |
+
|
| 275 |
+
$$
|
| 276 |
+
{\dot{V}}_{2} \leq - 2{\alpha }_{0}{V}_{2} \leq 0 \tag{28}
|
| 277 |
+
$$
|
| 278 |
+
|
| 279 |
+
From Equations (27) and (28), it can be concluded that the tracking system is stable. Thus, the effectiveness of the controller for the UVMS underwater robotic manipulator is verified.
|
| 280 |
+
|
| 281 |
+
## IV. SIMULATION
|
| 282 |
+
|
| 283 |
+
To verify the validity and advantages of the designed tracking controller, i.e., the neural network based nonsingular fast terminal sliding mode (NN-NFTSM) controller, comparisons are conducted with traditional PD control and neural network control approaches. TABLE I displays the parameters of the robotic manipulator and controller.
|
| 284 |
+
|
| 285 |
+
TABLE I. PARAMETERS OF THE UVMS
|
| 286 |
+
|
| 287 |
+
<table><tr><td>Items</td><td>Rod1</td><td>Rod2</td><td>Rod3</td></tr><tr><td>Length(m)</td><td>1</td><td>1</td><td>1</td></tr><tr><td>Mass(kg)</td><td>1</td><td>1</td><td>2</td></tr><tr><td>${L}_{e}$</td><td>0.1</td><td>0.1</td><td>0.1</td></tr><tr><td>${R}_{e}$</td><td>1</td><td>1</td><td>1</td></tr><tr><td>${K}_{e}$</td><td>0.5</td><td>0.5</td><td>0.5</td></tr><tr><td>${K}_{me}$</td><td>1</td><td>1</td><td>1</td></tr><tr><td>$\bar{w}$</td><td>0.5</td><td>${\alpha }_{1},{\alpha }_{2}$</td><td>200</td></tr><tr><td>${k}_{p},{k}_{d}$</td><td>300</td><td>${k}_{1},{k}_{2}$</td><td>50,0.8</td></tr></table>
|
| 288 |
+
|
| 289 |
+
Since the underwater robotic manipulator is mounted on the underwater vehicle to form the UVMS, the first joint of the manipulator directly interacts with the vehicle. In the simulation, this interaction is assumed to be a transient disturbance signal: a force of ${200}\mathrm{\;N}$ is applied to the vehicle at $t = {1.7}\mathrm{\;s}$.
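Such a transient disturbance is easy to reproduce in a discrete-time simulation loop. A minimal sketch, where the integration step, horizon, and pulse width are hypothetical assumptions; only the 200 N magnitude and the 1.7 s onset follow the text:

```python
import numpy as np

dt = 1e-3                      # assumed integration step (s)
t = np.arange(0.0, 3.0, dt)    # assumed 3 s simulation horizon

def disturbance_force(ti, onset=1.7, width=0.05, magnitude=200.0):
    """Transient pulse: 200 N applied to the vehicle starting at t = 1.7 s.
    The 0.05 s pulse width is an assumed value for illustration."""
    return magnitude if onset <= ti < onset + width else 0.0

F = np.array([disturbance_force(ti) for ti in t])  # force profile over time
```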
|
| 290 |
+
|
| 291 |
+
Fig. 2 displays the spatial tracking performance of the UVMS end effector. It can be seen that the proposed NN-NFTSM controller clearly outperforms the traditional PD and neural network control methods.
|
| 292 |
+
|
| 293 |
+

|
| 294 |
+
|
| 295 |
+
Fig. 2 Spatial tracking effect of UVMS end effector
|
| 296 |
+
|
| 297 |
+
Fig. 3 shows the results of joint angle tracking control. Both the neural network based nonsingular fast terminal sliding mode controller and the neural network based sliding mode controller achieve higher tracking stability than PD control.
|
| 298 |
+
|
| 299 |
+

|
| 300 |
+
|
| 301 |
+
Fig. 3 Results of joint angle tracking control
|
| 302 |
+
|
| 303 |
+
Fig. 4 and Fig. 5 display the tracking performance of the UVMS end effector in the $x, y, z$ directions. With neural network control, tracking in all three directions reaches stability, and the proposed nonsingular fast terminal sliding mode control method combined with the RBF neural network tracks the desired trajectory more quickly and stably.
|
| 304 |
+
|
| 305 |
+

|
| 306 |
+
|
| 307 |
+
Fig. 4 Tracking effect of UVMS end effector in x, y, z directions.
|
| 308 |
+
|
| 309 |
+

|
| 310 |
+
|
| 311 |
+
Fig. 5 UVMS end effector tracking error
|
| 312 |
+
|
| 313 |
+
Fig. 6 - Fig. 8 show the comparison of MAE and RMSE under the three control schemes. NN-NFTSM achieves higher accuracy than the RBF neural network (NN) and PD controllers.
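MAE and RMSE are standard error statistics; a minimal sketch of how they would be computed from a tracking-error time series (the sample data is hypothetical):

```python
import numpy as np

def mae(err):
    """Mean absolute error of a tracking-error sequence."""
    return np.mean(np.abs(err))

def rmse(err):
    """Root mean square error of a tracking-error sequence."""
    return np.sqrt(np.mean(err ** 2))

err = np.array([0.1, -0.2, 0.05, 0.0])  # hypothetical x-direction errors (m)
m = mae(err)    # 0.0875
r = rmse(err)   # ~0.1146
```

RMSE penalizes occasional large excursions more heavily than MAE, which is why both are reported.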
|
| 314 |
+
|
| 315 |
+

|
| 316 |
+
|
| 317 |
+
Fig. 8 Error in z direction
|
| 318 |
+
|
| 319 |
+
## V. CONCLUSION
|
| 320 |
+
|
| 321 |
+
In this article, an RBFNN based nonsingular fast terminal sliding mode controller is designed for UVMS. The nonlinear terms of the UVMS system are approximated by an RBF neural network, and the singularity of the fast terminal sliding surface is avoided by a saturation function. Lyapunov design is used to establish the stability and feasibility of the proposed controller, and it is proved that the tracking errors converge to a small neighborhood of zero within finite time. Finally, simulation results confirm that the proposed controller performs excellently on the UVMS system.
|
| 322 |
+
|
| 323 |
+
## ACKNOWLEDGMENT
|
| 324 |
+
|
| 325 |
+
This work was partly supported by the Natural Science Foundation of Fujian Province, China (Grant 2023J011572), and partly by the Fuzhou Institute of Oceanography (Grants 2021F11 and 2022F13).
|
| 326 |
+
|
| 327 |
+
## REFERENCES
|
| 328 |
+
|
| 329 |
+
[1] B. Xu, S. R. Pandian, N. Sakagami, and F. Petry, "Neuro-fuzzy control of underwater vehicle-manipulator systems," Journal of the Franklin Institute, vol. 349, no. 3, pp. 1125-1138, 2012.
|
| 330 |
+
|
| 331 |
+
[2] W. Chen, M. Wei, Y. Zhang, D. Lu, and S. Hu, "Research on adaptive sliding mode control of UVMS based on nonlinear disturbance observation," Mathematical Problems in Engineering, 2022, Vol.
|
| 332 |
+
|
| 333 |
+
[3] S. Mobayen, O. Mofid, S. U. Din, and A. Bartoszewicz, "Finite time tracking controller design of perturbed robotic manipulator based on adaptive second-order sliding mode control method," IEEE Access, vol. 9, Article ID 71159, 2021.
|
| 334 |
+
|
| 335 |
+
[4] Y. Wang, B. Chen, and H. Wu, "Joint space tracking control of underwater vehicle-manipulator systems using continuous nonsingular fast terminal sliding mode," Proceedings of the Institution of Mechanical Engineers, Part M: Journal of Engineering for the Maritime Environment, vol. 232, no. 4, pp. 448-458, 2018.
|
| 336 |
+
|
| 337 |
+
[5] W. Luo and H. Cong, "Robust NN control of the manipulator in the underwater vehicle-manipulator system," Advances in Neural Networks, Pt II, vol. 10262, pp. 75-82, 2017.
|
| 338 |
+
|
| 339 |
+
[6] O. Mofid, S. Mobayen, and A. Fekih, "Adaptive integral-type terminal sliding mode control for unmanned aerial vehicle under model uncertainties and external disturbances," IEEE Access, vol. 9, Article ID 53255, 2021.
|
| 340 |
+
|
| 341 |
+
[7] J. Woolfrey, D. Liu, and M. Carmichael, "Kinematic control of an autonomous underwater vehicle-manipulator system (AUVMS) using autoregressive prediction of vehicle motion and model predictive control," in Proc. 2016 IEEE International Conference on Robotics and Automation, New York, 2016, pp. 4591-4596.
|
| 342 |
+
|
| 343 |
+
[8] J. Han, W. K. Chung, N. Sakagami, and F. Petry, "Active use of restoring moments for motion control of an underwater vehicle-manipulator system," IEEE Journal of Oceanic Engineering, vol. 39, no. 1, pp. 100-109, 2014.
|
| 344 |
+
|
| 345 |
+
[9] Z. Chen, X. Yang, and X. Liu, "RBFNN-based nonsingular fast terminal sliding mode control for robotic manipulators including actuator dynamics," Neurocomputing, vol. 362, pp. 72-82, 2019.
|
| 346 |
+
|
| 347 |
+
[10] W. Luo, "A new neural network control method for electrically driven rigid manipulator," dissertation, Fuzhou University, 2002.
|
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/ImUUzCj4k8/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,349 @@
|
| 1 |
+
§ UVMS TRAJECTORY TRACKING BASED ON RBFNN AND SLIDING MODE CONTROL
|
| 2 |
+
|
| 3 |
+
Huiyi Luo
|
| 4 |
+
|
| 5 |
+
Fuzhou Institute of Oceanography, Fuzhou University, Fuzhou 350108, China College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China 18278811826@163.com
|
| 6 |
+
|
| 7 |
+
Weilin Luo
|
| 8 |
+
|
| 9 |
+
Fuzhou Institute of Oceanography, Fuzhou University, Fuzhou 350108, China; College of Mechanical Engineering and Automation, Fuzhou University,
|
| 10 |
+
|
| 11 |
+
Fuzhou 350108, China;
|
| 12 |
+
|
| 13 |
+
wlluo@fzu.edu.cn
|
| 14 |
+
|
| 15 |
+
Yuanjing Wang
|
| 16 |
+
|
| 17 |
+
College of Mechanical Engineering and
|
| 18 |
+
|
| 19 |
+
Automation, Fuzhou University,
|
| 20 |
+
|
| 21 |
+
Fuzhou 350108, China
|
| 22 |
+
|
| 23 |
+
xyjw325@163.com
|
| 24 |
+
|
| 25 |
+
Abstract—This article addresses the UVMS trajectory tracking control problem under electric drive. First, based on Radial Basis Function Neural Network (RBFNN) and Nonsingular Fast Terminal Sliding Mode (NFTSM) methods, a tracking strategy for UVMS is designed. Further, to handle the singularity problem, a saturation-based tracking controller is obtained by means of the methods mentioned above. Lyapunov design is adopted to guarantee the asymptotic stability of the proposed controller. Simulation results show that the tracking performance of NN-NFTSM is better than that of the PD and NN approaches, which testifies to the validity and advantages of the proposed controller.
|
| 26 |
+
|
| 27 |
+
Keywords-UVMS, electric drive, trajectory tracking, fast nonsingular terminal sliding mode, RBF neural network
|
| 28 |
+
|
| 29 |
+
§ I. INTRODUCTION
|
| 30 |
+
|
| 31 |
+
Underwater Vehicle-Manipulator Systems (UVMS), which can control an underwater manipulator to complete underwater tasks instead of human beings, are an effective means of developing ocean energy at present. Usually, a UVMS is constituted by an n-link manipulator attached to an underwater vehicle such as an ROV (Remotely Operated Vehicle) or AUV (Autonomous Underwater Vehicle). As a vital tool of underwater vehicles, UVMS is significant for underwater operations such as underwater real-time shooting, underwater target reconnaissance and surveillance, marine resource exploitation, and marine bioprospecting. UVMS plays a supporting role in various marine underwater missions and has become a research focus for many scholars.
|
| 32 |
+
|
| 33 |
+
How to handle the uncertainties of the underwater environment, such as currents and oceanic internal waves, is the biggest challenge in designing a high-performance controller for a UVMS. For this reason, the effectiveness and robustness of the controller are crucial. Xu et al. adopted fuzzy based control techniques to study a 6-DOF AUV with a 3-DOF on-board manipulator [1]. Wei et al. applied a nonlinear disturbance observer to a UVMS to evaluate unpredictable external disturbances in real time, with an adaptive sliding mode approach used for compensation [2]. Mobayen et al. adopted a continuous nonsingular fast terminal sliding mode control with time-delay estimation, which ensures satisfactory tracking performance and sufficient robustness of a UVMS [3]. Wang et al. combined sliding mode control and adaptive fuzzy control into a multi-strategy fusion scheme that addresses the motion control problem of UVMS [4]. Luo et al. applied neural networks to the tracking of a 3-link UVMS; the robustness of the controller was verified by comparison with a PD control method [5]. Mofid et al. applied a fuzzy terminal sliding mode control approach with time-delay estimation, which uses fuzzy rules to adaptively shape the terminal sliding mode surface and eliminate the unpredictable internal and external disturbances acting on the manipulator [6]. Woolfrey et al. applied a model predictive control scheme to study the kinematics of a UVMS affected by fluctuations, and the results show that the approach has excellent predictive performance [7]. Han and Chung proposed an approach that uses restoring moments to explore the motion control of a UVMS under external disturbance [8].
|
| 34 |
+
|
| 35 |
+
This article proposes a nonsingular fast terminal sliding mode cascade controller combined with an RBF neural network for the manipulator control problem of UVMS. The interaction between the vehicle and the manipulator is the main source of external disturbance acting on the UVMS. The Lyapunov approach is applied to verify the stability of the cascade controller, and the effectiveness and robustness of the designed controller are confirmed by numerical simulation.
|
| 36 |
+
|
| 37 |
+
§ II. PROBLEM FORMULATION
|
| 38 |
+
|
| 39 |
+
When the UVMS moves to the working area, it is sometimes necessary for the underwater vehicle body to maintain a stable hover while the manipulator works according to the task requirements. In this case, the body-fixed reference frame attached to the underwater vehicle can be regarded as the earth-fixed inertial reference frame, and the motion of the entire UVMS can be treated as the motion control of the underwater robotic manipulator subject to disturbance.
|
| 40 |
+
|
| 41 |
+
Since the influence of the underwater vehicle on the robotic manipulator is difficult to express with a mathematical model, it can be regarded as a disturbance acting on the manipulator. The nonlinear dynamics of the underwater robotic manipulator is written as
|
| 42 |
+
|
| 43 |
+
$$
|
| 44 |
+
M\left( q\right) \ddot{q} + C\left( {q,\dot{q}}\right) \dot{q} + D\left( {q,\dot{q}}\right) \dot{q} + G\left( q\right) + \Delta = {\tau }_{ms} \tag{1}
|
| 45 |
+
$$
|
| 46 |
+
|
| 47 |
+
where $\Delta$ denotes the uncertainty induced by the interaction of the underwater vehicle and manipulator, $M$ denotes the inertia matrix, $C$ denotes the Coriolis-centripetal matrix, $D$ denotes the water resistance (damping) matrix, $G$ denotes the equivalent gravity vector, and ${\tau }_{ms}$ denotes the control input.
|
| 48 |
+
|
| 49 |
+
Corresponding Author: W. Luo
|
| 50 |
+
|
| 51 |
+
This work was supported by the Natural Science Foundation of Fujian Province, China through Grant 2023J011572, and Fuzhou Institute of Oceanography through Grants 2021F11 & 2022F13.
|
| 52 |
+
|
| 53 |
+
Fig. 1 displays the underwater robotic manipulator combined with the underwater vehicle to form a three-link UVMS, shown in the manipulator's starting position. The joint at each connecting-rod hinge is driven by a motor, so as to meet the operational requirements of the three degree-of-freedom underwater robotic manipulator.
|
| 54 |
+
|
| 55 |
+
|
| 56 |
+
|
| 57 |
+
Fig. 1 Three-link manipulator UVMS
|
| 58 |
+
|
| 59 |
+
Since each joint of the underwater robotic manipulator is driven by a DC motor, the motor driving torque can be described as
|
| 60 |
+
|
| 61 |
+
$$
|
| 62 |
+
{\tau }_{me} = {K}_{me}I \tag{2}
|
| 63 |
+
$$
|
| 64 |
+
|
| 65 |
+
where $I$ denotes the electrical current, ${K}_{me}$ denotes the coefficient matrix during the process of electrical current change to torque.
|
| 66 |
+
|
| 67 |
+
The dynamics of the electrical circuit can be described as
|
| 68 |
+
|
| 69 |
+
$$
|
| 70 |
+
{\tau }_{e} = {L}_{e}\dot{I} + {R}_{e}I + {K}_{e}\dot{q} \tag{3}
|
| 71 |
+
$$
|
| 72 |
+
|
| 73 |
+
where ${\tau }_{e},{L}_{e},{R}_{e}$ denote the motor coil voltage vector, inductance matrix and resistance matrix, respectively, and ${K}_{e}$ denotes the back-EMF constant matrix.
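Equation (3) can be rearranged as $\dot{I} = {L}_{e}^{-1}\left( {\tau }_{e} - {R}_{e}I - {K}_{e}\dot{q}\right)$ and integrated per joint. A minimal sketch using the diagonal parameter values of TABLE I ($L_e = 0.1$, $R_e = 1$, $K_e = 0.5$); the voltage input, initial state, and step size are hypothetical:

```python
import numpy as np

# Diagonal motor parameters for the three joints (TABLE I values).
L_e = np.diag([0.1, 0.1, 0.1])   # inductance
R_e = np.diag([1.0, 1.0, 1.0])   # resistance
K_e = np.diag([0.5, 0.5, 0.5])   # back-EMF constant

def current_step(I, tau_e, qdot, dt=1e-4):
    """One Euler step of Eq. (3): L_e * dI/dt = tau_e - R_e @ I - K_e @ qdot."""
    dI = np.linalg.solve(L_e, tau_e - R_e @ I - K_e @ qdot)
    return I + dt * dI

I = np.zeros(3)                      # coils initially unenergized (assumed)
tau_e = np.array([1.0, 0.5, 0.0])    # hypothetical coil voltages (V)
qdot = np.zeros(3)                   # joints initially at rest (assumed)
I = current_step(I, tau_e, qdot)
```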
|
| 74 |
+
|
| 75 |
+
Then, a cascaded system containing the subsystem of machinery and electricity consists of Equations (1) and (3).
|
| 76 |
+
|
| 77 |
+
§ III. CONTROLLER DESIGN
|
| 78 |
+
|
| 79 |
+
§ A. NN BASED CONTROLLER
|
| 80 |
+
|
| 81 |
+
According to Equation (2), an ideal trajectory design is carried out for the desired joint angle of the underwater robotic manipulator. By defining the desired joint angle as ${q}_{d}$ and considering Equations (1) and (3), the desired input signal of the electrical current can be described as
|
| 82 |
+
|
| 83 |
+
$$
|
| 84 |
+
{I}_{d} = {K}_{me}^{-1}\left( {M{\ddot{q}}_{d} + C\dot{q} + D\dot{q} + G + \Delta + {\tau }_{1}}\right) \tag{4}
|
| 85 |
+
$$
|
| 86 |
+
|
| 87 |
+
where ${\tau }_{1}$ denotes the auxiliary controller for the dynamics of the underwater robotic manipulator. Similarly, the auxiliary controller of the electrical system can be designed as
|
| 88 |
+
|
| 89 |
+
$$
|
| 90 |
+
{\tau }_{e} = {R}_{e}{I}_{d} + {K}_{e}{\dot{q}}_{d} + {\tau }_{2} \tag{5}
|
| 91 |
+
$$
|
| 92 |
+
|
| 93 |
+
where ${\tau }_{2}$ represents the auxiliary controller for electrical system.
|
| 94 |
+
|
| 95 |
+
Further, define joint tracking error as
|
| 96 |
+
|
| 97 |
+
$$
|
| 98 |
+
e = {q}_{d} - q \tag{6}
|
| 99 |
+
$$
|
| 100 |
+
|
| 101 |
+
To guarantee convergence performance, design the fast terminal sliding surface as
|
| 102 |
+
|
| 103 |
+
$$
|
| 104 |
+
s = \dot{e} + {\alpha }_{1}{\operatorname{sign}}^{{\gamma }_{1}}\left( e\right) + {\alpha }_{2}{\operatorname{sign}}^{{\gamma }_{2}}\left( e\right) \tag{7}
|
| 105 |
+
$$
|
| 106 |
+
|
| 107 |
+
where ${\operatorname{sign}}^{\Delta }\left( \cdot \right) = {\left| \cdot \right| }^{\Delta }\operatorname{sign}\left( \cdot \right)$, ${\gamma }_{1} \geq 1$, $0 \leq {\gamma }_{2} \leq 1$, and ${\alpha }_{1}$, ${\alpha }_{2}$ are positive gain matrices.
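A minimal numerical sketch of the sliding surface (7), using the elementwise $\operatorname{sign}^{\Delta}$ operator of Equation (7); the gain and exponent values below are hypothetical scalars, not the paper's tuned matrices:

```python
import numpy as np

def sig(x, gamma):
    """sign^gamma(x) = |x|^gamma * sign(x), applied elementwise."""
    return np.abs(x) ** gamma * np.sign(x)

def ftsm_surface(e, e_dot, alpha1=1.0, alpha2=1.0, gamma1=1.5, gamma2=0.6):
    """Eq. (7): s = e_dot + alpha1*sign^gamma1(e) + alpha2*sign^gamma2(e)."""
    return e_dot + alpha1 * sig(e, gamma1) + alpha2 * sig(e, gamma2)

e = np.array([0.04, -0.25, 0.0])     # hypothetical joint angle errors
e_dot = np.array([0.0, 0.1, 0.0])    # hypothetical error rates
s = ftsm_surface(e, e_dot)
```

The $\gamma_2 < 1$ term dominates for small $\left| e\right|$, which is what gives the surface its fast terminal convergence.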
|
| 108 |
+
|
| 109 |
+
Derivative of fast terminal sliding surface is
|
| 110 |
+
|
| 111 |
+
$$
|
| 112 |
+
\dot{s} = \ddot{e} + \left( {{\alpha }_{1}{\gamma }_{1}{\left| e\right| }^{{\gamma }_{1} - 1} + {\alpha }_{2}{\gamma }_{2}{\left| e\right| }^{{\gamma }_{2} - 1}}\right) \dot{e} \tag{8}
|
| 113 |
+
$$
|
| 114 |
+
|
| 115 |
+
To facilitate calculation, auxiliary variables are introduced as
|
| 116 |
+
|
| 117 |
+
$$
|
| 118 |
+
\left\{ \begin{array}{l} \vartheta = {\alpha }_{1}{\operatorname{sign}}^{{\gamma }_{1}}\left( e\right) + {\alpha }_{2}{\operatorname{sign}}^{{\gamma }_{2}}\left( e\right) \\ \mu = {\alpha }_{1}{\gamma }_{1}{\left| e\right| }^{{\gamma }_{1} - 1} + {\alpha }_{2}{\gamma }_{2}{\left| e\right| }^{{\gamma }_{2} - 1} \end{array}\right. \tag{9}
|
| 119 |
+
$$
|
| 120 |
+
|
| 121 |
+
Substituting Equation (9) into Equations (7) and (8) yields
|
| 122 |
+
|
| 123 |
+
$$
|
| 124 |
+
\left\{ \begin{array}{l} s = \dot{e} + \vartheta \\ \dot{s} = \ddot{e} + \mu \dot{e} \end{array}\right. \tag{10}
|
| 125 |
+
$$
|
| 126 |
+
|
| 127 |
+
Defining the electrical current error as $\eta = {I}_{d} - I$, one has
|
| 128 |
+
|
| 129 |
+
$$
|
| 130 |
+
M\left( q\right) \dot{s} = {M\mu }\dot{e} + M\ddot{e}
|
| 131 |
+
$$
|
| 132 |
+
|
| 133 |
+
$$
|
| 134 |
+
= {M\mu }\dot{e} + M\left( {{\ddot{q}}_{d} - \ddot{q}}\right) \tag{11}
|
| 135 |
+
$$
|
| 136 |
+
|
| 137 |
+
$$
|
| 138 |
+
= {M\mu }\dot{e} + {K}_{me}\eta - C\dot{e} + \Delta - {\tau }_{1}
|
| 139 |
+
$$
|
| 140 |
+
|
| 141 |
+
and
|
| 142 |
+
|
| 143 |
+
$$
|
| 144 |
+
L\dot{\eta } = L{\dot{I}}_{d} - L\dot{I} = - {R\eta } - K\left( {s - \vartheta }\right) - {\tau }_{2} + L{\dot{I}}_{d}. \tag{12}
|
| 145 |
+
$$
|
| 146 |
+
|
| 147 |
+
To drive the error dynamics in Equations (11) and (12) to zero, Lyapunov design is utilized, and a positive definite Lyapunov function can be written as
|
| 148 |
+
|
| 149 |
+
$$
|
| 150 |
+
{V}_{1} = \frac{1}{2}\left( {{e}^{\mathrm{T}}e + {s}^{\mathrm{T}}{Ms} + {\eta }^{\mathrm{T}}{L\eta }}\right) \tag{13}
|
| 151 |
+
$$
|
| 152 |
+
|
| 153 |
+
The time derivative of Equation (13) satisfies
|
| 154 |
+
|
| 155 |
+
$$
|
| 156 |
+
{\dot{V}}_{1} = {s}^{T}\left( {e + {M\mu }\dot{e} + {C\vartheta } + \Delta - {\tau }_{1}}\right) - {e}^{T}\vartheta \tag{14}
|
| 157 |
+
$$
|
| 158 |
+
|
| 159 |
+
$$
|
| 160 |
+
+ {\eta }^{T}\left\lbrack {-{R}_{e}\eta + {K}_{me}s + K\left( {s - \vartheta }\right) + {L}_{e}{\dot{I}}_{d} - {\tau }_{2}}\right\rbrack
|
| 161 |
+
$$
|
| 162 |
+
|
| 163 |
+
Equation (14) contains nonlinear terms that affect the trajectory tracking control of the underwater robotic manipulator. For this reason, an RBF neural network is adopted to estimate the nonlinear terms. In detail, let
|
| 164 |
+
|
| 165 |
+
$$
|
| 166 |
+
\left\{ \begin{array}{l} {f}_{1} = e + {\mu M}\dot{e} + {C\vartheta } + \Delta = {W}_{1}^{\mathrm{T}}{h}_{1}\left( x\right) + {\varepsilon }_{1} \\ {f}_{2} = {K}_{me}s - {R}_{e}\eta + {K}_{e}\left( {s - \vartheta }\right) + {L}_{e}{\dot{I}}_{d} = {W}_{2}^{\mathrm{T}}{h}_{2}\left( x\right) + {\varepsilon }_{2} \end{array}\right. \tag{15}
|
| 167 |
+
$$
|
| 168 |
+
|
| 169 |
+
where ${W}_{i},{h}_{i},{\varepsilon }_{i}$ denote weights, inputs and regression errors, respectively.
|
| 170 |
+
|
| 171 |
+
The controllers ${\tau }_{1}$ and ${\tau }_{2}$ can be given as
|
| 172 |
+
|
| 173 |
+
$$
|
| 174 |
+
\left\{ \begin{array}{l} {\tau }_{1} = {W}_{1e}^{\mathrm{T}}{h}_{1}\left( x\right) + {\alpha }_{1}{Ms} \\ {\tau }_{2} = {W}_{2e}^{\mathrm{T}}{h}_{2}\left( x\right) + {\alpha }_{2}{L\eta } \end{array}\right. \tag{16}
|
| 175 |
+
$$
|
| 176 |
+
|
| 177 |
+
where ${W}_{ie}$ denote updated weight matrices.
|
| 178 |
+
|
| 179 |
+
To achieve good robustness of the neural network controller, the weight update law is designed as
|
| 180 |
+
|
| 181 |
+
$$
|
| 182 |
+
\left\{ \begin{array}{l} {\dot{W}}_{1e} = {k}_{1}{h}_{1}\left( {X}_{1}\right) {s}^{\mathrm{T}} - {k}_{2}{W}_{1e} \\ {\dot{W}}_{2e} = {k}_{1}{h}_{2}\left( {X}_{2}\right) {\eta }^{\mathrm{T}} - {k}_{2}{W}_{2e} \end{array}\right. \tag{17}
|
| 183 |
+
$$
|
| 184 |
+
|
| 185 |
+
As pointed out in [9], in a conventional sliding mode approach the term ${\alpha }_{2}{\gamma }_{2}{\left| e\right| }^{{\gamma }_{2} - 1}\dot{e}$ in Equation (8) carries a negative exponent (${\gamma }_{2} - 1 < 0$), so it becomes singular as $e \rightarrow 0$. To deal with this singularity, one might use the following saturation
|
| 186 |
+
|
| 187 |
+
$$
|
| 188 |
+
\operatorname{sat}\left( {v}_{z}\right) = \left\{ \begin{matrix} {v}_{z} & \left| {v}_{z}\right| \leq \bar{w} \\ \bar{w}\operatorname{sign}\left( {v}_{z}\right) & \left| {v}_{z}\right| \geq \bar{w} \end{matrix}\right. \tag{18}
|
| 189 |
+
$$
|
| 190 |
+
|
| 191 |
+
where ${v}_{z} = {\alpha }_{2}{\gamma }_{2}{\left| e\right| }^{{\gamma }_{2} - 1}\dot{e},\bar{w}$ is a positive number.
|
| 192 |
+
|
| 193 |
+
Substituting Equation (18) into Equation (7), and replacing the fast terminal sliding mode (FTSM) surface with the nonsingular fast terminal sliding mode (NFTSM) surface, yields
|
| 194 |
+
|
| 195 |
+
$$
|
| 196 |
+
{\dot{s}}_{2} = \ddot{e} + {\alpha }_{1}{\gamma }_{1}\dot{e}{\left| e\right| }^{{\gamma }_{1} - 1} + {v}_{z} \tag{19}
|
| 197 |
+
$$
|
| 198 |
+
|
| 199 |
+
Similarly, we can get
|
| 200 |
+
|
| 201 |
+
$$
|
| 202 |
+
M\left( q\right) {\dot{s}}_{2} = M{\alpha }_{1}{\gamma }_{1}\dot{e}{\left| e\right| }^{{\gamma }_{1} - 1} + {v}_{z} + M\ddot{e}
|
| 203 |
+
$$
|
| 204 |
+
|
| 205 |
+
$$
|
| 206 |
+
= M{\alpha }_{1}{\gamma }_{1}\dot{e}{\left| e\right| }^{{\gamma }_{1} - 1} + {v}_{z} + M\left( {{\ddot{q}}_{d} - \ddot{q}}\right) \tag{20}
|
| 207 |
+
$$
|
| 208 |
+
|
| 209 |
+
$$
|
| 210 |
+
= M{\alpha }_{1}{\gamma }_{1}\dot{e}{\left| e\right| }^{{\gamma }_{1} - 1} + {v}_{z} + {K}_{me}\eta - C\dot{e} + \Delta - {\tau }_{1}
|
| 211 |
+
$$
|
| 212 |
+
|
| 213 |
+
To guarantee the stability, Lyapunov function is defined as
|
| 214 |
+
|
| 215 |
+
$$
|
| 216 |
+
{V}_{2} = \frac{1}{2}\left( {{e}^{T}e + {s}_{2}^{T}M{s}_{2} + {\eta }^{T}{L\eta }}\right) \tag{21}
|
| 217 |
+
$$
|
| 218 |
+
|
| 219 |
+
Its derivative is
|
| 220 |
+
|
| 221 |
+
$$
|
| 222 |
+
{\dot{V}}_{2} = {s}_{2}^{T}\left( {e + M{\alpha }_{1}{\gamma }_{1}{\left| e\right| }^{{\gamma }_{1} - 1}\dot{e} + {v}_{z} + {C\vartheta } + \Delta - {\tau }_{1}}\right) - {e}^{T}\vartheta \tag{22}
|
| 223 |
+
$$
|
| 224 |
+
|
| 225 |
+
$$
|
| 226 |
+
+ {\eta }^{T}\left\lbrack {-{R}_{e}\eta + {K}_{me}s + K\left( {s - \vartheta }\right) + {L}_{e}{\dot{I}}_{d} - {\tau }_{2}}\right\rbrack
|
| 227 |
+
$$
|
| 228 |
+
|
| 229 |
+
Combined with (15), the nonlinear term in the above expression can be cast as
|
| 230 |
+
|
| 231 |
+
$$
|
| 232 |
+
{f}_{3} = e + M{\alpha }_{1}{\gamma }_{1}{\left| e\right| }^{{\gamma }_{1} - 1}\dot{e} + {C\vartheta } + \Delta = {W}_{1N}^{\mathrm{T}}{h}_{1}\left( x\right) + {\varepsilon }_{1} \tag{23}
|
| 233 |
+
$$
|
| 234 |
+
|
| 235 |
+
The auxiliary controllers ${\bar{\tau }}_{1}$ and ${\tau }_{2}$ can be described as
|
| 236 |
+
|
| 237 |
+
$$
|
| 238 |
+
\left\{ \begin{array}{l} {\bar{\tau }}_{1} = {W}_{1Ne}^{T}{h}_{1}\left( x\right) + {\alpha }_{1}M{s}_{2} - M{v}_{z} \\ {\tau }_{2} = {W}_{2e}^{\mathrm{T}}{h}_{2}\left( x\right) + {\alpha }_{2}{L\eta } \end{array}\right. \tag{24}
|
| 239 |
+
$$
|
| 240 |
+
|
| 241 |
+
§ B. STABILITY ANALYSIS
|
| 242 |
+
|
| 243 |
+
A Lyapunov function is designed as
|
| 244 |
+
|
| 245 |
+
$$
|
| 246 |
+
{V}_{3} = {V}_{2} + \frac{1}{2{k}_{1}}\mathop{\sum }\limits_{{i = 1}}^{2}{\begin{Vmatrix}{\widetilde{W}}_{i}\end{Vmatrix}}_{F}^{2} \tag{25}
|
| 247 |
+
$$
|
| 248 |
+
|
| 249 |
+
where ${\widetilde{W}}_{i} = {W}_{i} - {W}_{ie}$ represents weight error.

Its derivative is

$$
{\dot{V}}_{3} \leq - 2{\alpha }_{0}{V}_{3} + {s}^{T}{\varepsilon }_{1} + {\eta }^{T}{\varepsilon }_{2} - a\left( {{\alpha }_{1}{s}^{T}{Ms} + {\alpha }_{2}{\eta }^{T}{L\eta }}\right)
$$

$$
 + {k}_{2}\left( {\mathop{\sum }\limits_{{i = 1}}^{2}{\left( {\widetilde{W}}_{i},{W}_{i}\right) }_{F} - a\mathop{\sum }\limits_{{i = 1}}^{2}{\begin{Vmatrix}{\widetilde{W}}_{i}\end{Vmatrix}}_{F}^{2}}\right) \tag{26}
$$

in which $0 \leq a \leq 1$ and ${\alpha }_{0} = \min \left\{ {\left( {1 - a}\right) {\alpha }_{1},\left( {1 - a}\right) {\alpha }_{2},\left( {1 - a}\right) {k}_{2}}\right\}$.

In accordance with [10], it holds that

$$
{\dot{V}}_{2} \leq - 2{\alpha }_{0}{V}_{2} + \lambda ,\left( {\lambda > 0}\right) \tag{27}
$$

Further, (27) can be tightened to

$$
{\dot{V}}_{2} \leq - 2{\alpha }_{0}{V}_{2} \leq 0 \tag{28}
$$

From Equations (27) and (28), it can be concluded that the tracking system is stable, which verifies the effectiveness of the proposed controller for the UVMS underwater robotic manipulator.

§ IV. SIMULATION

To verify the validity and advantages of the designed tracking controller, i.e., the neural-network-based nonsingular fast terminal sliding mode (NN-NFTSM) controller, a comparison is conducted with traditional PD control and neural network control approaches. TABLE I. displays the parameters of the robotic manipulator and the controller.

TABLE I. PARAMETERS OF THE UVMS

| Items | Rod1 | Rod2 | Rod3 |
| --- | --- | --- | --- |
| Length (m) | 1 | 1 | 1 |
| Mass (kg) | 1 | 1 | 2 |
| ${L}_{e}$ | 0.1 | 0.1 | 0.1 |
| ${R}_{e}$ | 1 | 1 | 1 |
| ${K}_{e}$ | 0.5 | 0.5 | 0.5 |
| ${K}_{me}$ | 1 | 1 | 1 |

Controller parameters: $\bar{w} = 0.5$, ${\alpha }_{1},{\alpha }_{2} = 200$, ${k}_{p},{k}_{d} = 300$, ${k}_{1},{k}_{2} = 50, 0.8$.
Since the underwater robotic manipulator is mounted on an underwater vehicle to form the UVMS, the first joint of the manipulator directly interacts with the vehicle. In the simulation, this interaction is assumed to be a transient disturbance signal: a force of ${200}\mathrm{\;N}$ is applied to the vehicle at $t = {1.7}\mathrm{\;s}$.

Fig. 2 displays the spatial tracking performance of the UVMS end effector. It can be seen that the proposed NN-NFTSM controller clearly outperforms the traditional PD control and neural network control methods.

Fig. 2 Spatial tracking effect of UVMS end effector

Fig. 3 shows the results of joint angle tracking control. Both the proposed NN-NFTSM controller and the neural network controller achieve higher tracking stability than PD control.

Fig. 3 Results of joint angle tracking control

In Fig. 4 and Fig. 5, the tracking performance of the UVMS end effector in the $x, y, z$ directions is displayed. With neural network control, tracking in all three directions reaches stability, while the proposed nonsingular fast terminal sliding mode method combined with the RBF neural network tracks the desired trajectory more quickly and stably.

Fig. 4 Tracking effect of UVMS end effector in x, y, z directions.

Fig. 5 UVMS end effector tracking error

Fig. 6 - Fig. 8 compare the MAE and RMSE under the three control schemes. NN-NFTSM achieves higher accuracy than both RBF neural network (NN) control and PD control.

Fig. 8 Error in z direction
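For reference, the two error metrics compared above can be computed as below; the trajectories are illustrative, not the paper's simulation data.

```python
import numpy as np

# MAE and RMSE of a tracking-error signal, as compared in Fig. 6 - Fig. 8.
def mae(err):
    return np.mean(np.abs(err))

def rmse(err):
    return np.sqrt(np.mean(err ** 2))

t = np.linspace(0.0, 10.0, 1001)
desired = np.sin(t)
actual = np.sin(t) + 0.01 * np.exp(-t)   # tracking with a decaying error
err = actual - desired

print(mae(err), rmse(err))
```

RMSE weights large deviations more heavily than MAE, so the transient at the disturbance instant shows up more strongly in RMSE.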
§ V. CONCLUSION

In this article, an RBFNN-based nonsingular fast terminal sliding mode controller is designed for the UVMS. The uncertain nonlinear terms of the UVMS system are approximated by the RBF neural network, and Lyapunov analysis is used to establish the stability and feasibility of the proposed controller. It is proved that the tracking errors converge to a small neighborhood of zero within finite time. Finally, the simulation results confirm that the proposed controller performs well on the UVMS system.

§ ACKNOWLEDGMENT

The work in this paper is partly supported by the Natural Science Foundation of Fujian Province of China under Grant 2023J011572, and partly supported by the Fuzhou Institute of Oceanography under Grants 2021F11 & 2022F13.

---

papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/IuP6BhQcDi/Initial_manuscript_md/Initial_manuscript.md
# Performance-Based Human-in-the-Loop Optimal Bipartite Consensus Control for Multi-Agent Systems via Reinforcement Learning

Zongsheng Huang
School of Automation Engineering
University of Electronic Science and Technology of China, Chengdu 611731, China
zs_Huang@163.com

Tieshan Li
School of Automation Engineering
University of Electronic Science and Technology of China, Chengdu 611731, China
tieshanli@126.com

Yue Long
School of Automation Engineering
University of Electronic Science and Technology of China, Chengdu 611731, China
longyue@uestc.edu.cn

Hanqing Yang
School of Automation Engineering
University of Electronic Science and Technology of China, Chengdu 611731, China
hqyang5517@uestc.edu.cn

**Abstract**-This paper investigates the performance-based human-in-the-loop (HiTL) optimal bipartite consensus control problem for nonlinear multi-agent systems (MASs) under signed topology. First, to respond to emergencies and guarantee the safety of the MASs, the MASs are monitored by a human operator who sends command signals to the non-autonomous leader. Then, under the joint design architecture of a prescribed-time performance function and an error transformation, a novel performance index function involving the transformed error and the control input is developed to achieve optimal bipartite consensus within a prescribed time. Subsequently, the reinforcement learning (RL) method is utilized to learn the solution to the Hamilton-Jacobi-Bellman (HJB) equation, in which fuzzy logic systems (FLSs) are employed to implement the method. Finally, the simulation results depict the effectiveness of the constructed control scheme.

Index Terms-Human-in-the-loop control, prescribed-time control, reinforcement learning, nonlinear multi-agent systems.

## I. INTRODUCTION

In recent years, with the rapid development of multiple unmanned aerial vehicles (UAVs) [1], multiple unmanned ground vehicles (UGVs) [2], and other fields, multi-agent systems (MASs) have received increasing attention from scholars. As one of the hot issues in the control of MASs, the consensus control problem has been widely studied. As a branch of consensus control, bipartite consensus was first introduced in [3], taking both competitive and cooperative relationships between agents into consideration. Under bipartite consensus, the agents eventually converge to two states of opposite sign but equal magnitude. In [4]-[6], various bipartite consensus control strategies have been designed.

Notably, the MASs mentioned above are fully autonomous. However, incidents with Boeing 737 jetliners and Tesla's autonomous driving systems have raised serious concerns and highlighted the challenges that fully autonomous MASs face in making judgments in uncertain and complex environments. Therefore, it is urgent to develop monitoring schemes to complete tasks when MASs encounter unexpected situations [7]. Fortunately, the human-in-the-loop (HiTL) control approach was introduced in MASs to supervise the entire system and respond to sudden changes by sending commands to the leader agent [8]. Later, many studies on HiTL control for MASs have emerged [9]-[15]. In [9], a HiTL formation tracking control scheme together with an edge-based event-driven mechanism was constructed for MASs. Considering stochastic actuation attacks, in [13], the prescribed-time and prescribed-accuracy HiTL cluster consensus control problem was solved. In view of its ability to deal with emergencies, the HiTL control approach has also been adopted in multi-UAV systems [14], [15].

Optimal control, a widely used control method, has garnered significant attention. For nonlinear systems, the optimal solution is derived from the Hamilton-Jacobi-Bellman (HJB) equation. However, obtaining the analytical solution of the HJB equation is generally infeasible. To overcome this challenge, reinforcement learning (RL), motivated by animal behaviors, was proposed as a powerful tool [16]. The core idea of RL is to approximate the solution of the HJB equation using a function approximation structure. The value iteration algorithm, one of the valuable algorithms in RL, was developed by Murray et al. in [17], where the convergence analysis was also detailed. In [18], the policy iteration algorithm, another equally important algorithm, was designed to obtain the optimal saturated controller for nonlinear systems. Building on this work, the RL method has been used to solve optimal control problems for MASs. In [19], an RL-based optimal control protocol was designed to achieve containment control without prior knowledge of the system dynamics. For unknown discrete-time MASs, in [20], the optimal bipartite consensus control problem was solved. Nevertheless, the above results only conclude that the optimal controller is globally asymptotically stable. It is important to note that achieving a specified accuracy within a given time is crucial in many fields.

---

This work was supported in part by the National Natural Science Foundation of China under Grant 51939001, Grant 62273072, and Grant 62203088, and in part by the Natural Science Foundation of Sichuan Province under Grant 2022NSFSC0903. (Corresponding author: Tieshan Li)

---

Fortunately, prescribed-time control (PTC) was first proposed by Song et al. [21]. PTC differs from finite-time and fixed-time control in that the preset settling time does not depend on the initial values of the system. Building on [21], in [22], the convergence rate can be predetermined as needed, and a general method for constructing the time-varying rate function was provided. In [23], a novel time-varying constraint function was devised to guarantee that the system remains operational beyond the prescribed time, leading to a global result. In particular, a PTC-based HiTL control scheme was developed in [13] to realize cluster consensus within a given time. However, to the best of the authors' knowledge, the bipartite consensus control scheme considering both optimal performance and prescribed-time performance under the HiTL framework has not been fully explored, which motivates our research.

Driven by these observations, this paper investigates the performance-based HiTL optimal bipartite consensus control problem. The main contributions are summarized below.

(1) Unlike the autonomous leaders in [4]-[6], which lacked intelligent decision-making, this paper improves the security, stability, and emergency response capability of the system by designing the leader of the MASs to be non-autonomous, with a time-varying control input governed by a human operator.

(2) Compared with the existing optimal results for MASs in [19], [20], a unified design framework combining PTC and the RL method is proposed to realize both optimal performance and prescribed-time performance, where the settling time and accuracy can be preset independently of the initial values.

The structure of this paper is given below. In Section II, the considered system and some assumptions are given. In Section III, the main results, including the PTC performance function and the optimal controller, are presented. In Section IV, the convergence analysis is provided. The simulation results are given in Section V. Finally, the conclusion is presented in Section VI.

## II. Problem Formulation and Preliminaries

## A. Signed Communication Topologies

The structurally balanced bipartite communication topology containing $N$ followers is represented by a directed graph $\mathcal{G} = \{ \mathcal{V},\varepsilon ,\mathcal{A}\}$, where $\mathcal{V} = \left\{ {{\mathcal{V}}_{1},{\mathcal{V}}_{2},\cdots ,{\mathcal{V}}_{N}}\right\}$ denotes the vertex set, which is divided into the cooperative set ${\mathcal{V}}_{\alpha }$ and the competitive set ${\mathcal{V}}_{\beta }$ such that ${\mathcal{V}}_{\alpha } \cap {\mathcal{V}}_{\beta } = \varnothing$ and ${\mathcal{V}}_{\alpha } \cup {\mathcal{V}}_{\beta } = \mathcal{V}$. $\varepsilon \subseteq \mathcal{V} \times \mathcal{V}$ denotes the edge set of the $N$ followers. Let $\mathcal{A} = \left\lbrack {a}_{ij}\right\rbrack \in {\mathbb{R}}^{N \times N}$ be the signed weight matrix, where ${a}_{ij} > 0$ if ${\mathcal{V}}_{i},{\mathcal{V}}_{j} \in {\mathcal{V}}_{m}, m \in \{ \alpha ,\beta \}$, and ${a}_{ij} < 0$ if ${\mathcal{V}}_{i} \in {\mathcal{V}}_{m},{\mathcal{V}}_{j} \in {\mathcal{V}}_{n}, m \neq n, m, n \in \{ \alpha ,\beta \}$. The neighbor set of the $i$th follower is defined as ${\mathcal{N}}_{i} = \left\{ {j \in \mathcal{V} : {a}_{ij} \neq 0}\right\}$. Define $\mathcal{L} = \mathcal{D} - \mathcal{A} \in {\mathbb{R}}^{N \times N}$ as the Laplacian matrix of $\mathcal{G}$, where $\mathcal{D} = \operatorname{diag}\left( {{d}_{1},{d}_{2},\cdots ,{d}_{N}}\right) \in {\mathbb{R}}^{N \times N}$ denotes the degree matrix with ${d}_{i} = \mathop{\sum }\limits_{{j = 1}}^{N}\left| {a}_{ij}\right|$.
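The signed Laplacian defined above can be sketched numerically; the 4-agent adjacency below is an illustrative assumption, not the paper's topology.

```python
import numpy as np

# Signed adjacency for N = 4 followers: agents 1-2 cooperate, 3-4 cooperate,
# and the two groups compete (negative cross-group weights).
A = np.array([[0.0,  1.0, -1.0,  0.0],
              [1.0,  0.0,  0.0, -1.0],
              [-1.0, 0.0,  0.0,  1.0],
              [0.0, -1.0,  1.0,  0.0]])

# Degree matrix D uses absolute weights: d_i = sum_j |a_ij|.
D = np.diag(np.abs(A).sum(axis=1))

# Signed Laplacian L = D - A.
L = D - A
print(L)
```

For a structurally balanced graph, the gauge vector that flips the sign of one group (here `[1, 1, -1, -1]`) lies in the kernel of `L`, which is what makes bipartite consensus (equal magnitude, opposite sign) possible.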
The augmented graph consisting of one leader and the $N$ followers is denoted as $\widetilde{\mathcal{G}} = \{ \widetilde{\mathcal{V}},\widetilde{\varepsilon }\}$, in which $\widetilde{\mathcal{V}} = \left\{ {{\mathcal{V}}_{0},{\mathcal{V}}_{1},{\mathcal{V}}_{2},\cdots ,{\mathcal{V}}_{N}}\right\}$ and $\widetilde{\varepsilon } \subseteq \widetilde{\mathcal{V}} \times \widetilde{\mathcal{V}}$. Let $\mathcal{B} = \operatorname{diag}\left\{ {\left| {b}_{1}\right| ,\left| {b}_{2}\right| ,\cdots ,\left| {b}_{N}\right| }\right\} \in {\mathbb{R}}^{N \times N}$, where ${b}_{i} \neq 0$ indicates that the information of the leader is available to the $i$th node; ${b}_{i} > 0$ represents a cooperative relation and ${b}_{i} < 0$ a competitive relation.

## B. Problem Formulation

Assume that the nonlinear MAS is composed of $N\left( { \geq 2}\right)$ followers and one leader. The dynamics of the $i$th follower are given as

$$
{\dot{x}}_{i} = {f}_{i}\left( {x}_{i}\right) + {g}_{i}\left( {x}_{i}\right) {u}_{i}, i = 1,2,\cdots , N \tag{1}
$$

where ${x}_{i}\left( t\right) \in {\mathbb{R}}^{n}$ denotes the state, ${u}_{i}\left( t\right) \in {\mathbb{R}}^{m}$ is the control input, ${f}_{i}\left( {x}_{i}\right) \in {\mathbb{R}}^{n}$ is the internal dynamics, and ${g}_{i}\left( {x}_{i}\right) \in {\mathbb{R}}^{n \times m}$ is the input dynamics.

Next, the dynamics of the human-manipulated leader are given as

$$
{\dot{x}}_{0}^{h} = {f}_{0}^{h}\left( {x}_{0}^{h}\right) + {u}_{0}^{h}, \tag{2}
$$

where ${x}_{0}^{h}\left( t\right) \in {\mathbb{R}}^{n}$ denotes the state, ${u}_{0}^{h}\left( t\right) \in {\mathbb{R}}^{m}$ is the nonzero control input sent by the human operator to the leader, and ${f}_{0}^{h}\left( {x}_{0}^{h}\right) \in {\mathbb{R}}^{n}$ represents the internal dynamics.

The following assumptions and lemma are imposed.

Assumption 1. [19] The signed graph $\mathcal{G}$ has a directed spanning tree.

Assumption 2. [24] The input of the human operator always keeps the leader (2) stable.

Lemma 1. [25]: An FLS can approximate a continuous nonlinear function $f\left( \mathfrak{x}\right) \in \mathbb{R}$ on a compact set ${\Omega }_{f} \subset {\mathbb{R}}^{n}$ such that

$$
\mathop{\sup }\limits_{{\mathfrak{x} \in {\Omega }_{f}}}\left| {f\left( \mathfrak{x}\right) - {\Theta }^{T}\phi \left( \mathfrak{x}\right) }\right| \leq b \tag{3}
$$

with $b > 0$.
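As a rough illustration of Lemma 1, the sketch below fits an FLS $\Theta^{T}\phi(\mathfrak{x})$ with normalized Gaussian basis functions to a smooth function on a compact set and measures the residual bound $b$; the basis choice, rule count, and least-squares fit are illustrative assumptions, not from the paper.

```python
import numpy as np

# FLS approximation of f(x) = sin(x) on the compact set [-pi, pi].
centers = np.linspace(-np.pi, np.pi, 11)   # rule centers
width = 0.6

def phi(x):
    """Normalized Gaussian fuzzy basis functions."""
    g = np.exp(-((x - centers) ** 2) / (2 * width ** 2))
    return g / g.sum()

X = np.linspace(-np.pi, np.pi, 200)
Phi = np.vstack([phi(x) for x in X])        # 200 x 11 regressor matrix
f = np.sin(X)

# Ideal weights Theta via least squares; b bounds the residual on the set.
Theta, *_ = np.linalg.lstsq(Phi, f, rcond=None)
b = np.max(np.abs(Phi @ Theta - f))
print(b)
```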
## III. Main Results

## A. Prescribed-Time Function and Error Transformation

To achieve prescribed-time (PT) performance for the MASs, the PT performance function $\vartheta \left( t\right)$ is given as

$$
\vartheta \left( t\right) = \left\{ \begin{array}{ll} \iota {e}^{-\beta {\left( \frac{{T}_{r}}{{T}_{r} - t}\right) }^{h}} + {\vartheta }_{{T}_{r}}, & 0 \leq t < {T}_{r} \\ {\vartheta }_{{T}_{r}}, & t \geq {T}_{r} \end{array}\right. \tag{4}
$$

where $h > 0$, $\iota > 0$, $\beta > 0$; $0 < {T}_{r} < \infty$ and $0 < {\vartheta }_{{T}_{r}} < \infty$ represent the user-defined settling time and steady-state tracking accuracy, respectively.
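A minimal sketch of the PT performance function (4), reading the constant in the exponent as the settling time ${T}_{r}$; the parameter values are illustrative, not from the paper.

```python
import numpy as np

def vartheta(t, iota=2.0, beta=1.0, h=2.0, Tr=5.0, theta_Tr=0.05):
    """PT performance envelope: decays to theta_Tr exactly at t = Tr."""
    if t >= Tr:
        return theta_Tr
    return iota * np.exp(-beta * (Tr / (Tr - t)) ** h) + theta_Tr

# The envelope shrinks monotonically and reaches the prescribed accuracy at
# t = Tr, independently of any initial system state.
ts = [0.0, 2.5, 4.9, 5.0, 10.0]
print([round(vartheta(t), 4) for t in ts])
```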
Construct the bipartite consensus error as ${e}_{i} = \mathop{\sum }\limits_{{j = 1}}^{N}\left| {a}_{ij}\right| \left( {{x}_{i} - \operatorname{sign}\left( {a}_{ij}\right) {x}_{j}}\right) + \left| {b}_{i}\right| \left( {{x}_{i} - \operatorname{sign}\left( {b}_{i}\right) {x}_{0}^{h}}\right)$ with ${e}_{i} = {\left\lbrack {e}_{i,1},\cdots ,{e}_{i, n}\right\rbrack }^{T} \in {\mathbb{R}}^{n}$, and adopt the error transformation function

$$
{\varrho }_{i,\imath } = \tan \left( {\frac{\pi }{2}\frac{{e}_{i,\imath }}{\vartheta }}\right) ,\imath = 1,\cdots , n, \tag{5}
$$

where $\left| {{e}_{i,\imath }\left( 0\right) }\right| < \vartheta \left( 0\right)$.

Based on (5), it yields

$$
{e}_{i,\imath } = \frac{2\vartheta }{\pi }\arctan \left( {\varrho }_{i,\imath }\right) ,\imath = 1,\cdots , n, i = 1,\cdots , N. \tag{6}
$$

Remark 1. From (5), the inequality $- \vartheta \leq {e}_{i,\imath } \leq \vartheta ,\forall t \geq 0$ holds. Combined with the definition in (4), it can further be observed that $- {\vartheta }_{{T}_{r}} \leq {e}_{i,\imath } \leq {\vartheta }_{{T}_{r}},\forall t \geq {T}_{r}$ if ${\varrho }_{i,\imath }$ is bounded, which means the PT performance of ${e}_{i}$ can be ensured.
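The transformation pair (5)-(6) can be sketched componentwise as below; the envelope value and sample errors are illustrative.

```python
import math

# Error transformation (5) and its inverse (6): as e approaches the envelope
# +/- vartheta, the transformed error rho blows up, so keeping rho bounded
# keeps e strictly inside the performance funnel.
def transform(e, vt):
    return math.tan(0.5 * math.pi * e / vt)       # (5)

def inverse(rho, vt):
    return (2.0 * vt / math.pi) * math.atan(rho)  # (6)

vt = 0.5
for e in [-0.45, 0.0, 0.3]:
    rho = transform(e, vt)
    assert abs(inverse(rho, vt) - e) < 1e-12      # round trip recovers e
print(transform(0.45, vt) > transform(0.3, vt))   # grows steeply near the boundary
```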
## B. Optimal Control Scheme Design

Define the performance index function as

$$
{J}_{i} = {\int }_{t}^{\infty }\left( {{e}_{i}^{T}{\mathcal{Q}}_{i}{e}_{i} + {u}_{i}^{T}{\mathcal{R}}_{i}{u}_{i}}\right) {d\tau } \tag{7}
$$

$$
 = {\int }_{t}^{\infty }\left( {{\left( \frac{2\vartheta }{\pi }{\mathcal{A}}_{i}\right) }^{T}{\mathcal{Q}}_{i}\left( {\frac{2\vartheta }{\pi }{\mathcal{A}}_{i}}\right) + {u}_{i}^{T}{\mathcal{R}}_{i}{u}_{i}}\right) {d\tau },
$$

where ${\mathcal{Q}}_{i}$ and ${\mathcal{R}}_{i}$ are symmetric positive definite matrices with suitable dimensions, and ${\mathcal{A}}_{i} = {\left\lbrack {\mathcal{A}}_{i,1},\cdots ,{\mathcal{A}}_{i, n}\right\rbrack }^{T} = {\left\lbrack \arctan \left( {\varrho }_{i,1}\right) ,\cdots ,\arctan \left( {\varrho }_{i, n}\right) \right\rbrack }^{T}$.

Taking the time derivative of ${\mathcal{A}}_{i,\imath }$, one has

$$
{\dot{\mathcal{A}}}_{i,\imath } = \frac{1}{1 + {\varrho }_{i,\imath }^{2}}{\chi }_{i,\imath }\left( {{\dot{e}}_{i,\imath } - {\nu }_{i,\imath }}\right) , \tag{8}
$$

where ${\chi }_{i,\imath } = \frac{\pi }{{2\vartheta }{\cos }^{2}\left( {\frac{\pi }{2}\frac{{e}_{i,\imath }}{\vartheta }}\right) }$, ${\nu }_{i,\imath } = \frac{{e}_{i,\imath }\dot{\vartheta }}{\vartheta }$, ${\dot{e}}_{i} = {\Gamma }_{i}\left( {{f}_{i} + {g}_{i}{u}_{i}}\right) - \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\dot{x}}_{j} - {b}_{i}{\dot{x}}_{0}^{h}$ and ${\Gamma }_{i} = {d}_{i} + \left| {b}_{i}\right|$.

Then, define the Hamiltonian function as

$$
{H}_{i}\left( {{\mathcal{A}}_{i},\vartheta ,{u}_{i},\frac{\partial {J}_{i}}{\partial {\mathcal{A}}_{i}},\frac{\partial {J}_{i}}{\partial \vartheta }}\right) = {\left( \frac{2\vartheta }{\pi }{\mathcal{A}}_{i}\right) }^{T}{\mathcal{Q}}_{i}\left( {\frac{2\vartheta }{\pi }{\mathcal{A}}_{i}}\right)
$$

$$
 + {u}_{i}^{T}{\mathcal{R}}_{i}{u}_{i} + \frac{\partial {J}_{i}}{\partial {\mathcal{A}}_{i}}\left\lbrack {{\bar{\chi }}_{i}\left( {{\dot{e}}_{i} - {\nu }_{i}}\right) }\right\rbrack + \frac{\partial {J}_{i}}{\partial \vartheta }\frac{\partial \vartheta }{\partial t} \tag{9}
$$

$$
 = {\left( \frac{2\vartheta }{\pi }{\mathcal{A}}_{i}\right) }^{T}{\mathcal{Q}}_{i}\left( {\frac{2\vartheta }{\pi }{\mathcal{A}}_{i}}\right) + {u}_{i}^{T}{\mathcal{R}}_{i}{u}_{i} + \frac{\partial {J}_{i}}{\partial {\varrho }_{i}}\left\lbrack {{\chi }_{i}\left( {{\dot{e}}_{i} - {\nu }_{i}}\right) }\right\rbrack + \frac{\partial {J}_{i}}{\partial \vartheta }\frac{\partial \vartheta }{\partial t},
$$

where ${\bar{\chi }}_{i} = \operatorname{diag}\left\{ {\frac{{\chi }_{i,1}}{1 + {\varrho }_{i,1}^{2}},\cdots ,\frac{{\chi }_{i, n}}{1 + {\varrho }_{i, n}^{2}}}\right\}$, ${\nu }_{i} = \left\lbrack {{\nu }_{i,1},\cdots ,{\nu }_{i, n}}\right\rbrack$ and ${\chi }_{i} = \operatorname{diag}\left\{ {{\chi }_{i,1},\cdots ,{\chi }_{i, n}}\right\}$.

The corresponding HJB equation is given as

$$
\mathop{\min }\limits_{{u}_{i}}{H}_{i}\left( {{\mathcal{A}}_{i},\vartheta ,{u}_{i}^{ * },\frac{\partial {J}_{i}^{ * }}{\partial {\mathcal{A}}_{i}},\frac{\partial {J}_{i}^{ * }}{\partial \vartheta }}\right) = 0. \tag{10}
$$

Differentiating (10) with respect to ${u}_{i}$, one has

$$
{u}_{i}^{ * } = - \frac{{\Gamma }_{i}}{2}{\mathcal{R}}_{i}^{-1}{g}_{i}^{T}{\chi }_{i}^{T}\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}}. \tag{11}
$$

Substituting (11) into (10) yields

$$
{\left( \frac{2\vartheta }{\pi }{\mathcal{A}}_{i}\right) }^{T}{\mathcal{Q}}_{i}\left( {\frac{2\vartheta }{\pi }{\mathcal{A}}_{i}}\right) + \frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}}\left\lbrack {{\chi }_{i}\left( {{\Gamma }_{i}{f}_{i} - \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\dot{x}}_{j} - {b}_{i}{\dot{x}}_{0}^{h} - {\nu }_{i}}\right) }\right\rbrack
$$

$$
 + \frac{\partial {J}_{i}^{ * }}{\partial \vartheta }\frac{\partial \vartheta }{\partial t} - \frac{{\Gamma }_{i}^{2}}{4}\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}^{T}}{g}_{i}{\chi }_{i}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{T}{g}_{i}^{T}\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}} = 0.
$$

Inspired by [26], $\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}}$ can be decomposed as

$$
\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}} = \frac{2{k}_{i}}{{\Gamma }_{i}}{\chi }_{i}^{-2}{\varrho }_{i} + \frac{2}{{\Gamma }_{i}}{\chi }_{i}^{-2}{\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right) + \frac{1}{{\Gamma }_{i}}{\chi }_{i}^{-2}{\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right) , \tag{12}
$$

where ${k}_{i} > 0$, ${\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right) = {\mathcal{R}}_{i}{\chi }_{i}\left( {{f}_{i}\left( {x}_{i}\right) - {\dot{x}}_{0}^{h} - {o}^{-1}{\nu }_{i}}\right)$ with $o = {\lambda }_{\max }\left( {\mathcal{L} + \mathcal{B}}\right)$, and ${\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right) = - 2{k}_{i}{\varrho }_{i}^{2} - 2{\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right) + {k}_{i}{\chi }_{i}^{2}\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}}.$

Substituting (12) into (11), one has

$$
{u}_{i}^{ * } = - {k}_{i}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}{\varrho }_{i} - {\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}{\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right) - \frac{1}{2}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}{\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right) . \tag{13}
$$

### C. PI Algorithm and FLSs-Based Implementation

Obviously, the solution of the HJB equation cannot be obtained analytically. Therefore, the policy iteration (PI) approach in Algorithm 1 is used to find the optimal result.

Algorithm 1: PI Algorithm for Solving PT Optimal Consensus Control Policy

---

1. Step 1: Initialization. Give initial control protocols ${u}_{i}^{\left( 0\right) },\forall i$.
2. Step 2: Policy evaluation. Solve the cost function ${J}_{i}^{\left( l\right) }$ from ${H}_{i}\left( {{\mathcal{A}}_{i},\vartheta ,{u}_{i}^{\left( l\right) },\frac{\partial {J}_{i}^{\left( l\right) }}{\partial {\mathcal{A}}_{i}},\frac{\partial {J}_{i}^{\left( l\right) }}{\partial \vartheta }}\right) = 0$.
3. Step 3: Policy improvement. Update the control input ${u}_{i}^{\left( l + 1\right) }$ as in Eq. (13).
4. Step 4: If $\begin{Vmatrix}{{J}_{i}^{\left( l + 1\right) } - {J}_{i}^{\left( l\right) }}\end{Vmatrix} \leq \aleph$ with a predefined parameter $\aleph > 0$, stop; otherwise, set $l = l + 1$ and return to Step 2.

---

The convergence and optimality of Algorithm 1 have been proved in [27] and are omitted here.
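The evaluate-improve loop of Algorithm 1 can be sketched on a problem where policy evaluation has a closed form. The scalar LQR setting and all numbers below are illustrative assumptions, not the paper's system.

```python
# Policy iteration on a scalar LQR problem: dx/dt = a*x + b*u,
# running cost q*x^2 + r*u^2, linear policies u = -K*x.
a, b, q, r = 1.0, 1.0, 1.0, 1.0
aleph = 1e-10                 # stopping threshold, as in Step 4

K = 3.0                       # Step 1: initial stabilizing policy u = -K*x
P_prev = float("inf")
for _ in range(100):
    # Step 2: policy evaluation -- solve the Lyapunov equation
    # 2*(a - b*K)*P + q + r*K**2 = 0 for the cost of the current policy.
    P = (q + r * K ** 2) / (2.0 * (b * K - a))
    # Step 3: policy improvement -- minimize the Hamiltonian: K = b*P/r.
    K = b * P / r
    # Step 4: stop once successive cost functions are within aleph.
    if abs(P - P_prev) <= aleph:
        break
    P_prev = P

# P converges to the positive root of the Riccati equation 2aP + q - b^2 P^2/r = 0.
print(round(P, 6), round(K, 6))
```

Each pass evaluates the current policy exactly and then improves it, mirroring Steps 2-4; in the paper the evaluation step is instead approximated by FLSs, as described next.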
In view of the unknown terms ${\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right)$ and ${\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right)$ in (13), FLSs are used to approximate them as

$$
{\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right) = {\omega }_{{\mathcal{F}}_{i}}^{T}{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) + {\epsilon }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) , \tag{14}
$$

$$
{\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right) = {\omega }_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + {\epsilon }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) , \tag{15}
$$

where ${\omega }_{{\mathcal{F}}_{i}} \in {\mathbb{R}}^{{h}_{c1} \times n}$ and ${\omega }_{{\mathcal{J}}_{i}} \in {\mathbb{R}}^{{h}_{c2} \times n}$ represent the ideal weight matrices, with ${h}_{c1}$ and ${h}_{c2}$ being the numbers of fuzzy rules; ${\phi }_{{\mathcal{F}}_{i}} \in {\mathbb{R}}^{{h}_{c1}}$ and ${\phi }_{{\mathcal{J}}_{i}} \in {\mathbb{R}}^{{h}_{c2}}$ are fuzzy basis functions; ${\epsilon }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right)$ and ${\epsilon }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right)$ denote bounded approximation errors.
|
| 256 |
+
|
| 257 |
+
Thus, (13) becomes
$$
{u}_{i}^{ * } = - {k}_{i}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}{\varrho }_{i} - {\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}\left( {{\omega }_{{\mathcal{F}}_{i}}^{T}{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) + {\epsilon }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right) - \frac{1}{2}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}\left( {{\omega }_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + {\epsilon }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right) .
$$
However, since ${\omega }_{{\mathcal{F}}_{i}}$ and ${\omega }_{{\mathcal{J}}_{i}}$ are unknown, the estimated forms of (14) and (15) are

$$
{\widehat{\mathcal{F}}}_{i}\left( {\mathcal{X}}_{i}\right) = {\widehat{\omega }}_{{\mathcal{F}}_{i}}^{T}{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) , \tag{16}
$$

$$
{\widehat{\mathcal{J}}}_{i}\left( {\mathcal{X}}_{i}\right) = {\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) , \tag{17}
$$
where ${\widehat{\omega }}_{{\mathcal{F}}_{i}} \in {\mathbb{R}}^{{h}_{c1} \times n}$ and ${\widehat{\omega }}_{{\mathcal{J}}_{i}} \in {\mathbb{R}}^{{h}_{c2} \times n}$ represent estimated weight matrices.
According to (16) and (17), one has
$$
{\widehat{u}}_{i}^{ * } = - {k}_{i}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}{\varrho }_{i} - {\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}{\widehat{\omega }}_{{\mathcal{F}}_{i}}^{T}{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) - \frac{1}{2}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) . \tag{18}
$$
The updating laws are constructed as
$$
{\dot{\widehat{\omega }}}_{{\mathcal{F}}_{i}} = {\mathcal{C}}_{i}\left( {o{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) {\varrho }_{i}^{T}{\mathcal{R}}_{i}^{-1} - {r}_{{\mathcal{F}}_{i}}{\widehat{\omega }}_{{\mathcal{F}}_{i}}}\right) , \tag{19}
$$

$$
{\dot{\widehat{\omega }}}_{{\mathcal{J}}_{i}} = - {r}_{{\mathcal{J}}_{i}}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}}, \tag{20}
$$
where ${\mathcal{C}}_{i} \in {\mathbb{R}}^{{h}_{c1} \times {h}_{c1}}$ is a positive-definite matrix, and ${r}_{{\mathcal{F}}_{i}} > 0$, ${r}_{{\mathcal{J}}_{i}} > 0$, $r > 0$ are design parameters.
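In an implementation, the continuous-time laws (19)-(20) would typically be discretized; the forward-Euler step below, written with scalarized quantities for readability and purely illustrative numbers, shows their structure:

```python
# One forward-Euler step of the adaptive laws (19)-(20), scalar-ized for
# readability. C, o, r_F, r_J, r_ are design parameters as in the text;
# all values passed in are illustrative, not the paper's.
def euler_step_weights(w_F, w_J, phi_F, phi_J, rho, R_inv, C, o, r_F, r_J, r_, dt):
    # (19): dw_F/dt = C * (o * phi_F * rho^T * R^{-1} - r_F * w_F)
    w_F_new = w_F + dt * C * (o * phi_F * rho * R_inv - r_F * w_F)
    # (20): dw_J/dt = -r_J * (phi_J^T phi_J + r_) * w_J  (sigma-modification-like decay)
    w_J_new = w_J - dt * r_J * (phi_J * phi_J + r_) * w_J
    return w_F_new, w_J_new
```

Note that (20) contains no driving term, so $\widehat{\omega}_{\mathcal{J}_i}$ decays toward zero at a rate set by $r_{\mathcal{J}_i}$ and the basis excitation, which keeps it bounded by construction.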
## IV. STABILITY ANALYSIS
Theorem 1. Consider the MAS consisting of the followers (1) and the leader under Assumptions 1-3. By choosing ${k}_{i} > \frac{3}{4}$ and adopting the optimal control input (18) together with the adaptive laws (19) and (20), the consensus error converges to the prescribed accuracy within the prescribed time.
Proof. Construct the Lyapunov function as

$$
V = \frac{1}{2}{\varrho }^{T}\varrho + \frac{1}{2}\mathop{\sum }\limits_{{i = 1}}^{N}\left( {{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\mathcal{C}}_{i}^{-1}{\widetilde{\omega }}_{{\mathcal{F}}_{i}} + {\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}}\right) , \tag{21}
$$
where $\varrho = {\left\lbrack {\varrho }_{1}^{T},\cdots ,{\varrho }_{N}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{Nn}$, and the estimation errors are defined as ${\widetilde{\omega }}_{{\mathcal{F}}_{i}} = {\omega }_{{\mathcal{F}}_{i}} - {\widehat{\omega }}_{{\mathcal{F}}_{i}}$ and ${\widetilde{\omega }}_{{\mathcal{J}}_{i}} = {\omega }_{{\mathcal{J}}_{i}} - {\widehat{\omega }}_{{\mathcal{J}}_{i}}$. Invoking (5), (19) and (20) yields
$$
\begin{aligned}
\dot{V} &= {\varrho }^{T}\left\lbrack {\chi \left( {\mathcal{L} + \mathcal{B}}\right) \dot{e} - \chi \nu }\right\rbrack - \sum_{i = 1}^{N}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}\left( {o{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) {\varrho }_{i}^{T}{\mathcal{R}}_{i}^{-1} - {r}_{{\mathcal{F}}_{i}}{\widehat{\omega }}_{{\mathcal{F}}_{i}}}\right) + \sum_{i = 1}^{N}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}{r}_{{\mathcal{J}}_{i}}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}} \\
&\leq \sum_{i = 1}^{N}{\varrho }_{i}^{T}o\left( { - {k}_{i}{\mathcal{R}}_{i}^{-1}{\varrho }_{i} - {\mathcal{R}}_{i}^{-1}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) + {\mathcal{R}}_{i}^{-1}{\epsilon }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) - \frac{1}{2}{\mathcal{R}}_{i}^{-1}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right) - \sum_{i = 1}^{N}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}\left( {o{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) {\varrho }_{i}^{T}{\mathcal{R}}_{i}^{-1} - {r}_{{\mathcal{F}}_{i}}{\widehat{\omega }}_{{\mathcal{F}}_{i}}}\right) + \sum_{i = 1}^{N}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}{r}_{{\mathcal{J}}_{i}}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}} \\
&\leq \sum_{i = 1}^{N}{\varrho }_{i}^{T}o\left( { - {k}_{i}{\mathcal{R}}_{i}^{-1}{\varrho }_{i} + {\mathcal{R}}_{i}^{-1}{\epsilon }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) - \frac{{\mathcal{R}}_{i}^{-1}}{2}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right) + \sum_{i = 1}^{N}{r}_{{\mathcal{F}}_{i}}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\widehat{\omega }}_{{\mathcal{F}}_{i}} + \sum_{i = 1}^{N}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}{r}_{{\mathcal{J}}_{i}}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}} .
\end{aligned} \tag{22}
$$
Using Young's inequality, we have
$$
o{\varrho }_{i}^{T}{\mathcal{R}}_{i}^{-1}{\epsilon }_{{\mathcal{F}}_{i}} \leq \frac{o}{2}{\mathcal{R}}_{i}^{-1}{\begin{Vmatrix}{\varrho }_{i}\end{Vmatrix}}^{2} + \frac{o}{2}{\mathcal{R}}_{i}^{-1}{\begin{Vmatrix}{\epsilon }_{{\mathcal{F}}_{i}}\end{Vmatrix}}^{2}, \tag{23}
$$
$$
- \frac{o{\mathcal{R}}_{i}^{-1}}{2}{\varrho }_{i}^{T}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) \leq \frac{o{\mathcal{R}}_{i}^{-1}}{4}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}} + \frac{o{\mathcal{R}}_{i}^{-1}}{4}{\begin{Vmatrix}{\varrho }_{i}\end{Vmatrix}}^{2}, \tag{24}
$$
$$
{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\widehat{\omega }}_{{\mathcal{F}}_{i}} \leq - \frac{1}{2}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\widetilde{\omega }}_{{\mathcal{F}}_{i}} + \frac{1}{2}{\omega }_{{\mathcal{F}}_{i}}^{T}{\omega }_{{\mathcal{F}}_{i}}, \tag{25}
$$
$$
{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}} \leq - \frac{1}{2}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widetilde{\omega }}_{{\mathcal{J}}_{i}} + \frac{1}{2}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}}. \tag{26}
$$
Substituting (23)-(26) into (22), one has
$$
\begin{aligned}
\dot{V} &\leq - \sum_{i = 1}^{N}o{\mathcal{R}}_{i}^{-1}\left( {{k}_{i} - \frac{3}{4}}\right) {\begin{Vmatrix}{\varrho }_{i}\end{Vmatrix}}^{2} - \sum_{i = 1}^{N}\frac{{r}_{{\mathcal{F}}_{i}}}{2}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\widetilde{\omega }}_{{\mathcal{F}}_{i}} - \sum_{i = 1}^{N}\frac{{r}_{{\mathcal{J}}_{i}}}{2}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widetilde{\omega }}_{{\mathcal{J}}_{i}} + \Lambda \\
&\leq - \frac{{\kappa }_{1}}{2}\sum_{i = 1}^{N}{\begin{Vmatrix}{\varrho }_{i}\end{Vmatrix}}^{2} - \frac{{\kappa }_{2}}{2}\sum_{i = 1}^{N}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\mathcal{C}}_{i}^{-1}{\widetilde{\omega }}_{{\mathcal{F}}_{i}} - \frac{{\kappa }_{3}}{2}\sum_{i = 1}^{N}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}{\widetilde{\omega }}_{{\mathcal{J}}_{i}} + \Lambda \\
&\leq - \kappa V + \Lambda ,
\end{aligned} \tag{27}
$$
where $\Lambda = \sum_{i = 1}^{N}\frac{o}{2}{\mathcal{R}}_{i}^{-1}{\begin{Vmatrix}{\epsilon }_{{\mathcal{F}}_{i}}\end{Vmatrix}}^{2} + \sum_{i = 1}^{N}\frac{o{\mathcal{R}}_{i}^{-1}}{4}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}} + \sum_{i = 1}^{N}\frac{{r}_{{\mathcal{F}}_{i}}}{2}{\omega }_{{\mathcal{F}}_{i}}^{T}{\omega }_{{\mathcal{F}}_{i}} + \sum_{i = 1}^{N}\frac{{r}_{{\mathcal{J}}_{i}}}{2}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}}$, ${\kappa }_{1} = \min_{i = 1,\cdots ,N}\left\{ 2o{\mathcal{R}}_{i}^{-1}\left( {{k}_{i} - \frac{3}{4}}\right) \right\}$, ${\kappa }_{2} = \min_{i = 1,\cdots ,N}\left\{ \frac{{r}_{{\mathcal{F}}_{i}}}{{\lambda }_{\max }\left( {\mathcal{C}}_{i}^{-1}\right) }\right\}$, ${\kappa }_{3} = \min_{i = 1,\cdots ,N}\left\{ {r}_{{\mathcal{J}}_{i}}{\lambda }_{\min }\left( {\phi }_{i}\right) \right\}$, $\kappa = \min \left\{ {\kappa }_{1},{\kappa }_{2},{\kappa }_{3}\right\}$, and ${\lambda }_{\min }\left( {\phi }_{i}\right)$ is the minimal eigenvalue of ${\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right)$.
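The bound $\dot{V} \leq -\kappa V + \Lambda$ implies, by the comparison lemma, $V(t) \leq e^{-\kappa t}V(0) + (\Lambda/\kappa)(1 - e^{-\kappa t})$, i.e. ultimate boundedness by $\Lambda/\kappa$. A quick numerical check of this standard fact (with illustrative $\kappa$, $\Lambda$, not the paper's values):

```python
import math

# Simulate the worst case dV/dt = -kappa*V + Lam by forward Euler and compare
# with the closed-form comparison bound; values are illustrative.
kappa, Lam, V0, dt, T = 2.0, 0.5, 10.0, 1e-4, 8.0
v, t = V0, 0.0
while t < T:
    v += dt * (-kappa * v + Lam)   # equality case of (27)
    t += dt
bound = math.exp(-kappa * T) * V0 + (Lam / kappa) * (1 - math.exp(-kappa * T))
# v approaches the ultimate bound Lam / kappa = 0.25 and stays below `bound`
```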
## V. SIMULATION
A nonlinear MAS composed of four single-link robot arms (three followers and one human-commanded leader) is considered to verify the effectiveness of the proposed control scheme. The model of each agent is given as [12]
$$
{J}_{i}{\ddot{q}}_{i} + {D}_{i}{\dot{q}}_{i} + {M}_{i}g{d}_{i}\sin \left( {q}_{i}\right) = {u}_{i}, \quad i = 1,\cdots ,3,
$$
where the physical parameters $g,{M}_{i},{D}_{i},{J}_{i}$ and ${d}_{i}$ can be found in [12]. The human command ${u}_{0}^{h}$ is set as
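The arm dynamics above are straightforward to integrate numerically; the sketch below uses forward Euler with illustrative physical parameters (the paper's actual values are those of [12]):

```python
import math

# Forward-Euler simulation sketch of one single-link arm,
#   J*qdd + D*qd + M*g*d*sin(q) = u,
# with illustrative parameters (not the values of [12]).
J, D, M, g, d = 1.0, 2.0, 1.0, 9.8, 0.5

def simulate(u, q0=0.8, qd0=0.0, dt=1e-3, T=5.0):
    q, qd, t = q0, qd0, 0.0
    while t < T:
        qdd = (u(t) - D * qd - M * g * d * math.sin(q)) / J  # solve for qdd
        q, qd, t = q + dt * qd, qd + dt * qdd, t + dt
    return q, qd

q_end, qd_end = simulate(lambda t: 0.0)  # unforced, damped arm settles near q = 0
```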
$$
{u}_{0}^{h} = \left\{ \begin{array}{ll} {0.3}\sin^{2}\left( t\right) , & 0 \leq t < {15}, \\ 0, & {15} \leq t < {30}, \\ \sin \left( t\right) \cos \left( t\right) , & {30} \leq t \leq {50}. \end{array}\right.
$$
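This piecewise human command translates directly into code; the helper below mirrors the three intervals:

```python
import math

# The human command u_0^h as a piecewise function of time,
# matching the three intervals defined above.
def u0h(t):
    if 0 <= t < 15:
        return 0.3 * math.sin(t) * math.sin(t)
    if 15 <= t < 30:
        return 0.0
    if 30 <= t <= 50:
        return math.sin(t) * math.cos(t)
    raise ValueError("u_0^h is defined on [0, 50]")
```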
The communication graph is shown below

Fig. 1: Communication graph.
From Fig. 1, one obtains
$$
\mathcal{A} = \left\lbrack \begin{matrix} 0 & - 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{matrix}\right\rbrack ,\quad \mathcal{L} = \left\lbrack \begin{matrix} 2 & 1 & - 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{matrix}\right\rbrack ,
$$
$\mathcal{B} = \operatorname{diag}\{ 1,0,0\} .$
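For a signed graph, the Laplacian is built as $\mathcal{L} = \mathcal{C} - \mathcal{A}$ with $\mathcal{C} = \operatorname{diag}\big(\sum_{j}|a_{ij}|\big)$; the sketch below reproduces the matrices of Fig. 1:

```python
# Signed Laplacian L = C - A from the signed adjacency matrix of Fig. 1,
# where C = diag(sum_j |a_ij|).
A = [[0, -1, 1],
     [0,  0, 0],
     [0,  0, 0]]

def signed_laplacian(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = sum(abs(a) for a in A[i])   # weighted in-degree of |a_ij|
        for j in range(n):
            if i != j:
                L[i][j] = -A[i][j]
    return L

Lap = signed_laplacian(A)  # matches the L displayed above
```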
For the PT performance function, select ${\vartheta }_{{T}_{r}} = {0.06}$ and ${T}_{r} = 3\,\mathrm{s}$. The initial state values of the followers and the leader are presented in Table I.
TABLE I: Initial state values of followers and leader.
<table><tr><td>State</td><td>$i = 0$</td><td>$i = 1$</td><td>$i = 2$</td><td>$i = 3$</td></tr><tr><td>${x}_{i,1}\left( 0\right)$</td><td>1</td><td>0.8</td><td>0.5</td><td>0.8</td></tr><tr><td>${x}_{i,2}\left( 0\right)$</td><td>-1</td><td>0.8</td><td>-0.5</td><td>-0.8</td></tr></table>
For the unknown term ${\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right)$, the FLS input is ${\mathcal{X}}_{i} = {\left\lbrack {x}_{i},{x}_{0}^{h},{\dot{x}}_{0}^{h},\vartheta ,\dot{\vartheta }\right\rbrack }^{T}$, defined over $\left\lbrack {-6,6}\right\rbrack$. The centers ${\mathcal{X}}_{i}^{0}$ are distributed over $\left\lbrack {-6,6}\right\rbrack$ in each of the five input dimensions, and the Gaussian fuzzy basis functions are chosen as ${\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) = \exp \left( {-\frac{{\left( {\mathcal{X}}_{i} - {\mathcal{X}}_{i}^{0}\right) }^{T}\left( {{\mathcal{X}}_{i} - {\mathcal{X}}_{i}^{0}}\right) }{2}}\right)$.
For the unknown term ${\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right)$, the FLS input is ${\mathcal{X}}_{i} = {\left\lbrack {x}_{i},{\varrho }_{i},{x}_{0}^{h},{\dot{x}}_{0}^{h},\vartheta ,\dot{\vartheta }\right\rbrack }^{T}$, defined over $\left\lbrack {-6,6}\right\rbrack$. The centers ${\mathcal{X}}_{i}^{0}$ are distributed over $\left\lbrack {-6,6}\right\rbrack$ in each of the six input dimensions, and ${\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) = \exp \left( {-\frac{{\left( {\mathcal{X}}_{i} - {\mathcal{X}}_{i}^{0}\right) }^{T}\left( {{\mathcal{X}}_{i} - {\mathcal{X}}_{i}^{0}}\right) }{2}}\right)$.
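Evaluating such a Gaussian basis vector and the resulting FLS output of (16)-(17) is a few lines of code; the centers and weights below are illustrative, not the simulation's design values:

```python
import math

# Minimal FLS sketch: Gaussian basis phi_j(X) = exp(-||X - c_j||^2 / 2)
# and a linear-in-weights output y = W^T phi, as in (16)-(17).
def fls_output(X, centers, W):
    phi = [math.exp(-sum((x - c) ** 2 for x, c in zip(X, cj)) / 2.0)
           for cj in centers]
    # W is h x n (h rules, n outputs): y_k = sum_j W[j][k] * phi_j
    return [sum(W[j][k] * phi[j] for j in range(len(phi)))
            for k in range(len(W[0]))]

centers = [[-1.0, -1.0], [0.0, 0.0], [1.0, 1.0]]   # h = 3 rules, X in R^2
W = [[0.5, -0.2], [1.0, 0.3], [0.5, -0.2]]          # illustrative 3 x 2 weights
y = fls_output([0.0, 0.0], centers, W)
```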
For the updating laws (19) and (20), ${\widehat{\omega }}_{{\mathcal{F}}_{1}}\left( 0\right) = {\widehat{\omega }}_{{\mathcal{F}}_{2}}\left( 0\right) = {\widehat{\omega }}_{{\mathcal{F}}_{3}}\left( 0\right) = {\left\lbrack {0.1}\right\rbrack }_{{12} \times 2}$, ${\widehat{\omega }}_{{\mathcal{J}}_{1}}\left( 0\right) = {\widehat{\omega }}_{{\mathcal{J}}_{2}}\left( 0\right) = {\widehat{\omega }}_{{\mathcal{J}}_{3}}\left( 0\right) = {\left\lbrack {0.92}\right\rbrack }_{{12} \times 2}$, ${\mathcal{C}}_{1} = \operatorname{diag}\{ {0.5},\cdots ,{0.5}\}$, ${\mathcal{C}}_{2} = \operatorname{diag}\{ {0.7},\cdots ,{0.7}\}$, ${\mathcal{C}}_{3} = \operatorname{diag}\{ {0.3},\cdots ,{0.3}\} \in {\mathbb{R}}^{{12} \times {12}}$,
${\mathcal{R}}_{i} = \operatorname{diag}\{ {0.8},{0.8}\} ,{r}_{{\mathcal{F}}_{i}} = 2,{k}_{i} = {45},{r}_{{\mathcal{J}}_{i}} = 1.$


Fig. 2: Curves of ${\widetilde{x}}_{i,1},{x}_{0,1}^{h}$ and $- {x}_{0,1}^{h}$.



Fig. 3: Curves of ${\widetilde{x}}_{i,2},{x}_{0,2}^{h}$ and $- {x}_{0,2}^{h}$.



Fig. 4: Curves of errors and performance bounds.



Fig. 5: Curves of optimal control input.



Fig. 6: Curves of $\begin{Vmatrix}{\omega }_{{\mathcal{F}}_{i}}\end{Vmatrix}$.
From Fig. 2 and Fig. 3, bipartite consensus is achieved: the leader and followers 1 and 2 converge to one group, while follower 3 converges to the other group with the opposite sign. Fig. 4 shows the bipartite consensus errors and the PT performance bounds; the consensus error reaches the given accuracy 0.06 within the prescribed time $3\,\mathrm{s}$. The optimal control input of each agent is depicted in Fig. 5, where ${u}_{i}$ rapidly converges to a small region around zero. The norms of the updating weights for the unknown terms ${\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right)$ are given in Fig. 6.
## VI. CONCLUSION
In this article, the performance-based HiTL optimal bipartite consensus control problem for nonlinear MASs has been studied. First, the MASs are monitored by a human operator who sends command signals to the non-autonomous leader, so as to respond to emergencies and guarantee the safety of the MASs. Then, under the joint design architecture of a prescribed-time performance function and an error transformation, a novel performance index function has been developed to achieve optimal bipartite consensus within a prescribed time. Subsequently, RL has been utilized to learn the solution of the HJB equation, in which FLSs are employed to implement the algorithm. The validity of the designed control scheme has been confirmed by simulation.
## REFERENCES
[1] M. Qian, Z. Wu, and B. Jiang, "Cerebellar model articulation neural network-based distributed fault tolerant tracking control with obstacle avoidance for fixed-wing UAVs," IEEE Transactions on Aerospace and Electronic Systems, vol. 59, no. 5, pp. 6841-6852, 2023.

[2] S. Liu, B. Jiang, Z. Mao, and Y. Zhang, "Decentralized adaptive event-triggered fault-tolerant synchronization tracking control of multiple UAVs and UGVs with prescribed performance," IEEE Transactions on Vehicular Technology, vol. 73, no. 7, pp. 9656-9665, 2024.

[3] C. Altafini, "Consensus problems on networks with antagonistic interactions," IEEE Transactions on Automatic Control, vol. 58, no. 4, pp. 935-946, 2013.

[4] B. Ning, Q. Han, and Z. Zuo, "Bipartite consensus tracking for second-order multiagent systems: A time-varying function-based preset-time approach," IEEE Transactions on Automatic Control, vol. 66, no. 6, pp. 2739-2745, 2021.

[5] S. Miao and H. Su, "Bipartite consensus for second-order multiagent systems with matrix-weighted signed network," IEEE Transactions on Cybernetics, vol. 52, no. 12, pp. 13038-13047, 2022.

[6] Y. Zhou, Y. Liu, Y. Zhao, M. Cao, and G. Chen, "Fully distributed prescribed-time bipartite synchronization of general linear systems: An adaptive gain scheduling strategy," Automatica, vol. 161, p. 111459, 2024.

[7] L. Feng, C. Wiltsche, L. Humphrey, and U. Topcu, "Synthesis of human-in-the-loop control protocols for autonomous systems," IEEE Transactions on Automation Science and Engineering, vol. 13, no. 2, pp. 450-462, 2016.

[8] B. Kiumarsi and T. Basar, "Human-in-the-loop control of distributed multi-agent systems: A relative input-output approach," in 2018 IEEE Conference on Decision and Control (CDC), 2018, pp. 3343-3348.

[9] L. Ma and F. Zhu, "Human-in-the-loop formation control for multi-agent systems with asynchronous edge-based event-triggered communications," Automatica, doi: 10.1016/j.automatica.2024.111744.

[10] G. Lin, H. Li, H. Ma, D. Yao, and R. Lu, "Human-in-the-loop consensus control for nonlinear multi-agent systems with actuator faults," IEEE/CAA Journal of Automatica Sinica, vol. 9, no. 1, pp. 111-122, 2022.

[11] J. Chen, J. Xie, J. Li, and W. Chen, "Human-in-the-loop fuzzy iterative learning control of consensus for unknown mixed-order nonlinear multi-agent systems," IEEE Transactions on Fuzzy Systems, vol. 32, no. 1, pp. 255-265, 2023.

[12] G. Lin, H. Li, H. Ma, and Q. Zhou, "Distributed containment control for human-in-the-loop MASs with unknown time-varying parameters," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 69, no. 12, pp. 5300-5311, 2022.

[13] P.-M. Liu, X.-G. Guo, J.-L. Wang, D. Coutinho, and Z.-G. Wu, "Preset-time and preset-accuracy human-in-the-loop cluster consensus control for MASs under stochastic actuation attacks," IEEE Transactions on Automatic Control, vol. 69, no. 3, pp. 1675-1688, 2024.

[14] H. Guo, M. Chen, Y. Jiang, and M. Lungu, "Distributed adaptive human-in-the-loop event-triggered formation control for QUAVs with quantized communication," IEEE Transactions on Industrial Informatics, vol. 19, no. 6, pp. 7572-7582, 2023.

[15] L. Chen, H. Liang, Y. Pan, and T. Li, "Human-in-the-loop consensus tracking control for UAV systems via an improved prescribed performance approach," IEEE Transactions on Aerospace and Electronic Systems, vol. 59, no. 6, pp. 8380-8391, 2023.

[16] P. J. Werbos, "Reinforcement learning and approximate dynamic programming (RLADP)-foundations, common misconceptions, and the challenges ahead," Reinforcement Learning and Approximate Dynamic Programming for Feedback Control, pp. 1-30, 2012.

[17] J. J. Murray, C. J. Cox, G. G. Lendaris, and R. Saeks, "Adaptive dynamic programming," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 32, no. 2, pp. 140-153, 2002.

[18] M. Abu-Khalaf and F. L. Lewis, "Nearly optimal control laws for nonlinear systems with saturating actuators using a neural network HJB approach," Automatica, vol. 41, no. 5, pp. 779-791, 2005.

[19] T. Li, W. Bai, Q. Liu, Y. Long, and C. L. P. Chen, "Distributed fault-tolerant containment control protocols for the discrete-time multiagent systems via reinforcement learning method," IEEE Transactions on Neural Networks and Learning Systems, vol. 34, no. 8, pp. 3979-3991, 2023.

[20] Q. Liu, H. Yan, M. Wang, Z. Li, and S. Liu, "Data-driven optimal bipartite consensus control for second-order multiagent systems via policy gradient reinforcement learning," IEEE Transactions on Cybernetics, vol. 54, no. 6, pp. 3468-3478, 2024.

[21] Y. Song, Y. Wang, J. Holloway, and M. Krstic, "Time-varying feedback for regulation of normal-form nonlinear systems in prescribed finite time," Automatica, vol. 83, pp. 243-251, 2017.

[22] Y. Wang and Y. Song, "A general approach to precise tracking of nonlinear systems subject to non-vanishing uncertainties," Automatica, vol. 106, pp. 306-314, 2019.

[23] Y. Cao, J. Cao, and Y. Song, "Practical prescribed time tracking control over infinite time interval involving mismatched uncertainties and nonvanishing disturbances," Automatica, vol. 136, p. 110050, 2022.

[24] G. Lin, H. Li, C. K. Ahn, and D. Yao, "Event-based finite-time neural control for human-in-the-loop UAV attitude systems," IEEE Transactions on Neural Networks and Learning Systems, vol. 34, no. 12, pp. 10387-10397, 2023.

[25] L.-X. Wang, "Stable adaptive fuzzy control of nonlinear systems," IEEE Transactions on Fuzzy Systems, vol. 1, no. 2, pp. 146-155, 1993.

[26] Y. Zhang, M. Chadli, and Z. Xiang, "Prescribed-time formation control for a class of multiagent systems via fuzzy reinforcement learning," IEEE Transactions on Fuzzy Systems, vol. 31, no. 12, pp. 4195-4204, 2023.

[27] M. Abu-Khalaf and F. L. Lewis, "Nearly optimal control laws for nonlinear systems with saturating actuators using a neural network HJB approach," Automatica, vol. 41, no. 5, pp. 779-791, 2005.
papers/IEEE/IEEE ICIST/IEEE ICIST 2024/IEEE ICIST 2024 Conference/IuP6BhQcDi/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,471 @@
§ PERFORMANCE-BASED HUMAN-IN-THE-LOOP OPTIMAL BIPARTITE CONSENSUS CONTROL FOR MULTI-AGENT SYSTEMS VIA REINFORCEMENT LEARNING
Zongsheng Huang

School of Automation Engineering

University of Electronic Science and Technology of China, Chengdu 611731, China

zs_Huang@163.com

Tieshan Li

School of Automation Engineering

University of Electronic Science and Technology of China, Chengdu 611731, China

tieshanli@126.com

Yue Long

School of Automation Engineering

University of Electronic Science and Technology of China, Chengdu 611731, China

longyue@uestc.edu.cn

Hanqing Yang

School of Automation Engineering

University of Electronic Science and Technology of China, Chengdu 611731, China

hqyang5517@uestc.edu.cn
Abstract-This paper investigates the performance-based human-in-the-loop (HiTL) optimal bipartite consensus control problem for nonlinear multi-agent systems (MASs) under a signed topology. First, to respond to emergencies and guarantee the safety of the MASs, the MASs are monitored by a human operator who sends command signals to the non-autonomous leader. Then, under the joint design architecture of a prescribed-time performance function and an error transformation, a novel performance index function involving the transformed error and the control input is developed to achieve optimal bipartite consensus within a prescribed time. Subsequently, the reinforcement learning (RL) method is utilized to learn the solution of the Hamilton-Jacobi-Bellman (HJB) equation, in which fuzzy logic systems (FLSs) are employed to implement the method. Finally, simulation results depict the effectiveness of the constructed control scheme.
Index Terms-Human-in-the-loop control, prescribed-time control, reinforcement learning, nonlinear multi-agent systems.
§ I. INTRODUCTION
In recent years, with the rapid development of multiple unmanned aerial vehicles (UAVs) [1], multiple unmanned ground vehicles (UGVs) [2], and related fields, multi-agent systems (MASs) have attracted increasing attention. As one of the central problems in the control of MASs, consensus control has been widely studied. Bipartite consensus, a branch of consensus control, was first introduced in [3] and takes both competitive and cooperative relationships between agents into consideration. Under bipartite consensus, the agents eventually converge to two states of equal magnitude but opposite sign. In [4]-[6], various bipartite consensus control strategies have been designed.
Notably, the MASs mentioned above are fully autonomous. However, incidents with Boeing 737 jetliners and Tesla's autonomous driving systems have raised serious concerns and highlighted the challenges that fully autonomous MASs face in making judgments in uncertain and complex environments. Therefore, it is urgent to develop monitoring schemes that allow tasks to be completed when MASs encounter unexpected situations [7]. Fortunately, the human-in-the-loop (HiTL) control approach was introduced into MASs to supervise the entire system and respond to sudden changes by sending commands to the leader agent [8]. Later, many studies on HiTL control for MASs emerged in [9]-[15]. In [9], an HiTL formation tracking control scheme together with an edge-based event-driven mechanism was constructed for MASs. Considering stochastic actuation attacks, in [13], the prescribed-time and prescribed-accuracy HiTL cluster consensus control problem was solved. In view of its ability to deal with emergencies, the HiTL control approach has also been favored for multi-UAV systems in [14], [15].
Optimal control, a widely used control method, has garnered significant attention. For nonlinear systems, the optimal solution is derived from the Hamilton-Jacobi-Bellman (HJB) equation. However, obtaining an analytical solution of the HJB equation is generally infeasible. To overcome this challenge, reinforcement learning (RL), motivated by animal behaviors, was proposed as a powerful tool [16]. The core idea of RL is to approximate the solution of the HJB equation using a function approximation structure. The value iteration algorithm, one of the valuable algorithms in RL, was developed by Murray et al. in [17], in which the convergence analysis was also detailed. In [18], the policy iteration algorithm, another equally important algorithm, was designed to obtain the optimal saturated controller for nonlinear systems. Based on this previous work, the RL method has been used to solve optimal control problems for MASs. In [19], an optimal control protocol based on RL was designed to achieve containment control without prior knowledge of the system dynamics. For unknown discrete-time MASs, in [20], the optimal bipartite consensus control problem was solved. Nevertheless, the above results only conclude that the optimal controller is globally asymptotically stable. It is important to note that achieving a specified accuracy within a given time is crucial in many fields.
This work was supported in part by the National Natural Science Foundation of China under Grant 51939001, Grant 62273072, and Grant 62203088, and in part by the Natural Science Foundation of Sichuan Province under Grant 2022NSFSC0903. (Corresponding author: Tieshan Li)
Fortunately, prescribed-time control (PTC) was first proposed by Song et al. [21]. PTC is distinguished from finite-time and fixed-time control in that the preset settling time is not related to the initial values of the system. Building on [21], in [22] the convergence rate can be predetermined as needed, and a general method for constructing the time-varying rate function was provided. In [23], a novel time-varying constraint function was devised to guarantee that the system remains operational beyond the prescribed time, leading to a global result. In particular, a PTC-based HiTL control scheme was developed in [13] to realize cluster consensus within a given time. However, to the best of the authors' knowledge, a bipartite consensus control scheme considering both optimal performance and prescribed-time performance under the HiTL control framework has not been fully explored, which motivates our research.
Driven by these observations, this paper investigates the performance-based HiTL optimal bipartite consensus control problem. The main contributions are summarized below.
(1) Unlike the autonomous leaders described in [4]-[6], which lacked intelligent decision-making, this paper aims to improve the security, stability, and emergency response capability of the system by designing the leader of the MASs to be non-autonomous, with a time-varying control input governed by a human operator.
(2) Compared with the existing optimal results for MASs in [19], [20], to realize both optimal performance and prescribed-time performance, a unified design framework combining PTC and the RL method is proposed, in which the settling time and accuracy can be preset independently of the initial values.
The structure is given below. In Section II, the considered system and some assumptions are given. In Section III, the main results, including the PTC performance function and the optimal controller, are designed. In Section IV, the convergence analysis is provided. The simulation results are given in Section V. Finally, the conclusion is presented in Section VI.
§ II. PROBLEM FORMULATION AND PRELIMINARIES

§ A. SIGNED COMMUNICATION TOPOLOGIES
The structurally balanced bipartition communication topology containing $N$ followers is represented by a directed graph $\mathcal{G} = \{ \mathcal{V},\varepsilon ,\mathcal{A}\}$ , where $\mathcal{V} = \left\{ {{\mathcal{V}}_{1},{\mathcal{V}}_{2},\cdots ,{\mathcal{V}}_{N}}\right\}$ represents the vertex set, which is divided into the cooperative set ${\mathcal{V}}_{\alpha }$ and the competitive set ${\mathcal{V}}_{\beta }$ such that ${\mathcal{V}}_{\alpha } \cap {\mathcal{V}}_{\beta } = \varnothing$ and ${\mathcal{V}}_{\alpha } \cup {\mathcal{V}}_{\beta } = \mathcal{V}$ . $\varepsilon \subseteq \mathcal{V} \times \mathcal{V}$ represents the edge set of the $N$ followers. Let $\mathcal{A} = \left\lbrack {a}_{ij}\right\rbrack \in {\mathbb{R}}^{N \times N}$ be the signed weight matrix, where ${a}_{ij} > 0$ if $\left( {{\mathcal{V}}_{i},{\mathcal{V}}_{j}}\right) \in {\mathcal{V}}_{m},m \in \{ \alpha ,\beta \}$ and ${a}_{ij} < 0$ if ${\mathcal{V}}_{i} \in {\mathcal{V}}_{m},{\mathcal{V}}_{j} \in {\mathcal{V}}_{n},m \neq n,m,n \in \{ \alpha ,\beta \}$ . The neighbor set of the $i$ th follower is defined as ${\mathcal{N}}_{i} = \left\{ {j \in \mathcal{V} : {a}_{ij} \neq 0}\right\}$ . Define $\mathcal{L} = \mathcal{D} - \mathcal{A} \in {\mathbb{R}}^{N \times N}$ as the Laplacian matrix of $\mathcal{G}$ , where $\mathcal{D} = \operatorname{diag}\left( {{d}_{1},{d}_{2},\cdots ,{d}_{N}}\right) \in {\mathbb{R}}^{N \times N}$ denotes the degree matrix with ${d}_{i} = \mathop{\sum }\limits_{{j = 1}}^{N}\left| {a}_{ij}\right|$ .
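As a quick numeric illustration of these definitions, the following sketch builds $\mathcal{D}$ and $\mathcal{L} = \mathcal{D} - \mathcal{A}$ for a hypothetical three-agent signed adjacency matrix (the example matrix is not from the paper):

```python
import numpy as np

# Sketch of the signed-graph quantities of Section II-A: degree matrix D with
# d_i = sum_j |a_ij|, and Laplacian L = D - A. The adjacency matrix below is a
# hypothetical example (agent 1 cooperates with agent 2, competes with agent 3).
def signed_laplacian(A):
    D = np.diag(np.abs(A).sum(axis=1))  # degrees use absolute weights |a_ij|
    return D, D - A

A = np.array([[0.0,  1.0, -1.0],
              [1.0,  0.0,  0.0],
              [-1.0, 0.0,  0.0]])
D, L = signed_laplacian(A)
```

Note that, unlike the unsigned case, the rows of a signed Laplacian generally do not sum to zero, which is what allows agreement in magnitude with opposite signs.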
The augmented graph consisting of one leader and $N$ followers is denoted as $\widetilde{\mathcal{G}} = \{ \widetilde{\mathcal{V}},\widetilde{\varepsilon }\}$ , in which $\widetilde{\mathcal{V}} = \left\{ {{\mathcal{V}}_{0},{\mathcal{V}}_{1},{\mathcal{V}}_{2},\cdots ,{\mathcal{V}}_{N}}\right\}$ and $\widetilde{\varepsilon } \subseteq \widetilde{\mathcal{V}} \times \widetilde{\mathcal{V}}$ . Let $\mathcal{B} = \operatorname{diag}\left\{ {\left| {b}_{1}\right| ,\left| {b}_{2}\right| ,\cdots ,\left| {b}_{N}\right| }\right\} \in {\mathbb{R}}^{N \times N}$ , where ${b}_{i} \neq 0$ indicates that the information of the leader is available to the $i$ th node; ${b}_{i} > 0$ represents a cooperative relation and ${b}_{i} < 0$ a competitive relation.
§ B. PROBLEM FORMULATION

Assume that the nonlinear MAS is composed of $N\left( { \geq 2}\right)$ followers and one leader. The dynamics model of the $i$ th follower is provided as
$$
{\dot{x}}_{i} = {f}_{i}\left( {x}_{i}\right) + {g}_{i}\left( {x}_{i}\right) {u}_{i},\quad i = 1,2,\cdots ,N \tag{1}
$$

where ${x}_{i}\left( t\right) \in {\mathbb{R}}^{n}$ denotes the state, ${u}_{i}\left( t\right) \in {\mathbb{R}}^{m}$ is the control input, ${f}_{i}\left( {x}_{i}\right) \in {\mathbb{R}}^{n}$ is the internal dynamics, and ${g}_{i}\left( {x}_{i}\right) \in {\mathbb{R}}^{n \times m}$ is the input dynamics.
Next, the dynamics of the human-manipulated leader are given as

$$
{\dot{x}}_{0}^{h} = {f}_{0}^{h}\left( {x}_{0}^{h}\right) + {u}_{0}^{h}, \tag{2}
$$

where ${x}_{0}^{h}\left( t\right) \in {\mathbb{R}}^{n}$ denotes the state, ${u}_{0}^{h}\left( t\right) \in {\mathbb{R}}^{n}$ is the nonzero control input sent by the human operator to the leader, and ${f}_{0}^{h}\left( {x}_{0}^{h}\right) \in {\mathbb{R}}^{n}$ represents the internal dynamics.
The following assumptions and lemma are imposed.

Assumption 1. [19] The signed graph $\mathcal{G}$ has a directed spanning tree.

Assumption 2. [24] The input of the human operator always keeps the leader (2) stable.

Lemma 1. [25]: The FLS can approximate a nonlinear continuous function $f\left( \mathfrak{x}\right) \in \mathbb{R}$ on a compact set ${\Omega }_{f} \subset {\mathbb{R}}^{n}$ as
$$
\mathop{\sup }\limits_{{\mathfrak{x} \in {\Omega }_{f}}}\left| {f\left( \mathfrak{x}\right) - {\Theta }^{T}\phi \left( \mathfrak{x}\right) }\right| \leq b \tag{3}
$$

with $b > 0$ .
§ III. MAIN RESULTS

§ A. PRESCRIBED-TIME FUNCTION AND ERROR TRANSFORMATION

To achieve prescribed-time (PT) performance for MASs, the PT performance function $\vartheta \left( t\right)$ is given as
$$
\vartheta \left( t\right) = \left\{ \begin{array}{ll} \iota {e}^{-\beta {\left( \frac{{T}_{r}}{{T}_{r} - t}\right) }^{h}} + {\vartheta }_{{T}_{r}}, & 0 \leq t < {T}_{r} \\ {\vartheta }_{{T}_{r}}, & t \geq {T}_{r} \end{array}\right. \tag{4}
$$

where $h > 0,\iota > 0,\beta > 0$ are design parameters, and ${T}_{r}$ with $0 < {T}_{r} < \infty$ and ${\vartheta }_{{T}_{r}}$ with $0 < {\vartheta }_{{T}_{r}} < \infty$ represent the user-defined settling time and steady-state tracking accuracy, respectively.
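A small numeric sketch of the performance function (4) follows; ${T}_{r} = 3$ and ${\vartheta }_{{T}_{r}} = {0.06}$ match the later simulation section, while $\iota$, $\beta$, and $h$ are illustrative values only:

```python
import math

# Sketch of the prescribed-time performance function (4): the bound decays
# toward the steady-state accuracy v_Tr and equals it exactly for t >= T_r,
# independently of initial conditions. Parameter values are illustrative.
def vartheta(t, iota=1.0, beta=1.0, h=2.0, T_r=3.0, v_Tr=0.06):
    if t >= T_r:
        return v_Tr
    return iota * math.exp(-beta * (T_r / (T_r - t)) ** h) + v_Tr
```

Because the exponent blows up as $t \rightarrow {T}_{r}$, the function meets ${\vartheta }_{{T}_{r}}$ at the settling time regardless of $\iota$ and $\beta$.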
Construct the bipartite consensus error as ${e}_{i} = \mathop{\sum }\limits_{{j = 1}}^{N}\left| {a}_{ij}\right| \left( {{x}_{i} - \operatorname{sign}\left( {a}_{ij}\right) {x}_{j}}\right) + \left| {b}_{i}\right| \left( {{x}_{i} - \operatorname{sign}\left( {b}_{i}\right) {x}_{0}^{h}}\right)$ with ${e}_{i} = {\left\lbrack {e}_{i,1},\cdots ,{e}_{i,n}\right\rbrack }^{T} \in {\mathbb{R}}^{n}$ , and adopt the error transformation function

$$
{\varrho }_{i,\imath } = \tan \left( {\frac{\pi }{2}\frac{{e}_{i,\imath }}{\vartheta }}\right) ,\imath = 1,\cdots ,n, \tag{5}
$$

where $\left| {{e}_{i,\imath }\left( 0\right) }\right| < \vartheta \left( 0\right)$ .
Based on (5), it yields

$$
{e}_{i,\imath } = \frac{2\vartheta }{\pi }\arctan \left( {\varrho }_{i,\imath }\right) ,\imath = 1,\cdots ,n,i = 1,\cdots ,N. \tag{6}
$$

Remark 1. From (5), the inequality $- \vartheta \leq {e}_{i,\imath } \leq \vartheta ,\forall t \geq 0$ holds. Combined with the definition in (4), it is further observed that $- {\vartheta }_{{T}_{r}} \leq {e}_{i,\imath } \leq {\vartheta }_{{T}_{r}},\forall t \geq {T}_{r}$ if ${\varrho }_{i,\imath }$ is bounded, which means that the PT performance of ${e}_{i}$ can be ensured.
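The transformation pair (5)-(6) can be checked numerically; the sketch below (with an arbitrary bound value, not from the paper) verifies that the map and its inverse round-trip, and that errors near the bound map to large transformed values:

```python
import math

# Numeric check of the error transformation (5) and its inverse (6):
# e -> rho = tan(pi/2 * e / vartheta) maps (-vartheta, vartheta) onto the
# whole real line, so any *bounded* rho forces |e| < vartheta.
def transform(e, v):
    return math.tan(0.5 * math.pi * e / v)

def inverse(rho, v):
    return (2.0 * v / math.pi) * math.atan(rho)

v = 0.5  # illustrative bound vartheta
for e in (-0.49, -0.1, 0.0, 0.3, 0.49):
    assert abs(inverse(transform(e, v), v) - e) < 1e-9
```

This is the mechanism behind Remark 1: keeping $\varrho_{i,\imath}$ bounded automatically confines $e_{i,\imath}$ inside the shrinking funnel $\pm\vartheta(t)$.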
§ B. OPTIMAL CONTROL SCHEME DESIGN

Define the performance index function as

$$
{J}_{i} = {\int }_{t}^{\infty }\left( {{e}_{i}^{T}{\mathcal{Q}}_{i}{e}_{i} + {u}_{i}^{T}{\mathcal{R}}_{i}{u}_{i}}\right) {d\tau } \tag{7}
$$

$$
= {\int }_{t}^{\infty }\left( {{\left( \frac{2\vartheta }{\pi }{\mathcal{A}}_{i}\right) }^{T}{\mathcal{Q}}_{i}\left( {\frac{2\vartheta }{\pi }{\mathcal{A}}_{i}}\right) + {u}_{i}^{T}{\mathcal{R}}_{i}{u}_{i}}\right) {d\tau },
$$

where ${\mathcal{Q}}_{i}$ and ${\mathcal{R}}_{i}$ are symmetric positive definite matrices with suitable dimensions, and ${\mathcal{A}}_{i} = {\left\lbrack {\mathcal{A}}_{i,1},\cdots ,{\mathcal{A}}_{i,n}\right\rbrack }^{T} = {\left\lbrack \arctan \left( {\varrho }_{i,1}\right) ,\cdots ,\arctan \left( {\varrho }_{i,n}\right) \right\rbrack }^{T}$ .
Taking the time derivative of ${\mathcal{A}}_{i,\imath }$ , one has

$$
{\dot{\mathcal{A}}}_{i,\imath } = \frac{1}{1 + {\varrho }_{i,\imath }^{2}}{\chi }_{i,\imath }\left( {{\dot{e}}_{i,\imath } - {\nu }_{i,\imath }}\right) , \tag{8}
$$

where ${\chi }_{i,\imath } = \frac{\pi }{{2\vartheta }{\cos }^{2}\left( {\frac{\pi }{2}\frac{{e}_{i,\imath }}{\vartheta }}\right) },{\nu }_{i,\imath } = \frac{{e}_{i,\imath }\dot{\vartheta }}{\vartheta },{\dot{e}}_{i} = {\Gamma }_{i}\left( {{f}_{i} + {g}_{i}{u}_{i}}\right) - \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\dot{x}}_{j} - {b}_{i}{\dot{x}}_{0}^{h}$ and ${\Gamma }_{i} = {d}_{i} + \left| {b}_{i}\right|$ .
Then, define the Hamiltonian function as

$$
{H}_{i}\left( {{\mathcal{A}}_{i},\vartheta ,{u}_{i},\frac{\partial {J}_{i}}{\partial {\mathcal{A}}_{i}},\frac{\partial {J}_{i}}{\partial \vartheta }}\right) = {\left( \frac{2\vartheta }{\pi }{\mathcal{A}}_{i}\right) }^{T}{\mathcal{Q}}_{i}\left( {\frac{2\vartheta }{\pi }{\mathcal{A}}_{i}}\right)
$$

$$
+ {u}_{i}^{T}{\mathcal{R}}_{i}{u}_{i} + \frac{\partial {J}_{i}}{\partial {\mathcal{A}}_{i}}\left\lbrack {{\bar{\chi }}_{i}\left( {{\dot{e}}_{i} - {\nu }_{i}}\right) }\right\rbrack + \frac{\partial {J}_{i}}{\partial \vartheta }\frac{\partial \vartheta }{\partial t} \tag{9}
$$

$$
= {\left( \frac{2\vartheta }{\pi }{\mathcal{A}}_{i}\right) }^{T}{\mathcal{Q}}_{i}\left( {\frac{2\vartheta }{\pi }{\mathcal{A}}_{i}}\right) + {u}_{i}^{T}{\mathcal{R}}_{i}{u}_{i} + \frac{\partial {J}_{i}}{\partial {\varrho }_{i}}\left\lbrack {{\chi }_{i}\left( {{\dot{e}}_{i} - {\nu }_{i}}\right) }\right\rbrack + \frac{\partial {J}_{i}}{\partial \vartheta }\frac{\partial \vartheta }{\partial t},
$$

where ${\bar{\chi }}_{i} = \operatorname{diag}\left\{ {\frac{{\chi }_{i,1}}{1 + {\varrho }_{i,1}^{2}},\cdots ,\frac{{\chi }_{i,n}}{1 + {\varrho }_{i,n}^{2}}}\right\} ,{\nu }_{i} = \left\lbrack {{\nu }_{i,1},\cdots ,{\nu }_{i,n}}\right\rbrack$ and ${\chi }_{i} = \operatorname{diag}\left\{ {{\chi }_{i,1},\cdots ,{\chi }_{i,n}}\right\}$ .

The corresponding HJB equation is given as
$$
\mathop{\min }\limits_{{u}_{i}}{H}_{i}\left( {{\mathcal{A}}_{i},\vartheta ,{u}_{i}^{ * },\frac{\partial {J}_{i}^{ * }}{\partial {\mathcal{A}}_{i}},\frac{\partial {J}_{i}^{ * }}{\partial \vartheta }}\right) = 0. \tag{10}
$$

Differentiating (10) with respect to ${u}_{i}$ , one has
$$
{u}_{i}^{ * } = - \frac{{\Gamma }_{i}}{2}{\mathcal{R}}_{i}^{-1}{g}_{i}^{T}{\chi }_{i}^{T}\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}}. \tag{11}
$$

Substituting (11) into (10), (10) becomes
$$
{\left( \frac{2\vartheta }{\pi }{\mathcal{A}}_{i}\right) }^{T}{\mathcal{Q}}_{i}\left( {\frac{2\vartheta }{\pi }{\mathcal{A}}_{i}}\right) + \frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}}\left\lbrack {{\chi }_{i}\left( {{\Gamma }_{i}{f}_{i} - \mathop{\sum }\limits_{{j = 1}}^{N}{a}_{ij}{\dot{x}}_{j} - {b}_{i}{\dot{x}}_{0}^{h} - {\nu }_{i}}\right) }\right\rbrack
$$

$$
+ \frac{\partial {J}_{i}^{ * }}{\partial \vartheta }\frac{\partial \vartheta }{\partial t} - \frac{{\Gamma }_{i}^{2}}{4}\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}^{T}}{g}_{i}{\chi }_{i}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{T}{g}_{i}^{T}\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}} = 0.
$$

Inspired by [26], $\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}}$ can be segmented as
$$
\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}} = \frac{2{k}_{i}}{{\Gamma }_{i}}{\chi }_{i}^{-2}{\varrho }_{i} + \frac{2}{{\Gamma }_{i}}{\chi }_{i}^{-2}{\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right) + \frac{1}{{\Gamma }_{i}}{\chi }_{i}^{-2}{\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right) , \tag{12}
$$

where ${k}_{i} > 0,{\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right) = {\mathcal{R}}_{i}{\chi }_{i}\left( {{f}_{i}\left( {x}_{i}\right) - {\dot{x}}_{0}^{h} - {o}^{-1}{\nu }_{i}}\right)$ with $o = {\lambda }_{\max }\left( {\mathcal{L} + \mathcal{B}}\right)$ , and ${\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right) = - 2{k}_{i}{\varrho }_{i}^{2} - 2{\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right) + {k}_{i}{\chi }_{i}^{2}\frac{\partial {J}_{i}^{ * }}{\partial {\varrho }_{i}}.$

Substituting (12) into (11), one has
$$
{u}_{i}^{ * } = - {k}_{i}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}{\varrho }_{i} - {\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}{\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right) - \frac{1}{2}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}{\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right) . \tag{13}
$$

§ C. PI ALGORITHM AND FLSS-BASED IMPLEMENTATION

Obviously, the solution of the HJB equation cannot be obtained analytically. Therefore, the policy iteration (PI) approach is given in Algorithm 1 to find the optimal result.
Algorithm 1: PI Algorithm for Solving PT Optimal Consensus Control Policy

1 Step 1: Initialization. Give initial control protocols ${u}_{i}^{\left( 0\right) },\forall i$ , and set $l = 0$ .

2 Step 2: Policy evaluation. Solve the cost function ${J}_{i}^{\left( l\right) }$ from ${H}_{i}\left( {{\mathcal{A}}_{i},\vartheta ,{u}_{i}^{\left( l\right) },\frac{\partial {J}_{i}^{\left( l\right) }}{\partial {\mathcal{A}}_{i}},\frac{\partial {J}_{i}^{\left( l\right) }}{\partial \vartheta }}\right) = 0$ .

3 Step 3: Policy improvement. Update the control input ${u}_{i}^{\left( l + 1\right) }$ as in Eq. (13).

4 Step 4: If $\begin{Vmatrix}{{J}_{i}^{\left( l + 1\right) } - {J}_{i}^{\left( l\right) }}\end{Vmatrix} \leq \aleph$ with the predefined parameter $\aleph > 0$ , stop; otherwise, set $l = l + 1$ and return to Step 2.

The convergence and optimality of Algorithm 1 have been proved in [27] and are omitted here.
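The evaluate/improve/stop pattern of Algorithm 1 can be illustrated on a much simpler problem where policy evaluation has a closed form. The scalar discrete-time LQR example below is purely illustrative and is not the paper's MAS setting:

```python
# Illustrative instance of Algorithm 1's loop on scalar LQR: for x' = a*x + b*u
# with stage cost q*x^2 + r*u^2, the policy u = -k*x has value P*x^2 where
# P = q + r*k^2 + (a - b*k)^2 * P  (policy evaluation, closed form here),
# and improvement sets k = b*P*a / (r + b^2*P).
a, b, q, r = 0.9, 1.0, 1.0, 1.0
k = 0.5                       # Step 1: initial stabilizing policy (|a - b*k| < 1)
P_prev = float("inf")
for _ in range(100):
    # Step 2: policy evaluation
    P = (q + r * k * k) / (1.0 - (a - b * k) ** 2)
    # Step 3: policy improvement
    k = b * P * a / (r + b * b * P)
    # Step 4: stop when the value function no longer changes
    if abs(P - P_prev) <= 1e-10:
        break
    P_prev = P

# At convergence P satisfies the discrete-time Riccati equation.
residual = P - (q + a * a * P - (a * b * P) ** 2 / (r + b * b * P))
```

In the paper's setting, Step 2 has no closed form and is realized instead through the FLS approximation introduced next.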
In view of the unknown terms ${\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right)$ and ${\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right)$ in (13), FLSs are used to approximate these terms as
$$
{\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right) = {\omega }_{{\mathcal{F}}_{i}}^{T}{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) + {\epsilon }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) , \tag{14}
$$

$$
{\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right) = {\omega }_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + {\epsilon }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) , \tag{15}
$$

where ${\omega }_{{\mathcal{F}}_{i}} \in {\mathbb{R}}^{{h}_{c1} \times n}$ and ${\omega }_{{\mathcal{J}}_{i}} \in {\mathbb{R}}^{{h}_{c2} \times n}$ represent the ideal weight matrices, with ${h}_{c1}$ and ${h}_{c2}$ being the numbers of fuzzy rules; ${\phi }_{{\mathcal{F}}_{i}} \in {\mathbb{R}}^{{h}_{c1}}$ and ${\phi }_{{\mathcal{J}}_{i}} \in {\mathbb{R}}^{{h}_{c2}}$ are fuzzy basis functions; ${\epsilon }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right)$ and ${\epsilon }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right)$ denote bounded approximation errors.
Thus, (13) becomes

$$
{u}_{i}^{ * } = - {k}_{i}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}{\varrho }_{i} - {\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}\left( {{\omega }_{{\mathcal{F}}_{i}}^{T}{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) + {\epsilon }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right)
$$

$$
- \frac{1}{2}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}\left( {{\omega }_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + {\epsilon }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right) .
$$

However, since ${\omega }_{{\mathcal{F}}_{i}}$ and ${\omega }_{{\mathcal{J}}_{i}}$ are unknown, the estimated forms of (14) and (15) are
$$
{\widehat{\mathcal{F}}}_{i}\left( {\mathcal{X}}_{i}\right) = {\widehat{\omega }}_{{\mathcal{F}}_{i}}^{T}{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) , \tag{16}
$$

$$
{\widehat{\mathcal{J}}}_{i}\left( {\mathcal{X}}_{i}\right) = {\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) , \tag{17}
$$

where ${\widehat{\omega }}_{{\mathcal{F}}_{i}} \in {\mathbb{R}}^{{h}_{c1} \times n}$ and ${\widehat{\omega }}_{{\mathcal{J}}_{i}} \in {\mathbb{R}}^{{h}_{c2} \times n}$ represent the estimated weight matrices.

According to (16) and (17), one has
$$
{\widehat{u}}_{i}^{ * } = - {k}_{i}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}{\varrho }_{i} - {\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}\left( {{\widehat{\omega }}_{{\mathcal{F}}_{i}}^{T}{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right) - \frac{1}{2}{\mathcal{R}}_{i}^{-1}{\chi }_{i}^{-1}\left( {{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right) . \tag{18}
$$

The updating laws are constructed as
$$
{\dot{\widehat{\omega }}}_{{\mathcal{F}}_{i}} = {\mathcal{C}}_{i}\left( {o{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) {\varrho }_{i}^{T}{\mathcal{R}}_{i}^{-1} - {r}_{{\mathcal{F}}_{i}}{\widehat{\omega }}_{{\mathcal{F}}_{i}}}\right) , \tag{19}
$$

$$
{\dot{\widehat{\omega }}}_{{\mathcal{J}}_{i}} = - {r}_{{\mathcal{J}}_{i}}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}}, \tag{20}
$$

where ${\mathcal{C}}_{i} \in {\mathbb{R}}^{{h}_{c1} \times {h}_{c1}}$ is a positive-definite matrix, and ${r}_{{\mathcal{F}}_{i}} > 0,{r}_{{\mathcal{J}}_{i}} > 0,r > 0$ are design parameters.
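The behavior of the adaptive laws (19)-(20) can be sketched with a forward-Euler integration. All dimensions, gains, and the constant basis outputs below are illustrative placeholders, not the paper's design:

```python
import numpy as np

# Euler-integration sketch of the weight update laws (19)-(20). With the
# transformed error rho held at zero, both laws reduce to pure leakage
# (sigma-modification), so the estimated weights decay toward zero.
rng = np.random.default_rng(0)
h_c, n = 4, 2                     # placeholder fuzzy-rule count, state dimension
C = np.eye(h_c)                   # positive-definite gain C_i
o, r_F, r_J, r = 1.0, 0.5, 0.5, 0.1
R_inv = np.eye(n)                 # R_i^{-1}
w_F = rng.standard_normal((h_c, n))
w_J = rng.standard_normal((h_c, n))
dt = 0.01
for _ in range(5000):
    phi_F = np.ones(h_c) / h_c    # placeholder fuzzy basis outputs
    phi_J = np.ones(h_c) / h_c
    rho = np.zeros(n)             # transformed error, zero here for brevity
    # Eq. (19): dw_F = C (o * phi_F rho^T R^{-1} - r_F w_F)
    w_F += dt * C @ (o * np.outer(phi_F, rho) @ R_inv - r_F * w_F)
    # Eq. (20): dw_J = -r_J (phi_J . phi_J + r) w_J  (inner product is scalar)
    w_J += dt * (-r_J * (phi_J @ phi_J + r) * w_J)
```

The leakage terms ${r}_{{\mathcal{F}}_{i}}$ and $r$ are what keep the weight estimates bounded in the Lyapunov analysis of Section IV.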
§ IV. STABILITY ANALYSIS

Theorem 1. Consider the MAS consisting of the followers (1) and the leader (2) under Assumptions 1 and 2. By choosing ${k}_{i} > \frac{3}{4}$ and adopting the optimal control input (18) together with the adaptive laws (19) and (20), the consensus error converges to the prescribed accuracy within the prescribed time.

Proof. Develop the Lyapunov function as
$$
V = \frac{1}{2}{\varrho }^{T}\varrho + \frac{1}{2}\mathop{\sum }\limits_{{i = 1}}^{N}\left( {{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\mathcal{C}}_{i}^{-1}{\widetilde{\omega }}_{{\mathcal{F}}_{i}} + {\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}}\right) \tag{21}
$$

where $\varrho = {\left\lbrack {\varrho }_{1}^{T},\cdots ,{\varrho }_{N}^{T}\right\rbrack }^{T} \in {\mathbb{R}}^{Nn}$ , and the estimation errors are ${\widetilde{\omega }}_{{\mathcal{F}}_{i}} = {\omega }_{{\mathcal{F}}_{i}} - {\widehat{\omega }}_{{\mathcal{F}}_{i}}$ and ${\widetilde{\omega }}_{{\mathcal{J}}_{i}} = {\omega }_{{\mathcal{J}}_{i}} - {\widehat{\omega }}_{{\mathcal{J}}_{i}}$ . Invoking (5), (19) and (20), it yields
$$
\dot{V} = {\varrho }^{T}\left\lbrack {\chi \left( {\mathcal{L} + \mathcal{B}}\right) \dot{e} - {\chi \nu }}\right\rbrack - \mathop{\sum }\limits_{{i = 1}}^{N}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}\left( {o{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) {\varrho }_{i}^{T}{\mathcal{R}}_{i}^{-1} - {r}_{{\mathcal{F}}_{i}}{\widehat{\omega }}_{{\mathcal{F}}_{i}}}\right)
$$

$$
+ \mathop{\sum }\limits_{{i = 1}}^{N}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}{r}_{{\mathcal{J}}_{i}}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}}
$$

$$
\leq \mathop{\sum }\limits_{{i = 1}}^{N}{\varrho }_{i}^{T}o\left( {-{k}_{i}{\mathcal{R}}_{i}^{-1}{\varrho }_{i} - {\mathcal{R}}_{i}^{-1}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) + {\mathcal{R}}_{i}^{-1}{\epsilon }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) - \frac{1}{2}{\mathcal{R}}_{i}^{-1}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right)
$$

$$
- \mathop{\sum }\limits_{{i = 1}}^{N}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}\left( {o{\phi }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) {\varrho }_{i}^{T}{\mathcal{R}}_{i}^{-1} - {r}_{{\mathcal{F}}_{i}}{\widehat{\omega }}_{{\mathcal{F}}_{i}}}\right) + \mathop{\sum }\limits_{{i = 1}}^{N}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}{r}_{{\mathcal{J}}_{i}}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}}
$$

$$
\leq \mathop{\sum }\limits_{{i = 1}}^{N}{\varrho }_{i}^{T}o\left( {-{k}_{i}{\mathcal{R}}_{i}^{-1}{\varrho }_{i} + {\mathcal{R}}_{i}^{-1}{\epsilon }_{{\mathcal{F}}_{i}}\left( {\mathcal{X}}_{i}\right) - \frac{{\mathcal{R}}_{i}^{-1}}{2}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right)
$$

$$
+ \mathop{\sum }\limits_{{i = 1}}^{N}{r}_{{\mathcal{F}}_{i}}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\widehat{\omega }}_{{\mathcal{F}}_{i}} + \mathop{\sum }\limits_{{i = 1}}^{N}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}{r}_{{\mathcal{J}}_{i}}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}}. \tag{22}
$$
Using Young's inequality, we have
$$
o{\varrho }_{i}^{T}{\mathcal{R}}_{i}^{-1}{\epsilon }_{{\mathcal{F}}_{i}} \leq \frac{o}{2}{\mathcal{R}}_{i}^{-1}{\begin{Vmatrix}{\varrho }_{i}\end{Vmatrix}}^{2} + \frac{o}{2}{\mathcal{R}}_{i}^{-1}{\begin{Vmatrix}{\epsilon }_{{\mathcal{F}}_{i}}\end{Vmatrix}}^{2}, \tag{23}
$$

$$
- \frac{o{\mathcal{R}}_{i}^{-1}}{2}{\varrho }_{i}^{T}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) \leq \frac{o{\mathcal{R}}_{i}^{-1}}{4}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}} + \frac{o{\mathcal{R}}_{i}^{-1}}{4}{\begin{Vmatrix}{\varrho }_{i}\end{Vmatrix}}^{2}, \tag{24}
$$
$$
{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\widehat{\omega }}_{{\mathcal{F}}_{i}} \leq - \frac{1}{2}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\widetilde{\omega }}_{{\mathcal{F}}_{i}} + \frac{1}{2}{\omega }_{{\mathcal{F}}_{i}}^{T}{\omega }_{{\mathcal{F}}_{i}}, \tag{25}
$$

$$
{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}} \leq - \frac{{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}}{2}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widetilde{\omega }}_{{\mathcal{J}}_{i}} + \frac{{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}}{2}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}}. \tag{26}
$$
Substituting (23)-(26) into (22), one has
$$
\dot{V} \leq - \mathop{\sum }\limits_{{i = 1}}^{N}o{\mathcal{R}}_{i}^{-1}\left( {{k}_{i} - \frac{3}{4}}\right) {\begin{Vmatrix}{\varrho }_{i}\end{Vmatrix}}^{2} - \mathop{\sum }\limits_{{i = 1}}^{N}\frac{{r}_{{\mathcal{F}}_{i}}}{2}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}
$$

$$
- \mathop{\sum }\limits_{{i = 1}}^{N}\frac{{r}_{{\mathcal{J}}_{i}}}{2}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widetilde{\omega }}_{{\mathcal{J}}_{i}} + \Lambda
$$

$$
\leq - \frac{{\kappa }_{1}}{2}\mathop{\sum }\limits_{{i = 1}}^{N}{\begin{Vmatrix}{\varrho }_{i}\end{Vmatrix}}^{2} - \frac{{\kappa }_{2}}{2}\mathop{\sum }\limits_{{i = 1}}^{N}{\widetilde{\omega }}_{{\mathcal{F}}_{i}}^{T}{\mathcal{C}}_{i}^{-1}{\widetilde{\omega }}_{{\mathcal{F}}_{i}} - \frac{{\kappa }_{3}}{2}\mathop{\sum }\limits_{{i = 1}}^{N}{\widetilde{\omega }}_{{\mathcal{J}}_{i}}^{T}{\widetilde{\omega }}_{{\mathcal{J}}_{i}} + \Lambda
$$

$$
\leq - {\kappa V} + \Lambda , \tag{27}
$$
where $\;\Lambda \; = \;\mathop{\sum }\limits_{{j = 1}}^{N}\frac{o}{2}{\mathcal{R}}_{i}^{-1}{\begin{Vmatrix}{\epsilon }_{{\mathcal{F}}_{i}}\end{Vmatrix}}^{2} +$ $\left. {\left. {\mathop{\sum }\limits_{{j = 1}}^{N}\frac{o{\mathcal{R}}_{i}^{-1}}{4}{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}{\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) }\right) {\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) }\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}} + \mathop{\sum }\limits_{{j = 1}}^{N}\frac{o{\mathcal{R}}_{i}^{-1}}{4}{\begin{Vmatrix}{\varrho }_{i}\end{Vmatrix}}^{2} +$ $\mathop{\sum }\limits_{{j = 1}}^{N}\frac{{r}_{{\mathcal{F}}_{i}}}{2}{\omega }_{{\mathcal{F}}_{i}}^{T}{\omega }_{{\mathcal{F}}_{i}} + \mathop{\sum }\limits_{{j = 1}}^{N}\frac{{\widehat{\omega }}_{{\mathcal{J}}_{i}}^{T}}{2}\left( {{\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right) + r{\mathcal{I}}_{{h}_{c2}}}\right) {\widehat{\omega }}_{{\mathcal{J}}_{i}},$ ${\kappa }_{1}\; = \;\mathop{\min }\limits_{{i = 1,\cdots ,N}}\left\{ {{2o}{\mathcal{R}}_{i}^{-1}\left( \begin{array}{lll} {k}_{i} & - & \frac{3}{4} \end{array}\right) }\right\} ,\;{\kappa }_{2}\; =$ $\mathop{\min }\limits_{{i = 1,\cdots ,N}}\left\{ \frac{{r}_{{\mathcal{F}}_{i}}}{{\lambda }_{\max }\left( {\mathcal{C}}_{i}^{-1}\right) }\right\} ,{\kappa }_{3} = \mathop{\min }\limits_{{i = 1,\cdots ,N}}\left\{ {{r}_{{\mathcal{J}}_{i}}{\lambda }_{\min }\left( {\phi }_{i}\right) }\right\} ,$ $\kappa = \min \left\{ {{\kappa }_{1},{\kappa }_{2},{\kappa }_{3}}\right\} ,{\lambda }_{\min }\left( {\phi }_{i}\right)$ is the minimal eigenvalue of ${\phi }_{{\mathcal{J}}_{i}}^{T}\left( {\mathcal{X}}_{i}\right) {\phi }_{{\mathcal{J}}_{i}}\left( {\mathcal{X}}_{i}\right)$ .
§ V. SIMULATION

A nonlinear MAS composed of four single-link robot arms (three followers and one human-controlled leader) is used to verify the effectiveness of the proposed control scheme. The model of agent $i$ is given as [12]
$$
{J}_{i}{\ddot{q}}_{i} + {D}_{i}{\dot{q}}_{i} + {M}_{i}g{d}_{i}\sin \left( {q}_{i}\right) = {u}_{i},\quad i = 1,\cdots ,3,
$$
where the physical parameters $g,{M}_{i},{D}_{i},{J}_{i}$, and ${d}_{i}$ can be found in [12]. The human command signal ${u}_{0}^{h}$ is set as
$$
{u}_{0}^{h} = \left\{ \begin{array}{ll} {0.3}\sin^{2}\left( t\right) , & 0 \leq t < {15} \\ 0, & {15} \leq t < {30} \\ \sin \left( t\right) \cos \left( t\right) , & {30} \leq t \leq {50}. \end{array}\right.
$$
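As a sanity check on the setup above, the follower dynamics and the piecewise human command can be simulated with a simple Euler scheme. This is a minimal sketch: the physical parameter values below are hypothetical placeholders, since the actual values are only listed in [12].

```python
import numpy as np

# HYPOTHETICAL physical parameters; the actual values are given in [12].
J, D, M, g, d = 1.0, 2.0, 1.0, 9.8, 1.0

def u0h(t):
    """Human command signal u_0^h, defined piecewise on [0, 50]."""
    if 0 <= t < 15:
        return 0.3 * np.sin(t) ** 2
    elif 15 <= t < 30:
        return 0.0
    else:
        return np.sin(t) * np.cos(t)

def euler_step(q, dq, u, dt=1e-3):
    """One Euler step of J*ddq + D*dq + M*g*d*sin(q) = u."""
    ddq = (u - D * dq - M * g * d * np.sin(q)) / J
    return q + dt * dq, dq + dt * ddq
```

Iterating `euler_step` with `u = u0h(t)` reproduces the open-loop response of one arm; the controller ${u}_{i}$ of Section IV would replace the raw command in closed loop.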
The communication graph is shown in Fig. 1.

Fig. 1: Communication graph.
As shown in Fig. 1, the signed adjacency matrix and the corresponding Laplacian matrix are

$$
\mathcal{A} = \left\lbrack \begin{matrix} 0 & - 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{matrix}\right\rbrack ,\quad \mathcal{L} = \left\lbrack \begin{matrix} 2 & 1 & - 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{matrix}\right\rbrack ,
$$
and $\mathcal{B} = \operatorname{diag}\{ 1,0,0\}$.

For the PT performance function, select ${\vartheta }_{{T}_{r}} = {0.06}$ and ${T}_{r} = 3\,\mathrm{s}$. The initial state values of the followers and the leader are presented in Table I.
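The Laplacian above follows from the signed-graph construction $\mathcal{L} = \mathcal{D} - \mathcal{A}$ with $\mathcal{D} = \operatorname{diag}\left( \sum_{j}\left| {a}_{ij}\right| \right)$. A quick NumPy check of that construction (with the pinning matrix written as $3 \times 3$ for the three followers):

```python
import numpy as np

# Signed adjacency matrix read off Fig. 1; negative weights encode antagonism.
A = np.array([[0., -1., 1.],
              [0.,  0., 0.],
              [0.,  0., 0.]])

# Signed Laplacian: L = diag(sum_j |a_ij|) - A.
Lap = np.diag(np.abs(A).sum(axis=1)) - A

# Leader pinning gains: only follower 1 receives the leader's signal.
B = np.diag([1., 0., 0.])
```

Running this reproduces exactly the $\mathcal{L}$ given above, confirming the adjacency entries.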
TABLE I: Initial state values of followers and leader.

| State | $i = 0$ | $i = 1$ | $i = 2$ | $i = 3$ |
| --- | --- | --- | --- | --- |
| ${x}_{i,1}\left( 0\right)$ | 1 | 0.8 | 0.5 | 0.8 |
| ${x}_{i,2}\left( 0\right)$ | -1 | 0.8 | -0.5 | -0.8 |
For the unknown term ${\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right)$, the input ${\mathcal{X}}_{i} = {\left\lbrack {x}_{i},{x}_{0}^{h},{\dot{x}}_{0}^{h},\vartheta ,\dot{\vartheta }\right\rbrack }^{T}$ is defined over $\left\lbrack {-6,6}\right\rbrack$. Choose the centers ${\mathcal{X}}_{i}^{0} = {\left\lbrack \underset{5}{\underbrace{{\left\lbrack -6 - \ell , - 6 + \ell \right\rbrack }^{T},\cdots ,{\left\lbrack -6 - \ell , - 6 + \ell \right\rbrack }^{T}}}\right\rbrack }^{T}$ and the Gaussian basis function ${\phi }_{{\mathcal{F}}_{i}}^{\ell }\left( {\mathcal{X}}_{i}\right) = \exp \left( {-\frac{{\left( {\mathcal{X}}_{i} - {\mathcal{X}}_{i}^{0}\right) }^{T}\left( {{\mathcal{X}}_{i} - {\mathcal{X}}_{i}^{0}}\right) }{2}}\right)$.
For the unknown term ${\mathcal{J}}_{i}\left( {\mathcal{X}}_{i}\right)$, the input ${\mathcal{X}}_{i} = {\left\lbrack {x}_{i},{\varrho }_{i},{x}_{0}^{h},{\dot{x}}_{0}^{h},\vartheta ,\dot{\vartheta }\right\rbrack }^{T}$ is defined over $\left\lbrack {-6,6}\right\rbrack$. Choose the centers ${\mathcal{X}}_{i}^{0} = {\left\lbrack \underset{6}{\underbrace{{\left\lbrack -6 - \ell , - 6 + \ell \right\rbrack }^{T},\cdots ,{\left\lbrack -6 - \ell , - 6 + \ell \right\rbrack }^{T}}}\right\rbrack }^{T}$ and the Gaussian basis function ${\phi }_{{\mathcal{J}}_{i}}^{\ell }\left( {\mathcal{X}}_{i}\right) = \exp \left( {-\frac{{\left( {\mathcal{X}}_{i} - {\mathcal{X}}_{i}^{0}\right) }^{T}\left( {{\mathcal{X}}_{i} - {\mathcal{X}}_{i}^{0}}\right) }{2}}\right)$.
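The two FLS approximators above share the same structure: a vector of Gaussian basis values weighted by an adaptive matrix. A minimal sketch of that evaluation (function names are illustrative, not from the paper):

```python
import numpy as np

def gaussian_basis(X, centers):
    """phi^l(X) = exp(-(X - c_l)^T (X - c_l) / 2), one value per center c_l."""
    X = np.asarray(X, dtype=float)
    return np.array([np.exp(-0.5 * np.dot(X - c, X - c)) for c in centers])

def fls_output(W, phi):
    """FLS approximation W^T phi(X); W has shape (n_basis, n_out)."""
    return W.T @ phi
```

Each basis function peaks at 1 when the input coincides with its center and decays with the squared distance, matching the exponential form given above.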
For updating laws (19) and (20), choose ${\widehat{\omega }}_{{\mathcal{F}}_{1}}\left( 0\right) = {\widehat{\omega }}_{{\mathcal{F}}_{2}}\left( 0\right) = {\widehat{\omega }}_{{\mathcal{F}}_{3}}\left( 0\right) = {\left\lbrack {0.1}\right\rbrack }_{{12} \times 2}$, ${\widehat{\omega }}_{{\mathcal{J}}_{1}}\left( 0\right) = {\widehat{\omega }}_{{\mathcal{J}}_{2}}\left( 0\right) = {\widehat{\omega }}_{{\mathcal{J}}_{3}}\left( 0\right) = {\left\lbrack {0.92}\right\rbrack }_{{12} \times 2}$, ${\mathcal{C}}_{1} = \operatorname{diag}\underset{12}{\underbrace{\{ {0.5},\cdots ,{0.5}\} }}$, ${\mathcal{C}}_{2} = \operatorname{diag}\underset{12}{\underbrace{\{ {0.7},\cdots ,{0.7}\} }}$, ${\mathcal{C}}_{3} = \operatorname{diag}\underset{12}{\underbrace{\{ {0.3},\cdots ,{0.3}\} }}$, ${\mathcal{R}}_{i} = \operatorname{diag}\{ {0.8},{0.8}\}$, ${r}_{{\mathcal{F}}_{i}} = 2$, ${k}_{i} = {45}$, and ${r}_{{\mathcal{J}}_{i}} = 1$.
Fig. 2: Curves of ${\widetilde{x}}_{i,1}$, ${x}_{0,1}^{h}$ and $- {x}_{0,1}^{h}$.

Fig. 3: Curves of ${\widetilde{x}}_{i,2}$, ${x}_{0,2}^{h}$ and $- {x}_{0,2}^{h}$.

Fig. 4: Curves of errors and performance bounds.

Fig. 5: Curves of optimal control input.

Fig. 6: Curves of $\begin{Vmatrix}{\omega }_{{\mathcal{F}}_{i}}\end{Vmatrix}$.
From Figs. 2 and 3, bipartite consensus is achieved: the leader and followers 1 and 2 belong to one group, while follower 3 converges to the other group with the opposite sign. Fig. 4 shows the bipartite consensus errors together with the PT performance bounds; the consensus errors reach the given accuracy 0.06 within the prescribed time of $3\,\mathrm{s}$. The optimal control input of each agent is depicted in Fig. 5, where ${u}_{i}$ rapidly converges to a small region around zero. The norms of the updating weights for the unknown terms ${\mathcal{F}}_{i}\left( {\mathcal{X}}_{i}\right)$ are given in Fig. 6.
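The reported behavior (errors within 0.06 by $t = 3\,\mathrm{s}$) is what a prescribed-time performance envelope enforces. A minimal sketch of one common envelope shape, assuming a polynomial decay (the exact form of $\vartheta$ and the initial bound `THETA_0` are hypothetical; only ${\vartheta }_{{T}_{r}} = {0.06}$ and ${T}_{r} = 3$ come from the paper):

```python
# Hypothetical PT performance envelope: decays from an ASSUMED initial bound
# THETA_0 to the terminal accuracy THETA_TR exactly at t = T_R.
THETA_0, THETA_TR, T_R = 1.5, 0.06, 3.0

def pt_bound(t):
    """Bound on |error|: strictly decreasing on [0, T_R), constant afterwards."""
    if t >= T_R:
        return THETA_TR
    return (THETA_0 - THETA_TR) * ((T_R - t) / T_R) ** 2 + THETA_TR
```

Unlike exponential performance functions, the envelope hits its terminal value at the finite instant ${T}_{r}$ regardless of the initial condition, which is what "prescribed time" refers to.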
§ VI. CONCLUSION
In this article, the problem of performance-based HiTL optimal bipartite consensus control for nonlinear MASs has been studied. First, the MASs are monitored by a human operator who sends command signals to the non-autonomous leader, so as to respond to emergencies and guarantee the safety of the MASs. Then, under the joint design of the prescribed-time performance function and the error transformation, a novel performance index function has been developed to achieve optimal bipartite consensus within the prescribed time. Subsequently, RL has been utilized to learn the solution to the HJB equation, and FLSs are employed to implement the algorithm. The validity of the designed control scheme has been confirmed by simulation.